
Overview

The Models API provides an endpoint to list all machine learning and language models available in the runtime, compatible with the OpenAI models API format.

List Models

GET /v1/models
Returns a list of all models (both ML and LLM) available in the runtime.

Query Parameters

status
boolean
default:false
When true, include the current status of each model in the response. The status field takes one of:
  • ready - Model is ready for inference
  • loading - Model is being loaded
  • error - Model encountered an error
format
string
default:"json"
Response format: json or csv
metadata_fields
string
Comma-separated list of metadata fields to include. Available fields:
  • supports_responses_api - Whether the model supports the responses API
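The query parameters above compose in the usual way. A minimal sketch of building the request URL, assuming the runtime's default HTTP endpoint at localhost:8090 (adjust for your configuration):

```python
from urllib.parse import urlencode

# Assumed base URL; change host/port to match your runtime configuration.
BASE_URL = "http://localhost:8090/v1/models"

def build_models_url(status=False, fmt="json", metadata_fields=None):
    """Compose a /v1/models request URL from the documented query parameters."""
    params = {}
    if status:
        params["status"] = "true"
    if fmt != "json":
        params["format"] = fmt
    if metadata_fields:
        params["metadata_fields"] = ",".join(metadata_fields)
    return f"{BASE_URL}?{urlencode(params)}" if params else BASE_URL

url = build_models_url(status=True, metadata_fields=["supports_responses_api"])
# → http://localhost:8090/v1/models?status=true&metadata_fields=supports_responses_api
```

Note that build_models_url is a hypothetical helper for illustration, not part of any client library.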

Response

object
string
Always "list"
data
array<object>
Array of model information objects.
id
string
The model identifier (e.g., gpt-4, text-embedding-ada-002)
object
string
Always "model"
owned_by
string
The provider or owner of the model (e.g., openai, openai-internal, spiceai)
datasets
array<string>
List of datasets associated with this model (for embedding models). null if no datasets.
status
string
Current status of the model (only when the status=true query parameter is set)
error
object
Error information (only when status=true; null unless the model's status is error)
category
string
Error category (e.g., model, worker)
type
string
Error type (e.g., auth, connection, loading)
code
string
Stable error code (e.g., model.auth, model.loading)
error_message
string
Human-readable error message (only when status=true; null unless the model's status is error)
metadata
object
Additional model metadata (only when metadata_fields query parameter is set)
supports_responses_api
boolean
Whether the model supports the responses API
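Given the field layout above, a response requested with status=true can be split into healthy and failing models. A sketch over a hand-copied sample payload (no live API call is made here):

```python
# Sample of a /v1/models?status=true response body, copied from the
# documentation examples; a real client would parse this from JSON.
response = {
    "object": "list",
    "data": [
        {"id": "gpt-4", "object": "model", "owned_by": "openai",
         "datasets": None, "status": "ready", "error": None, "error_message": None},
        {"id": "text-embedding-ada-002", "object": "model", "owned_by": "openai-internal",
         "datasets": ["text-dataset-1", "text-dataset-2"], "status": "error",
         "error": {"category": "model", "type": "auth", "code": "model.auth"},
         "error_message": "Invalid API key"},
    ],
}

# Models ready for inference.
ready = [m["id"] for m in response["data"] if m.get("status") == "ready"]

# Failing models, keyed by id with their human-readable message.
failed = {m["id"]: m["error_message"]
          for m in response["data"] if m.get("status") == "error"}
```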

Response Example (JSON)

{
  "object": "list",
  "data": [
    {
      "id": "gpt-4",
      "object": "model",
      "owned_by": "openai",
      "datasets": null,
      "status": "ready",
      "error": null,
      "error_message": null
    },
    {
      "id": "text-embedding-ada-002",
      "object": "model",
      "owned_by": "openai-internal",
      "datasets": ["text-dataset-1", "text-dataset-2"],
      "status": "error",
      "error": {
        "category": "model",
        "type": "auth",
        "code": "model.auth"
      },
      "error_message": "Invalid API key"
    }
  ]
}

Response Example (CSV)

id,object,owned_by,datasets,status,error,error_message
gpt-4,model,openai,,ready,,
text-embedding-ada-002,model,openai-internal,"text-dataset-1,text-dataset-2",error,model.auth,Invalid API key
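In the CSV form, the datasets cell is quoted because it holds its own comma-separated list. A sketch of parsing it with Python's standard csv module, using the example body above:

```python
import csv
import io

# CSV body copied from the documentation example above.
CSV_BODY = """id,object,owned_by,datasets,status,error,error_message
gpt-4,model,openai,,ready,,
text-embedding-ada-002,model,openai-internal,"text-dataset-1,text-dataset-2",error,model.auth,Invalid API key
"""

# DictReader handles the quoted datasets cell correctly.
rows = list(csv.DictReader(io.StringIO(CSV_BODY)))

# The quoted cell is a nested comma-separated list; split it ourselves.
datasets = rows[1]["datasets"].split(",") if rows[1]["datasets"] else []
```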

Response with Metadata

When requesting metadata_fields=supports_responses_api:
{
  "object": "list",
  "data": [
    {
      "id": "gpt-4",
      "object": "model",
      "owned_by": "openai",
      "datasets": null,
      "status": "ready",
      "error": null,
      "error_message": null,
      "metadata": {
        "supports_responses_api": true
      }
    },
    {
      "id": "text-embedding-ada-002",
      "object": "model",
      "owned_by": "openai-internal",
      "datasets": ["text-dataset-1", "text-dataset-2"],
      "status": "ready",
      "error": null,
      "error_message": null,
      "metadata": {
        "supports_responses_api": false
      }
    }
  ]
}
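A response with metadata attached makes capability filtering straightforward. A sketch selecting the models whose metadata reports responses-API support, using the data shape from the example above:

```python
# The "data" array from a response requested with
# metadata_fields=supports_responses_api (sample values from the docs).
data = [
    {"id": "gpt-4", "metadata": {"supports_responses_api": True}},
    {"id": "text-embedding-ada-002", "metadata": {"supports_responses_api": False}},
]

# Keep ids whose metadata flags responses-API support; .get() guards
# against entries where metadata was not requested.
responses_capable = [m["id"] for m in data
                     if m.get("metadata", {}).get("supports_responses_api")]
```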

Status Codes

  • 200 OK - Models retrieved successfully
  • 500 Internal Server Error - App not initialized or unexpected error

Error Response (500)

App not initialized
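Clients only need to distinguish the two documented status codes. A minimal sketch; interpret is a hypothetical helper taking a status code and response body, not a library function:

```python
def interpret(status_code, body_text):
    """Map the documented /v1/models status codes to a result or an error.

    200 returns the body for parsing; 500 surfaces the runtime's error
    text (e.g. "App not initialized") as an exception.
    """
    if status_code == 200:
        return body_text
    if status_code == 500:
        raise RuntimeError(f"Runtime error: {body_text}")
    raise RuntimeError(f"Unexpected status {status_code}")
```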

Examples

List All Models

curl http://localhost:8090/v1/models

List Models with Status

curl "http://localhost:8090/v1/models?status=true"

List Models with Metadata

curl "http://localhost:8090/v1/models?metadata_fields=supports_responses_api"

Get CSV Format

curl "http://localhost:8090/v1/models?format=csv"

Combined Parameters

curl "http://localhost:8090/v1/models?status=true&format=json&metadata_fields=supports_responses_api"

OpenAI Compatibility

This endpoint follows the OpenAI models API format, making it compatible with OpenAI client libraries.

Python:

import openai

client = openai.OpenAI(
    api_key="your-key",
    base_url="http://localhost:8090/v1"
)

# List all models
models = client.models.list()
for model in models.data:
    print(f"{model.id} - {model.owned_by}")

TypeScript:

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'your-key',
  baseURL: 'http://localhost:8090/v1',
});

// List all models
const models = await client.models.list();
models.data.forEach(model => {
  console.log(`${model.id} - ${model.owned_by}`);
});