
POST /v1/completions

Creates a completion for the given prompt and model.

Request body

model
string
required
The model ID to use. See the models page for available models. Example: openai/gpt-oss-120b
prompt
string
required
The prompt to generate a completion for.
max_tokens
integer
Maximum number of tokens to generate. Defaults to the model’s maximum.
temperature
number
Sampling temperature between 0 and 2. Defaults to 1.
top_p
number
Nucleus sampling parameter. Defaults to 1.
stream
boolean
If true, responses are sent as server-sent events. Defaults to false.
stop
string | array
Up to 4 sequences at which the model will stop generating further tokens.
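
A sketch of assembling and validating this request body client-side before sending (Python is used for illustration; the helper name and the choice to omit defaulted fields are not part of the API):

```python
import json

def build_completion_request(model, prompt, max_tokens=None,
                             temperature=1.0, top_p=1.0,
                             stream=False, stop=None):
    """Assemble a /v1/completions request body, omitting unset optional fields."""
    if not (0 <= temperature <= 2):
        raise ValueError("temperature must be between 0 and 2")
    if stop is not None and not isinstance(stop, str) and len(stop) > 4:
        raise ValueError("stop accepts at most 4 sequences")
    body = {"model": model, "prompt": prompt}
    if max_tokens is not None:
        body["max_tokens"] = max_tokens
    if temperature != 1.0:
        body["temperature"] = temperature
    if top_p != 1.0:
        body["top_p"] = top_p
    if stream:
        body["stream"] = True
    if stop is not None:
        body["stop"] = stop
    return json.dumps(body)

# Mirrors the curl example below.
payload = build_completion_request("openai/gpt-oss-120b",
                                   "The capital of France is",
                                   max_tokens=50)
```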

Example request

curl https://api.inducta.ai/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $INDUCTA_API_KEY" \
  -d '{
    "model": "openai/gpt-oss-120b",
    "prompt": "The capital of France is",
    "max_tokens": 50
  }'

Example response

{
  "id": "cmpl-abc123",
  "object": "text_completion",
  "created": 1709000000,
  "model": "openai/gpt-oss-120b",
  "choices": [
    {
      "index": 0,
      "text": " Paris, which is also the largest city in France.",
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 7,
    "completion_tokens": 12,
    "total_tokens": 19
  }
}
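
A minimal sketch of reading the non-streaming response shape above (field names are taken from the example response; error handling is omitted):

```python
import json

# The example response body from above, verbatim.
raw = '''{
  "id": "cmpl-abc123",
  "object": "text_completion",
  "created": 1709000000,
  "model": "openai/gpt-oss-120b",
  "choices": [
    {"index": 0,
     "text": " Paris, which is also the largest city in France.",
     "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 7, "completion_tokens": 12, "total_tokens": 19}
}'''

resp = json.loads(raw)
# The first choice carries the generated text; finish_reason "stop"
# indicates the model ended naturally or hit a stop sequence.
text = resp["choices"][0]["text"]
total = resp["usage"]["total_tokens"]
```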

Streaming

Set stream: true to receive responses as server-sent events.

curl https://api.inducta.ai/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $INDUCTA_API_KEY" \
  -d '{
    "model": "openai/gpt-oss-120b",
    "prompt": "Hello",
    "max_tokens": 50,
    "stream": true
  }'
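
The exact chunk format is not shown above. Assuming the common SSE convention of `data: <json>` lines with the same choices[].text shape as the non-streaming response, terminated by `data: [DONE]` (both conventions are assumptions, not documented here), a minimal consumer might look like:

```python
import json

def iter_sse_text(lines):
    """Yield completion text fragments from 'data:' SSE lines.

    Assumes each event carries the same choices[].text shape as the
    non-streaming response and that the stream ends with [DONE];
    both are assumptions about this API, not documented behavior.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alives and comment lines
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            return
        chunk = json.loads(data)
        yield chunk["choices"][0]["text"]

# Simulated stream lines, standing in for the HTTP response body:
stream = [
    'data: {"choices": [{"index": 0, "text": "Hello"}]}',
    'data: {"choices": [{"index": 0, "text": " there"}]}',
    'data: [DONE]',
]
completion = "".join(iter_sse_text(stream))
```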