POST /v1/openai/chat/completions

Example request:
curl --request POST \
  --url https://api.vlm.run/v1/openai/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "id": "<string>",
  "model": "vlm-1",
  "messages": [
    {
      "role": "user",
      "content": "<string>"
    }
  ],
  "max_tokens": 1024,
  "n": 123,
  "temperature": 0.7,
  "top_p": 1,
  "top_k": 123,
  "logprobs": 123,
  "stream": false,
  "response_format": {},
  "domain": "<string>",
  "json_schema": {},
  "metadata": {
    "environment": "dev",
    "session_id": "<string>",
    "allow_training": true
  }
}'

Example response:
{
  "id": "<string>",
  "object": "chat.completion",
  "created": 123,
  "model": "<string>",
  "choices": [
    "<any>"
  ],
  "usage": {
    "completion_tokens": 123,
    "prompt_tokens": 123,
    "total_tokens": 123
  },
  "system_fingerprint": "<string>"
}

Authorizations

Authorization
string, header, required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
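
The bearer header can be assembled programmatically before making the request. A minimal sketch in Python; the VLM_API_KEY environment variable name is an assumption for illustration, not documented on this page:

```python
import os

def auth_headers(token=None):
    # Fall back to an environment variable when no token is passed.
    # VLM_API_KEY is an assumed name, not part of this API reference.
    token = token or os.environ.get("VLM_API_KEY", "")
    return {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
```

These headers match the two `--header` flags in the curl example above.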

Body

application/json

messages
object[], required

id
string

model
string, default: vlm-1

max_tokens
integer, default: 1024

n
integer | null, default: 1

temperature
number, default: 0.7

top_p
number, default: 1

top_k
integer | null

logprobs
integer | null

stream
boolean, default: false

response_format
object

Format of the response.

domain
string | null

Domain of the request.

json_schema
object | null

JSON schema of the request.

metadata
object

Metadata of the request.
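
The body fields above can be assembled as a plain dictionary before serialization. A sketch that applies the documented defaults and includes optional fields only when supplied; the helper name is mine, not part of any SDK:

```python
import json

def build_chat_request(messages, model="vlm-1", max_tokens=1024,
                       temperature=0.7, top_p=1, stream=False, **optional):
    # messages is the only required field; the keyword defaults mirror
    # the documented ones. Optional fields (n, top_k, logprobs, domain,
    # json_schema, metadata, ...) are passed through only when set.
    body = {
        "model": model,
        "messages": messages,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "top_p": top_p,
        "stream": stream,
    }
    body.update({k: v for k, v in optional.items() if v is not None})
    return json.dumps(body)

payload = build_chat_request([{"role": "user", "content": "Hello"}])
```

The resulting string is what the curl example sends via `--data`.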

Response

201, application/json
Successful Response

The response follows the OpenAI chat completion object: https://platform.openai.com/docs/api-reference/chat/object

model
string, required

choices
any[], required

id
string

object
enum<string>, default: chat.completion
Available options: chat.completion, chat.completion.chunk

created
integer

usage
object | null

system_fingerprint
string | null, default: 2024-08-01
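
For a non-streaming response (object is chat.completion), the usage block should satisfy prompt_tokens + completion_tokens == total_tokens, and usage itself is nullable. A defensive-parsing sketch using the field names from the schema above; the sample values are made up:

```python
def summarize_usage(response):
    # usage is object | null, so guard against None before reading it.
    usage = response.get("usage") or {}
    prompt = usage.get("prompt_tokens", 0)
    completion = usage.get("completion_tokens", 0)
    total = usage.get("total_tokens", prompt + completion)
    return f"{response.get('model', '?')}: {prompt}+{completion}={total} tokens"

sample = {
    "model": "vlm-1",
    "object": "chat.completion",
    "usage": {"prompt_tokens": 12, "completion_tokens": 30, "total_tokens": 42},
}
print(summarize_usage(sample))  # vlm-1: 12+30=42 tokens
```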