POST /v1/openai/chat/completions
Our VLM Agents are fully compatible with the OpenAI API. Notably, our API also supports a range of multi-modal data types and features that OpenAI currently does not. Our OpenAI-compatible endpoint is available at https://agent.vlm.run/v1/openai.
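Because the endpoint is OpenAI-compatible, a standard chat-completions request body can be POSTed directly to it. The sketch below builds such a request with only the Python standard library; the token value is a placeholder, and the payload uses the parameters documented on this page with their listed defaults.

```python
import json
import urllib.request

# Placeholder token; replace with your real VLM Run auth token.
API_KEY = "sk-..."

payload = {
    "model": "vlm-agent-1",
    "messages": [{"role": "user", "content": "Describe this image."}],
    "temperature": 0,
    "max_tokens": 32768,
}

req = urllib.request.Request(
    "https://agent.vlm.run/v1/openai/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment with a valid token to actually send
print(req.full_url)
```

The same payload works unchanged with the official OpenAI SDKs by pointing `base_url` at `https://agent.vlm.run/v1/openai`.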

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Headers

user-agent
string | null

Body

application/json

Request payload for the OpenAI chat completions API for vlm-agent-1

messages
Message · object[]
required

Messages to complete

id
string

ID of the completion

model
enum<string>
default:vlm-agent-1

VLM Run Agent model to use for completion

Available options:
vlm-agent-1,
vlm-agent-1:pro,
vlm-agent-1:auto
max_tokens
integer
default:32768

Maximum number of tokens to generate

n
integer | null
default:1

Number of completions to generate

temperature
number
default:0

Temperature of the sampling distribution

top_p
number
default:1

Cumulative probability of the highest-probability vocabulary tokens to keep for nucleus sampling

top_k
integer | null

Number of highest probability vocabulary tokens to keep for top-k-filtering

logprobs
integer | null

Include the log probabilities of the logprobs most likely tokens, as well as the chosen tokens

stream
boolean
default:false

Whether to stream the response or not

preview
boolean | null

Whether to generate previews for the response or not

response_format
object | null

Format of the response. Response format for JSON-schema mode, per the Fireworks AI specification.

  • JSONSchemaResponseFormat
  • JSONModeResponseFormat
  • JSONSchemaResponseFormatStrict
session_id
string | null

Session ID for persisting the chat history

Response

Successful Response

The response is of type any.
