POST /v1/chat/completions
!pip install openai

import openai

# Initialize the OpenAI client against the VLM Run agent endpoint
client = openai.OpenAI(
  base_url="https://agent.vlm.run/v1/openai",
  api_key="<VLMRUN_API_KEY>"
)

# Create a chat completion
response = client.chat.completions.create(
  model="vlm-agent-1",
  messages=[
    {
      "role": "user",
      "content": "Who are you and what can you do?"
    }
  ],
  temperature=0.7,
)

# Print the assistant's reply
print(response.choices[0].message.content)
"<any>"
!pip install vlmrun

import openai

# Initialize the OpenAI client
client = openai.OpenAI(
  base_url="https://agent.vlm.run/v1/openai", 
  api_key="<VLMRUN_API_KEY>"
)

# Create a chat completion
response = client.chat.completions.create(
  model="vlm-agent-1",
  messages=[
    {
      "role": "user", 
      "content": "Who are you and what can you do?"
    }
  ],
  temperature=0.7,
)

Authorizations

Authorization
string · header · required
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
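
For illustration, the same header can be sent with a raw HTTP request. This is a minimal sketch, not the documented interface: the full URL is assumed to be the base URL above plus the standard /chat/completions route, and <VLMRUN_API_KEY> is a placeholder for your token.

import requests

# Minimal raw-HTTP sketch; the full URL is assumed from the base_url
# above plus the standard chat completions route.
resp = requests.post(
  "https://agent.vlm.run/v1/openai/chat/completions",
  headers={"Authorization": "Bearer <VLMRUN_API_KEY>"},
  json={
    "model": "vlm-agent-1",
    "messages": [{"role": "user", "content": "Hello"}]
  },
)
print(resp.json())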

Headers

user-agent
string | null

Body

application/json

Request payload for the OpenAI-compatible chat completions API.

messages
Message · object[] · required

id
string

model
enum<string> · default: vlm-agent-1
Available options: vlm-agent-1, vlm-agent, vlm-agent:document, vlm-agent:image, vlm-agent:video, vlm-agent:multimodal

max_tokens
integer · default: 32768

n
integer | null · default: 1

temperature
number · default: 0

top_p
number · default: 1

top_k
integer | null

logprobs
integer | null

stream
boolean · default: false

preview
boolean | null

response_format
Format of the response

session_id
string | null
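
To illustrate how these fields fit together, the sketch below overrides several defaults in one request. Passing session_id through extra_body is an assumption about how this OpenAI-compatible endpoint accepts fields the stock SDK does not model, and "demo-session-123" is a hypothetical identifier.

# Sketch exercising several body fields; extra_body is the OpenAI SDK's
# escape hatch for provider-specific fields such as session_id (assumed
# to be accepted here), and "demo-session-123" is a hypothetical value.
response = client.chat.completions.create(
  model="vlm-agent-1",
  messages=[{"role": "user", "content": "Summarize our conversation so far."}],
  max_tokens=1024,
  n=1,
  temperature=0,
  top_p=1,
  extra_body={"session_id": "demo-session-123"},
)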

Response

Successful Response

The response is of type any.
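
When stream is left at its default of false, the completion arrives as a single object, as in the example at the top of this page. With stream set to true, the standard OpenAI SDK iteration pattern should apply; the following is a sketch under that assumption.

# Streaming sketch, assuming the endpoint emits standard OpenAI-style
# chunks when stream=True.
stream = client.chat.completions.create(
  model="vlm-agent-1",
  messages=[{"role": "user", "content": "Who are you and what can you do?"}],
  stream=True,
)
for chunk in stream:
  delta = chunk.choices[0].delta.content
  if delta:
    print(delta, end="", flush=True)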