The VLM Run Agent API lets you run complex, multi-step, multi-modal workflows through a unified, chat-completions-like interface.
  • Base URL: https://agent.vlm.run/v1
  • Authentication: Authorization: Bearer <VLMRUN_API_KEY>
  • Models Supported: vlmrun-orion-1:auto, vlmrun-orion-1:fast, vlmrun-orion-1:pro
Access your API keys in our dashboard.
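Because the interface is chat-completions-like, you can also call the Base URL directly over HTTP with the Bearer header above. The sketch below uses only the Python standard library; note that the `/chat/completions` path is an assumption based on the chat-completions convention and is not confirmed by this page.

```python
import json
import os
import urllib.request

BASE_URL = "https://agent.vlm.run/v1"

# Chat-completions-style request body using a documented model name.
payload = {
    "model": "vlmrun-orion-1:auto",
    "messages": [{"role": "user", "content": "Describe this workflow."}],
}

# Bearer authentication, as documented above. Export VLMRUN_API_KEY first.
headers = {
    "Authorization": f"Bearer {os.environ.get('VLMRUN_API_KEY', '')}",
    "Content-Type": "application/json",
}

# "/chat/completions" is an assumed path, mirroring the usual
# chat-completions convention; verify against the official API reference.
request = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers=headers,
)

# urllib.request.urlopen(request) would send the call; it is omitted here
# so the sketch runs without a network connection or a live API key.
```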

Example Request

from vlmrun.client import VLMRun

# Initialize the VLM Run client
client = VLMRun(
    base_url="https://agent.vlm.run/v1", api_key="<VLMRUN_API_KEY>"
)

# Create a chat completion
response = client.agent.completions.create(
    model="vlmrun-orion-1:auto",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What do you see in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}}
            ]
        }
    ],
    max_tokens=1000
)
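To read the reply, the response can be treated as a chat-completions-style payload. The helper below assumes the assistant's text lives at `choices[0].message.content`; that shape is an assumption drawn from the chat-completions convention, so check the SDK's actual response object before relying on it.

```python
def extract_text(resp: dict) -> str:
    """Pull the assistant's reply out of a chat-completions-style payload.

    The choices[0].message.content path is an assumed shape, not confirmed
    by this page; adjust if the SDK returns a different structure.
    """
    return resp["choices"][0]["message"]["content"]


# Hypothetical response payload, for illustration only:
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "A cat on a sofa."}}
    ]
}
print(extract_text(sample))  # → A cat on a sofa.
```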