The VLM Run Agent API lets you run complex, multi-step, multi-modal workflows through a unified chat-completions-style interface.
  • Base URL: https://agent.vlm.run/v1
  • Authentication: Authorization: Bearer <VLMRUN_API_KEY>
  • Models Supported: vlm-agent-1:auto, vlm-agent-1:fast, vlm-agent-1:pro
Access your API keys in our dashboard.
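Because the API is chat-completions compatible, you can also call it with a plain HTTP client instead of the OpenAI SDK. The sketch below builds such a request against the base URL above; the `/chat/completions` path is an assumption based on the standard OpenAI-compatible convention, and `VLMRUN_API_KEY` is assumed to be set in your environment.

```python
import json
import os
import urllib.request

BASE_URL = "https://agent.vlm.run/v1"

def build_chat_request(api_key: str, model: str, messages: list) -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completions request for the agent API.

    The /chat/completions path is assumed from the OpenAI convention.
    """
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps({"model": model, "messages": messages}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(
    os.environ.get("VLMRUN_API_KEY", "your-vlmrun-api-key"),
    "vlm-agent-1:auto",
    [{"role": "user", "content": "Hello"}],
)
# urllib.request.urlopen(req) would send the request; it is omitted here
# to avoid making a live API call in a documentation sketch.
```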

Example Request

from openai import OpenAI

# Point the OpenAI client at the VLM Run agent endpoint
client = OpenAI(
    api_key="your-vlmrun-api-key",
    base_url="https://agent.vlm.run/v1/openai",
)

# Create a chat completion with mixed text and image content
response = client.chat.completions.create(
    model="vlm-agent-1:auto",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What do you see in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
            ],
        }
    ],
    max_tokens=1000,
)

# The response follows the OpenAI chat-completions schema
print(response.choices[0].message.content)