The VLM Run CLI (vlmrun) lets you interact with the VLM Run platform directly from your terminal — chat with Orion, generate structured predictions, manage files and skills, and more.

Installation

The CLI is included with the VLM Run Python SDK. For the full experience with rich terminal output, install with the cli extra:
# Basic installation
pip install vlmrun

# Full CLI with rich terminal output (recommended)
pip install "vlmrun[cli]"
Verify the installation:
vlmrun --version

Configuration

Quick Setup

Initialize a config file and set your API key:
vlmrun config init
vlmrun config set --api-key "your-api-key"
Configuration is stored at ~/.vlmrun/config.toml.
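For reference, the config file is plain TOML. A minimal sketch of what it might contain after the commands above (the exact key names here are assumptions, not guaranteed by the CLI; run vlmrun config show to see the real values):

```toml
# ~/.vlmrun/config.toml (illustrative; key names are assumptions)
api_key = "your-api-key"
base_url = "https://api.vlm.run/v1"
```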

Environment Variables

Alternatively, set your API key via environment variables:
export VLMRUN_API_KEY="your-api-key"
export VLMRUN_BASE_URL="https://api.vlm.run/v1"  # Optional

Managing Config

# Show current configuration
vlmrun config show

# Set values
vlmrun config set --api-key "your-api-key"
vlmrun config set --base-url "https://api.vlm.run/v1"

# Unset values
vlmrun config unset --api-key
vlmrun config unset --base-url

# Re-initialize (overwrite existing)
vlmrun config init --force

Global Options

All commands accept these global options:
Option             Description
--api-key TEXT     API key (overrides config/env)
--base-url TEXT    API base URL (overrides config/env)
--debug            Enable debug mode
-v, --version      Show version and exit
--help             Show help and exit
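Because these options take precedence over both the config file and environment variables, they are useful for one-off runs. A sketch using only the commands documented on this page (placing the global option before the subcommand is an assumption about the CLI's argument parsing):

```shell
# Override the configured API key for a single invocation
vlmrun --api-key "ci-api-key" models

# Enable debug output while inspecting the active configuration
vlmrun --debug config show
```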

Commands

Command        Description
chat           Chat with Orion to process images, videos, and documents
generate       Generate structured predictions from files
predictions    List and retrieve prediction results
files          Upload, list, retrieve, and delete files
skills         Create, list, lookup, update, and download skills
hub            Browse available domains and JSON schemas
models         List supported models
config         Manage CLI configuration
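Each command has its own subcommands and flags beyond what is listed here. Since --help is accepted by all commands, you can explore them directly from the terminal:

```shell
# Show the subcommands and options for a specific command
vlmrun files --help
vlmrun config --help
```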

Shell Completion

Install tab completion for your shell:
# Install completion
vlmrun --install-completion

# Show completion script (to customize)
vlmrun --show-completion