## Installation
The CLI is included when you install the VLM Run Python SDK. For the full CLI experience with rich terminal output, install with the `cli` extra:
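A sketch of the install commands, assuming the SDK is published on PyPI as `vlmrun`:

```bash
# Core SDK, which bundles the CLI
pip install vlmrun

# With the cli extra for rich terminal output
pip install "vlmrun[cli]"
```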
## Configuration

### Global Options

### Using Environment Variables
You can set your API key and other settings using environment variables:
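For example, assuming the conventional `VLMRUN_API_KEY` variable name (not confirmed on this page):

```bash
# Hypothetical variable name; check your SDK docs for the exact setting
export VLMRUN_API_KEY="your-api-key"
```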
## Command Groups

The CLI is organized into logical command groups:

| Command | Description |
|---|---|
| `chat` | Visual AI chat with the Orion agent |
| `files` | File upload and management |
| `models` | Model listing and information |
| `generate` | Generate predictions from files |
| `hub` | Domain and schema management |
| `datasets` | Dataset creation and management |
| `fine-tuning` | Model fine-tuning operations |
| `predictions` | View and manage predictions |
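Each group exposes its own subcommands; assuming standard CLI conventions, `--help` lists them:

```bash
vlmrun --help          # top-level command groups
vlmrun files --help    # subcommands within a group
```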
## Chat Command
The `vlmrun chat` command enables visual AI chat with the Orion agent directly from your terminal. Process images, videos, and documents with natural language prompts.
### Basic Usage
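A minimal sketch, assuming the prompt and file path are passed as positional arguments (the exact invocation is an assumption, not confirmed here):

```bash
vlmrun chat "What's in this image?" photo.jpg
```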
### Prompt Sources
The chat command supports multiple ways to provide prompts, applied in order of precedence.

### Available Models
The chat command supports three Orion agent tiers:

| Model | Description |
|---|---|
| `vlmrun-orion-1:fast` | Speed-optimized for quick responses |
| `vlmrun-orion-1:auto` | Auto-selects the best model (default) |
| `vlmrun-orion-1:pro` | Most capable, highest quality |
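Selecting a tier presumably happens through a model flag; the `--model` name below is an assumption:

```bash
# --model is a hypothetical flag name
vlmrun chat --model vlmrun-orion-1:pro "Summarize this report" report.pdf
```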
### Command Options

### Supported File Types
The chat command supports a wide range of file types:

| Category | Extensions |
|---|---|
| Images | `.jpg`, `.jpeg`, `.png`, `.gif`, `.webp`, `.bmp`, `.tiff` |
| Videos | `.mp4`, `.mov`, `.avi`, `.mkv`, `.webm` |
| Documents | `.pdf`, `.doc`, `.docx` |
| Audio | `.mp3`, `.wav`, `.m4a`, `.flac`, `.ogg` |
### Output Formats
By default, the chat command displays rich formatted output with panels. Use `--json` for programmatic access:
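For example (the prompt and file are illustrative):

```bash
# Machine-readable output, suitable for piping to jq
vlmrun chat "Describe this chart" chart.png --json | jq .
```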
### Artifact Handling
When the Orion agent generates artifacts (images, videos, etc.), they are automatically downloaded to a local cache directory.

### Examples
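A couple of illustrative invocations, following the same prompt-plus-file form assumed above:

```bash
vlmrun chat "Summarize the key terms in this contract" contract.pdf
vlmrun chat "What happens in this clip?" demo.mp4
```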
## Common Operations

### File Management
Upload a file to VLM Run:
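A sketch, assuming an `upload` subcommand under `files` (the subcommand name is an assumption):

```bash
# `upload` is a hypothetical subcommand name
vlmrun files upload invoice.pdf
```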
### Model Operations
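The `models` group covers listing and information; a sketch assuming a `list` subcommand:

```bash
# `list` is a hypothetical subcommand name
vlmrun models list
```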
### Generating Predictions

Generate predictions from various file types:
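A sketch of generating predictions; the subcommand structure is an assumption:

```bash
# Subcommand names are hypothetical
vlmrun generate image photo.jpg
vlmrun generate document invoice.pdf
```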
### Working with Domains
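The `hub` group manages domains and schemas; a sketch with hypothetical subcommands (the `document.invoice` domain is illustrative):

```bash
# Hypothetical subcommands
vlmrun hub list                      # enumerate available domains
vlmrun hub schema document.invoice   # inspect a domain's schema
```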
### Managing Datasets
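A dataset-management sketch with hypothetical subcommand and flag names:

```bash
# Hypothetical subcommands and flags
vlmrun datasets create --name my-dataset
vlmrun datasets list
```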
### Fine-tuning Models
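Likewise for fine-tuning, with hypothetical arguments:

```bash
# Hypothetical subcommands and flags
vlmrun fine-tuning create --dataset my-dataset
vlmrun fine-tuning list
```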
## Advanced Usage

### JSON Output
Add the `--json` flag to get machine-readable JSON output:
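For example, assuming a `list` subcommand under `predictions` (`--json` is the documented flag; the subcommand is an assumption):

```bash
vlmrun predictions list --json
```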