The VLM Run CLI (`vlmrun`) lets you interact with the VLM Run platform directly from your terminal — chat with Orion, generate structured predictions, manage files and skills, and more.
## Installation

The CLI is included with the VLM Run Python SDK. For the full experience with rich terminal output, install with the `cli` extra:
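A typical install might look like the following; the PyPI package name and extra are inferred from the text above, so verify them against the official docs:

```shell
# Install the SDK together with the `cli` extra for rich terminal output.
# Package name `vlmrun` is an assumption based on the SDK name.
pip install "vlmrun[cli]"
```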
## Configuration

### Quick Setup

Initialize a config file and set your API key. The configuration is stored in `~/.vlmrun/config.toml`.
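The config file is a small TOML document; the field names below are assumptions, shown only to illustrate its likely shape:

```toml
# ~/.vlmrun/config.toml — illustrative sketch; key names are assumptions
api_key = "<your-api-key>"
# Optional: override the default API endpoint
base_url = "https://..."
```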
### Environment Variables
Alternatively, set your API key via environment variables. To view or change saved settings later, use the `vlmrun config` command.
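A typical shell setup is sketched below; the variable names are assumptions, so check `vlmrun --help` or the SDK docs for the exact names:

```shell
# Assumed variable names, shown for illustration only.
export VLMRUN_API_KEY="your-api-key"
# Optional: point the CLI at a different API endpoint.
export VLMRUN_BASE_URL="https://..."
```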
## Global Options
All commands accept these global options:

| Option | Description |
|---|---|
| `--api-key TEXT` | API key (overrides config/env) |
| `--base-url TEXT` | API base URL (overrides config/env) |
| `--debug` | Enable debug mode |
| `-v, --version` | Show version and exit |
| `--help` | Show help and exit |
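Because these options are global, they can be combined with any subcommand. For example (illustrative, not from the original; `models` is taken from the command table below):

```shell
# Override the configured key and enable debug output for one invocation.
vlmrun --api-key "your-api-key" --debug models
```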
## Commands
| Command | Description |
|---|---|
| `chat` | Chat with Orion to process images, videos, and documents |
| `generate` | Generate structured predictions from files |
| `predictions` | List and retrieve prediction results |
| `files` | Upload, list, retrieve, and delete files |
| `skills` | Create, list, lookup, update, and download skills |
| `hub` | Browse available domains and JSON schemas |
| `models` | List supported models |
| `config` | Manage CLI configuration |
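A couple of invocations might look like this; the argument shapes are assumptions, so run `vlmrun <command> --help` for the real signatures:

```shell
vlmrun models                      # list supported models
vlmrun files upload invoice.pdf    # hypothetical argument shape for an upload
```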