
The vlmrun chat command enables visual AI chat with Orion directly from your terminal. Process images, videos, documents, and audio with natural language prompts.

Basic Usage

# Describe an image
vlmrun chat "Describe this image" -i photo.jpg

# Analyze a document
vlmrun chat "Extract the key information" -i document.pdf

# Process a video
vlmrun chat "Summarize this video" -i video.mp4

# Compare multiple files
vlmrun chat "Compare these images" -i image1.jpg -i image2.jpg

Prompt Sources

Prompts can be provided in three ways, listed in order of precedence:

# 1. Direct argument
vlmrun chat "Your prompt here" -i file.jpg

# 2. Using -p option (text, file path, or stdin)
vlmrun chat -p "Your prompt" -i file.jpg
vlmrun chat -p prompt.txt -i file.jpg

# 3. Piped stdin
echo "Describe this" | vlmrun chat - -i file.jpg
cat prompt.txt | vlmrun chat - -i file.jpg
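
For longer prompts, building the text in a variable and piping it to stdin keeps everything in one script. A sketch using the `-` stdin form shown above; the prompt text and `report.pdf` are illustrative:

```shell
# Build a multi-line prompt, then pipe it to stdin ("-" reads the prompt from stdin)
PROMPT='Summarize this report in three bullet points.
Then list any dates mentioned.'
printf '%s\n' "$PROMPT" | vlmrun chat - -i report.pdf
```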

Using Skills

Pass a local skill directory with -k to apply skill instructions inline:

# Use a local skill
vlmrun chat "Extract data from this invoice" -i invoice.pdf -k ./my-skill

# Multiple skills
vlmrun chat "Analyze this" -i photo.jpg -k ./skill-a -k ./skill-b

The -k flag sends the skill inline with the request (no server-side upload). To create a persistent server-side skill, use vlmrun skills upload.

Stateful Sessions

Use --session-id to persist chat history across multiple calls:

# Start a session
vlmrun chat "What's in this image?" -i photo.jpg --session-id my-session

# Continue the conversation
vlmrun chat "Now describe the colors" --session-id my-session
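
In scripts, generating the session ID once and reusing it keeps a multi-turn conversation together. A sketch: the `review-` naming scheme is illustrative, and any unique string works, as in the examples above.

```shell
# One session ID shared across a scripted, multi-turn conversation
SESSION="review-$(date +%s)"
vlmrun chat "What's in this image?" -i photo.jpg --session-id "$SESSION"
vlmrun chat "Now describe the colors" --session-id "$SESSION"
```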

Models

Model                 Description
vlmrun-orion-1:fast   Speed-optimized
vlmrun-orion-1:auto   Balanced (default)
vlmrun-orion-1:pro    Most capable

# Select a model explicitly
vlmrun chat "Describe this" -i photo.jpg -m vlmrun-orion-1:pro
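
If you switch models often, an environment-variable override is a convenient pattern. A sketch: `VLM_MODEL` is a hypothetical variable name chosen here, not something the CLI reads itself.

```shell
# Pick the model from the environment, falling back to the default variant
# (VLM_MODEL is an illustrative variable name, not a built-in setting)
MODEL="${VLM_MODEL:-vlmrun-orion-1:auto}"
vlmrun chat "Describe this" -i photo.jpg -m "$MODEL"
```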

Output Formats

# Rich formatted output (default)
vlmrun chat "Describe this" -i photo.jpg

# JSON output for scripting
vlmrun chat "Describe this" -i photo.jpg --json

# Pipe JSON to jq
vlmrun chat "Describe this" -i photo.jpg --json | jq '.content'

# Disable streaming (wait for complete response)
vlmrun chat "Describe this" -i photo.jpg --no-stream
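
JSON mode makes batch processing straightforward. A sketch that saves one response per image; the `photos/` and `out/` directory names are illustrative:

```shell
# Describe every .jpg in photos/, saving one JSON response per image
mkdir -p out
for img in photos/*.jpg; do
  [ -e "$img" ] || continue   # skip if the glob matched nothing
  name=$(basename "$img" .jpg)
  vlmrun chat "Describe this image" -i "$img" --json --no-stream > "out/$name.json"
done
```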

Artifact Handling

When Orion generates artifacts (images, videos, etc.), they are automatically downloaded:

# Default: saved to ~/.vlm/cache/artifacts/<session_id>/
vlmrun chat "Generate a variation of this image" -i photo.jpg

# Custom output directory
vlmrun chat "Generate a variation" -i photo.jpg -o ./output/

# Skip artifact download
vlmrun chat "Generate a variation" -i photo.jpg --no-download
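
For reproducible runs, a per-run output directory keeps artifacts separate from the default cache. A sketch; the dated directory naming is illustrative:

```shell
# Collect generated artifacts in a dated directory instead of the default cache
OUTDIR="artifacts/run-$(date +%Y%m%d)"
mkdir -p "$OUTDIR"
vlmrun chat "Generate a variation of this image" -i photo.jpg -o "$OUTDIR"
```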

Supported File Types

Category    Extensions
Images      .jpg, .jpeg, .png, .gif, .webp, .bmp, .tiff
Videos      .mp4, .mov, .avi, .mkv, .webm
Documents   .pdf, .doc, .docx
Audio       .mp3, .wav, .m4a, .flac, .ogg
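
Since -i is repeatable, a shell loop can attach every supported file in a directory at once. A sketch; `./media` and the chosen extensions are illustrative:

```shell
# Gather supported images from a directory and attach each with its own -i flag
set --
for f in ./media/*.jpg ./media/*.png ./media/*.webp; do
  [ -e "$f" ] && set -- "$@" -i "$f"
done
vlmrun chat "Compare these images" "$@"
```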

Options Reference

Option          Short   Description
--prompt        -p      Prompt: text string, file path, or stdin
--input         -i      Input file (repeatable)
--skill         -k      Path to a skill directory (repeatable)
--output        -o      Artifact output directory
--model         -m      Model variant (default: vlmrun-orion-1:auto)
--json          -j      Output JSON instead of formatted text
--no-stream     -ns     Disable streaming
--no-download   -nd     Skip artifact download
--session-id    -s      Session UUID for stateful conversations
--base-url              API base URL override