POST /v1/video/generate
curl --request POST \
  --url https://api.vlm.run/v1/video/generate \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "metadata": {
    "environment": "dev",
    "session_id": "<string>",
    "allow_training": true
  },
  "config": {
    "prompt": "<string>",
    "detail": "auto",
    "response_model": "<any>",
    "json_schema": {},
    "gql_stmt": "<string>",
    "max_retries": 3,
    "max_tokens": 4096,
    "temperature": 0,
    "confidence": false,
    "grounding": false
  },
  "url": "<string>",
  "file_id": "<string>",
  "id": "<string>",
  "created_at": "2023-11-07T05:31:56Z",
  "callback_url": "<string>",
  "model": "vlm-1",
  "domain": "video.transcription",
  "batch": true
}'
"<any>"

Authorizations

Authorization
string, header, required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

application/json

Request to the Video API (i.e. structured prediction).

domain
enum<string>, required

The domain identifier for the model (e.g. video.transcription).

Available options: video.transcription, video.transcription-summary, video.tv-news-summary, video.dashcam
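
For example, a minimal request needs only the required domain and one video source (url or file_id); every other field falls back to its default. A sketch using a placeholder video URL:

curl --request POST \
  --url https://api.vlm.run/v1/video/generate \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "domain": "video.transcription",
  "url": "https://example.com/sample.mp4"
}'
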
metadata
object

Optional metadata to pass to the model.

config
object

The VLM generation config to be used for /<dtype>/generate.
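
All config fields are optional; the values below simply override the documented defaults. A sketch of a config fragment (the prompt text is illustrative, not a required value):

"config": {
  "prompt": "Transcribe the dialogue and note any on-screen text.",
  "max_tokens": 8192,
  "temperature": 0,
  "confidence": true,
  "grounding": true
}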

url
string | null

The URL of the file (provide either file_id or url).

file_id
string | null

The ID of the uploaded file (provide either file_id or url).
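
Alternatively, reference a previously uploaded file by its ID instead of passing a URL; set exactly one of url or file_id. A sketch, assuming <file_id> was returned by an earlier file upload:

curl --request POST \
  --url https://api.vlm.run/v1/video/generate \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "domain": "video.transcription",
  "file_id": "<file_id>"
}'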

id
string

Unique identifier of the request.

created_at
string

Date and time when the request was created (in UTC).

callback_url
string | null

The URL to call when the request is completed.

Minimum length: 1

model
string, default: vlm-1

The model to use for generating the response.

Allowed value: "vlm-1"

batch
boolean, default: true

Whether to process the video in batch mode (asynchronously).
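
Because batch defaults to true, requests are processed asynchronously; supplying a callback_url lets the API notify your service once the request completes (the callback payload is not described in this section). A sketch with a placeholder callback endpoint:

curl --request POST \
  --url https://api.vlm.run/v1/video/generate \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "domain": "video.transcription",
  "url": "https://example.com/sample.mp4",
  "batch": true,
  "callback_url": "https://example.com/webhooks/vlm"
}'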

Response

200 application/json
Successful Response

The response is of type any.