POST /v1/video/generate
!pip install vlmrun

from pathlib import Path
from vlmrun.client import VLMRun

# Initialize the client with your API key.
client = VLMRun(api_key="<VLMRUN_API_KEY>")

# Enqueue a video for transcription; batch=True processes it asynchronously.
response = client.video.generate(
    file=Path("<path>.mp4"),
    domain="video.transcription",
    batch=True
)
{
  "usage": {
    "elements_processed": 123,
    "element_type": "image",
    "credits_used": 123
  },
  "id": "<string>",
  "created_at": "2023-11-07T05:31:56Z",
  "completed_at": "2023-11-07T05:31:56Z",
  "response": "<see JSON response example>",
  "status": "enqueued"
}
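
With batch=True the request is only enqueued, so the initial response carries status "enqueued" and no transcription yet. A minimal polling sketch, assuming the SDK exposes a client.predictions.get(id=...) helper that returns the same prediction object (verify the exact method against the SDK reference):

import time

# Re-fetch the prediction until the batch job finishes.
# Assumes `client` and `response` from the snippet above.
prediction = client.predictions.get(id=response.id)
while prediction.status not in ("completed", "failed"):
    time.sleep(5)  # back off between polls
    prediction = client.predictions.get(id=prediction.id)

if prediction.status == "completed":
    print(prediction.response)  # domain-specific structured transcription

If a callback_url is supplied in the request (see Body below), polling is unnecessary: the completed prediction is delivered to that URL instead.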

Try our Colab Cookbook example for long-form video transcription.

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
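
For calls made outside the Python SDK, the same token goes into this header directly. A minimal sketch using requests; the base URL https://api.vlm.run is an assumption here, so verify it against your dashboard:

import requests

# Hypothetical raw HTTP call; the base URL and file_id are placeholders.
resp = requests.post(
    "https://api.vlm.run/v1/video/generate",
    headers={"Authorization": "Bearer <VLMRUN_API_KEY>"},
    json={"domain": "video.transcription", "file_id": "<file_id>", "batch": True},
)
print(resp.json()["status"])  # "enqueued" for batch requests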

Body

application/json

Request to the Video API (i.e. structured prediction).

domain
enum<string>
required

The domain identifier for the model (e.g. video.transcription).

Available options:
video.transcription,
video.transcription-summary,
video.tv-news-summary,
video.dashcam
metadata
object

Optional metadata to pass to the model.

config
object

The VLM generation config to be used for /<dtype>/generate.

url
string | null

The URL of the file (provide either file_id or url).

file_id
string | null

The ID of the uploaded file (provide either file_id or url).
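
Because exactly one of url or file_id identifies the input, a publicly reachable video can be submitted without uploading it first. A sketch assuming the SDK forwards a url keyword to this body field (the URL below is a placeholder):

# Submit a hosted video by URL instead of a local file.
response = client.video.generate(
    url="https://example.com/video.mp4",
    domain="video.transcription",
    batch=True
)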

id
string

Unique identifier of the request.

created_at
string

Date and time when the request was created (in UTC timezone).

callback_url
string | null

The URL to call when the request is completed.

Minimum length: 1
model
string
default:vlm-1

The model to use for generating the response.

Allowed value: "vlm-1"
batch
boolean
default:true

Whether to process the video in batch mode (async).
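
Taken together, a complete request body might look like the following sketch (all values are placeholders; provide either url or file_id, not both):

{
  "domain": "video.transcription",
  "url": "https://example.com/video.mp4",
  "batch": true,
  "callback_url": "https://example.com/webhooks/vlmrun",
  "model": "vlm-1"
}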

Response

200
application/json
Successful Response

The response is of type any; in practice it is the prediction object shown in the example above (id, created_at, completed_at, response, status, usage), with the response field holding the domain-specific structured output.