GET /v1/predictions
!pip install vlmrun

from vlmrun.client import VLMRun
from vlmrun.client.types import PredictionResponse

client = VLMRun(api_key="<VLMRUN_API_KEY>")
response: list[PredictionResponse] = client.predictions.list()
Example response:

[
  {
    "usage": {
      "elements_processed": 123,
      "element_type": "image",
      "credits_used": 123,
      "steps": 123,
      "message": "<string>",
      "duration_seconds": 0
    },
    "id": "<string>",
    "created_at": "2023-11-07T05:31:56Z",
    "completed_at": "2023-11-07T05:31:56Z",
    "response": "<unknown>",
    "status": "pending",
    "domain": "<string>"
  }
]


Get the list of predictions for the current user.

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
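For illustration, a minimal sketch of how the Bearer header is attached to a request using only the standard library. The base URL `https://api.vlm.run` is an assumption for demonstration; substitute your actual API host.

```python
import urllib.request

API_KEY = "<VLMRUN_API_KEY>"
BASE_URL = "https://api.vlm.run"  # assumed host, for illustration only

# Build (but do not send) the request, to show the required header format.
req = urllib.request.Request(
    f"{BASE_URL}/v1/predictions",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
print(req.get_header("Authorization"))  # Bearer <VLMRUN_API_KEY>
```

The VLMRun Python client shown above sets this header for you from the `api_key` argument; constructing it manually is only needed when calling the endpoint directly.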

Query Parameters

skip
integer
default:0

Number of items to skip.

Required range: x >= 0
limit
integer | null
default:10

Maximum number of items to return.

Required range: 1 <= x <= 1000
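The `skip`/`limit` pair supports offset pagination. A minimal sketch of building the query string with the standard library, enforcing the documented ranges; the base URL is an assumed placeholder:

```python
from urllib.parse import urlencode

def predictions_url(base: str, skip: int = 0, limit: int = 10) -> str:
    """Build the list-predictions URL, enforcing the documented ranges."""
    if skip < 0:
        raise ValueError("skip must be >= 0")
    if not 1 <= limit <= 1000:
        raise ValueError("limit must be between 1 and 1000")
    return f"{base}/v1/predictions?" + urlencode({"skip": skip, "limit": limit})

# Fetch the second page of 100 results (base URL is a placeholder).
print(predictions_url("https://api.vlm.run", skip=100, limit=100))
# https://api.vlm.run/v1/predictions?skip=100&limit=100
```

To page through all predictions, advance `skip` by `limit` until a request returns fewer items than `limit`.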

Response

Successful Response

usage
CreditUsageResponse · object

The usage metrics for the request.

id
string

Unique identifier of the response.

created_at
string<date-time>

Date and time when the request was created (in UTC timezone).

completed_at
string<date-time> | null

Date and time when the response was completed (in UTC timezone).

response
any | null

The response from the model.

status
enum<string>
default:pending

The status of the job.

Available options:
pending,
enqueued,
running,
completed,
failed,
paused
domain
string | null

The domain of the prediction (e.g. document.invoice, image.caption).
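Of the statuses listed above, `completed` and `failed` are the terminal states, so a client-side poll loop only needs to stop on those. A hedged sketch of such a loop; the `fetch_status` callable stands in for whatever retrieval call you use and is not part of the documented API:

```python
import time

# All documented statuses; "completed" and "failed" end the lifecycle.
STATUSES = {"pending", "enqueued", "running", "completed", "failed", "paused"}
TERMINAL = {"completed", "failed"}

def wait_until_terminal(fetch_status, interval: float = 1.0, max_polls: int = 10) -> str:
    """Poll fetch_status() until a terminal status or the poll budget runs out."""
    status = fetch_status()
    for _ in range(max_polls):
        if status in TERMINAL:
            return status
        time.sleep(interval)
        status = fetch_status()
    return status

# Simulated lifecycle, for demonstration only.
states = iter(["pending", "enqueued", "running", "completed"])
print(wait_until_terminal(lambda: next(states), interval=0.0))  # completed
```

Note that `paused` is not terminal: a paused prediction may resume, so the loop keeps polling until the budget runs out.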