GET /v1/predictions/{id}
!pip install vlmrun

from vlmrun.client import VLMRun
from vlmrun.client.types import PredictionResponse

client = VLMRun(api_key="<VLMRUN_API_KEY>")
response: PredictionResponse = client.predictions.get("<prediction_id>")
{
  "usage": {
    "elements_processed": 123,
    "element_type": "image",
    "credits_used": 123,
    "steps": 123,
    "message": "<string>",
    "duration_seconds": 0
  },
  "id": "<string>",
  "created_at": "2023-11-07T05:31:56Z",
  "completed_at": "2023-11-07T05:31:56Z",
  "response": "<any>",
  "status": "pending",
  "domain": "<string>"
}
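The created_at and completed_at fields above are ISO 8601 timestamps in UTC. As a quick sketch, here is how elapsed time could be computed from a payload shaped like the JSON above (the field values are made up for the example):

```python
from datetime import datetime

# Illustrative payload with the same shape as the response JSON above;
# the values are invented for this example.
payload = {
    "id": "pred_123",
    "status": "completed",
    "created_at": "2023-11-07T05:31:56Z",
    "completed_at": "2023-11-07T05:32:10Z",
}

def parse_ts(ts: str) -> datetime:
    # datetime.fromisoformat on Python < 3.11 rejects a trailing "Z",
    # so normalize it to an explicit UTC offset first.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

# Wall-clock seconds between creation and completion.
elapsed = (
    parse_ts(payload["completed_at"]) - parse_ts(payload["created_at"])
).total_seconds()
```

Note that completed_at is null while the prediction is still in flight, so check the status field before computing this.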
Get the prediction for a given prediction ID.

Authorizations

Authorization (string, header, required)

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
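For callers hitting the REST endpoint directly rather than through the Python SDK, the same Bearer header applies. A minimal stdlib-only sketch, assuming the API is served at https://api.vlm.run/v1 (an assumption; verify the actual base URL in your dashboard):

```python
import json
import urllib.request

BASE_URL = "https://api.vlm.run/v1"  # assumed base URL; verify before use

def auth_headers(api_key: str) -> dict:
    # Bearer authentication header of the form "Bearer <token>".
    return {"Authorization": f"Bearer {api_key}"}

def get_prediction(prediction_id: str, api_key: str) -> dict:
    # GET /v1/predictions/{id} with the Bearer token attached.
    req = urllib.request.Request(
        f"{BASE_URL}/predictions/{prediction_id}",
        headers=auth_headers(api_key),
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

A missing or malformed token will surface as an HTTP 401 error from urlopen.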

Path Parameters

id (string, required)

The ID of the prediction to retrieve.

Response

Successful Response

Base prediction response for all API responses.

usage (object)

The usage metrics for the request.

id (string)

Unique identifier of the response.

created_at (string<date-time>)

Date and time when the request was created (in UTC timezone).

completed_at (string<date-time> | null)

Date and time when the response was completed (in UTC timezone).

response (any)

The response from the model.

status (enum<string>, default: pending)

The status of the job. Available options: pending, enqueued, running, completed, failed, paused.

domain (string | null)

The domain of the prediction (e.g. document.invoice, image.caption).
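Since pending, enqueued, running, and paused are non-terminal statuses, clients typically poll this endpoint until the prediction reaches completed or failed. A minimal polling sketch; wait_for_prediction is an illustrative helper, not part of the SDK, and fetch would be something like lambda: client.predictions.get(prediction_id):

```python
import time

# Status values from the enum documented above.
TERMINAL = {"completed", "failed"}

def wait_for_prediction(fetch, interval_s=2.0, timeout_s=300.0):
    """Poll fetch() until the returned prediction reaches a terminal status.

    fetch: zero-argument callable returning an object with a .status
    attribute, e.g. lambda: client.predictions.get(prediction_id).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        pred = fetch()
        if pred.status in TERMINAL:
            return pred
        time.sleep(interval_s)
    raise TimeoutError("prediction did not reach a terminal status in time")
```

A fixed interval is used here for simplicity; exponential backoff is a common refinement for long-running jobs.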
