GET /v1/predictions/{id}
!pip install vlmrun

from vlmrun.client import VLMRun
from vlmrun.client.types import PredictionResponse

# Retrieve an existing prediction by its ID.
client = VLMRun(api_key="<VLMRUN_API_KEY>")
response: PredictionResponse = client.predictions.get("<prediction_id>")
{
  "usage": {
    "elements_processed": 123,
    "element_type": "image",
    "credits_used": 123
  },
  "id": "<string>",
  "created_at": "2023-11-07T05:31:56Z",
  "completed_at": "2023-11-07T05:31:56Z",
  "response": "<any>",
  "status": "pending"
}

Get the prediction for a given prediction ID.

Authorizations

Authorization (string, header, required)

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
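
If you prefer to call the endpoint directly over HTTP rather than through the SDK, the request is a plain GET with the Bearer header described above. Below is a minimal sketch using the requests library; the base URL https://api.vlm.run is an assumption for illustration and may differ for your account, so confirm it before use.

import requests

# NOTE: the base URL below is an assumption for illustration; confirm your
# actual API host before use.
BASE_URL = "https://api.vlm.run/v1"

headers = {"Authorization": "Bearer <VLMRUN_API_KEY>"}
resp = requests.get(f"{BASE_URL}/predictions/<prediction_id>", headers=headers)
resp.raise_for_status()
print(resp.json())  # same shape as the JSON example above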

Path Parameters

id (string, required)

The unique identifier of the prediction to retrieve.

Response

200 (application/json): Successful Response

Base prediction response for all API responses.

usage (object)

The usage metrics for the request.

id (string)

Unique identifier of the response.

created_at (string)

Date and time when the request was created (in UTC timezone).

completed_at (string | null)

Date and time when the response was completed (in UTC timezone).

response (any | null)

The response from the model.

status (enum<string>, default: pending)

The status of the job.

Available options: enqueued, pending, running, completed, failed, paused
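
Since a prediction may still be enqueued, pending, or running when you first fetch it, a common pattern is to poll this endpoint until the job reaches a settled status. Below is a minimal sketch built on the SDK call shown above, assuming PredictionResponse exposes the schema fields (status, response) as attributes; wait_for_prediction is a hypothetical helper, not part of the SDK, and the 2-second interval and 300-second timeout are arbitrary illustration values.

import time

from vlmrun.client import VLMRun
from vlmrun.client.types import PredictionResponse

client = VLMRun(api_key="<VLMRUN_API_KEY>")

def wait_for_prediction(prediction_id: str,
                        interval_s: float = 2.0,
                        timeout_s: float = 300.0) -> PredictionResponse:
    # Poll GET /v1/predictions/{id} until the job leaves the queue/run states.
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        pred: PredictionResponse = client.predictions.get(prediction_id)
        # Statuses per the enum above: "completed" and "failed" are final,
        # and we also stop on "paused" rather than spin forever.
        if pred.status in ("completed", "failed", "paused"):
            return pred
        time.sleep(interval_s)
    raise TimeoutError(f"Prediction {prediction_id} did not finish within {timeout_s}s")

pred = wait_for_prediction("<prediction_id>")
if pred.status == "completed":
    print(pred.response)  # the model output, per the response field above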