GET /v1/predictions/{id}
curl --request GET \
  --url https://api.vlm.run/v1/predictions/{id} \
  --header 'Authorization: Bearer <token>'
{
  "usage": {
    "elements_processed": 123,
    "element_type": "image",
    "credits_used": 123
  },
  "id": "<string>",
  "created_at": "2023-11-07T05:31:56Z",
  "completed_at": "2023-11-07T05:31:56Z",
  "response": "<any>",
  "status": "pending"
}

Get the prediction for a given prediction ID.
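A minimal Python sketch of the same request, assuming the requests library, a VLM_API_KEY environment variable holding your auth token, and a placeholder prediction ID:

import os
import requests

# Placeholder ID; substitute the ID returned when the prediction was created.
prediction_id = "pred_123"

resp = requests.get(
    f"https://api.vlm.run/v1/predictions/{prediction_id}",
    headers={"Authorization": f"Bearer {os.environ['VLM_API_KEY']}"},
)
resp.raise_for_status()
prediction = resp.json()
print(prediction["status"], prediction.get("completed_at"))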

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Path Parameters

id
string
required

Response

200
application/json
Successful Response

Base prediction response for all API responses.

usage
object

The usage metrics for the request.

id
string

Unique identifier of the response.

created_at
string

Date and time when the request was created (in UTC timezone).

completed_at
string | null

Date and time when the response was completed (in UTC timezone).

response
any | null

The response from the model.

status
enum<string>
default: pending

The status of the job.

Available options: enqueued, pending, running, completed, failed, paused
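Because status starts at pending and advances through the states above, a common pattern is to poll this endpoint until the job reaches a final state. A minimal sketch, reusing the same request as above; note that treating paused as a stopping state is an assumption, and the caller may want to handle it separately:

import os
import time
import requests

# Assumption: completed and failed are terminal; paused is returned
# as-is so the caller can decide how to handle it.
STOP_STATES = {"completed", "failed", "paused"}

def get_prediction(prediction_id: str) -> dict:
    # Fetch the current state of a prediction by ID.
    resp = requests.get(
        f"https://api.vlm.run/v1/predictions/{prediction_id}",
        headers={"Authorization": f"Bearer {os.environ['VLM_API_KEY']}"},
    )
    resp.raise_for_status()
    return resp.json()

def wait_for_prediction(prediction_id: str, interval: float = 2.0) -> dict:
    # Poll until the job leaves the enqueued/pending/running states.
    while True:
        prediction = get_prediction(prediction_id)
        if prediction["status"] in STOP_STATES:
            return prediction
        time.sleep(interval)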