GET /v1/predictions

Get the list of predictions for the current user.

Example request:

!pip install vlmrun

from vlmrun.client import VLMRun
from vlmrun.client.types import PredictionResponse

# Initialize the client with your API key.
client = VLMRun(api_key="<VLMRUN_API_KEY>")

# List predictions for the current user.
response: list[PredictionResponse] = client.predictions.list()

Example response:

[
  {
    "usage": {
      "elements_processed": 123,
      "element_type": "image",
      "credits_used": 123
    },
    "id": "<string>",
    "created_at": "2023-11-07T05:31:56Z",
    "completed_at": "2023-11-07T05:31:56Z",
    "response": "<any>",
    "status": "pending"
  }
]

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
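
The header can also be set by hand when calling the endpoint without the SDK. Below is a minimal sketch using Python's requests library; the base URL is an assumption for illustration, not something this page specifies:

import os

import requests

# ASSUMPTION: illustrative base URL; use the one from your VLM Run account.
BASE_URL = "https://api.vlm.run/v1"

headers = {
    # Bearer authentication header of the form: Bearer <token>
    "Authorization": f"Bearer {os.environ['VLMRUN_API_KEY']}",
}

resp = requests.get(f"{BASE_URL}/predictions", headers=headers, params={"skip": 0, "limit": 10})
resp.raise_for_status()
predictions = resp.json()  # JSON array shaped like the example response above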

Query Parameters

skip
integer
default: 0

Number of items to skip.

Required range: x >= 0

limit
integer | null
default: 10

Maximum number of items to return.

Required range: 1 <= x <= 1000
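
Together, skip and limit give offset-based pagination. Below is a minimal sketch that pages through every prediction; it assumes the Python SDK forwards skip and limit as keyword arguments to predictions.list(), which is not shown on this page:

from vlmrun.client import VLMRun
from vlmrun.client.types import PredictionResponse

client = VLMRun(api_key="<VLMRUN_API_KEY>")

def iter_predictions(page_size: int = 100):
    """Yield all predictions by paging with skip/limit (limit must be 1..1000)."""
    skip = 0
    while True:
        # ASSUMPTION: predictions.list() accepts skip/limit keyword arguments.
        page: list[PredictionResponse] = client.predictions.list(skip=skip, limit=page_size)
        if not page:
            break
        yield from page
        skip += len(page)

for prediction in iter_predictions():
    print(prediction.id, prediction.status)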

Response

200
application/json

Successful Response

usage
object

The usage metrics for the request.

id
string

Unique identifier of the response.

created_at
string

Date and time when the request was created (in UTC timezone).

completed_at
string | null

Date and time when the response was completed (in UTC timezone).

response
any | null

The response from the model.

status
enum<string>
default: pending

The status of the job.

Available options: enqueued, pending, running, completed, failed, paused
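
Because a job starts at pending and ends in completed or failed, callers typically poll until a terminal status before reading response. Below is a minimal polling sketch built only on the list call documented here; a dedicated get-by-id call, if your SDK version has one, would be cheaper:

import time

from vlmrun.client import VLMRun
from vlmrun.client.types import PredictionResponse

client = VLMRun(api_key="<VLMRUN_API_KEY>")

TERMINAL = {"completed", "failed"}

def wait_for(prediction_id: str, interval_s: float = 2.0, timeout_s: float = 300.0) -> PredictionResponse:
    """Poll the predictions list until the given job reaches a terminal status."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        # ASSUMPTION: skip/limit keyword arguments, as in the pagination sketch above.
        for p in client.predictions.list(skip=0, limit=1000):
            if p.id == prediction_id and p.status in TERMINAL:
                return p  # p.response holds the model output once completed
        time.sleep(interval_s)
    raise TimeoutError(f"Prediction {prediction_id} did not finish within {timeout_s}s")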