Request / Response Examples

Concrete examples for every implemented inference API endpoint. All paths and schemas match the API Contract.


Sync prediction

cURL

curl -X POST http://localhost:8000/predict/ \
  -H "Content-Type: application/json" \
  -d '{
    "match_id": 99,
    "features": {
      "diff_win_5_mean": 0.3,
      "diff_goals_for_3_mean": 0.6,
      "home_elo_pre": 1520.0,
      "sex": 0
    }
  }'

Response

{
  "match_id": 99,
  "prediction": {
    "predicted_class": 0,
    "probabilities": {"0": 0.58, "1": 0.27, "2": 0.15},
    "model_version": "Production",
    "model_run_id": "3f7a1c9d2e4b"
  }
}

Python

import httpx

response = httpx.post(
    "http://localhost:8000/predict/",
    json={
        "match_id": 99,
        "features": {
            "diff_win_5_mean": 0.3,
            "diff_goals_for_3_mean": 0.6,
            "home_elo_pre": 1520.0,
            "sex": 0,
        },
    },
)
response.raise_for_status()
print(response.json())

Batch lookup

Retrieve a pre-computed prediction for a match from the batch_inference output:

curl http://localhost:8000/predict/42

Response shape is identical to the sync prediction response.


List upcoming matches

curl http://localhost:8000/predict/matches/
[
  {"match_id": 42, "home_team": "Arsenal", "away_team": "Chelsea", "match_date": "2026-04-25"},
  {"match_id": 43, "home_team": "Liverpool", "away_team": "Man City", "match_date": "2026-04-26"}
]

Model info

curl http://localhost:8000/predict/model/info
{
  "model_name": "soccer_model",
  "model_version": "Production",
  "model_run_id": "3f7a1c9d2e4b",
  "loaded": true
}

Async prediction

Submit task

curl -X POST http://localhost:8000/predict/async/ \
  -H "Content-Type: application/json" \
  -d '{
    "match_id": 99,
    "features": {
      "diff_win_5_mean": 0.3,
      "diff_goals_for_3_mean": 0.6,
      "home_elo_pre": 1520.0,
      "sex": 0
    }
  }'
{
  "task_id": "abc-123-def-456",
  "status": "submitted",
  "status_url": "/monitoring/task_status/abc-123-def-456"
}

Poll for result

# While pending:
curl http://localhost:8000/monitoring/task_status/abc-123-def-456
# {"task_id": "abc-123-def-456", "status": "pending"}

# After completion:
curl http://localhost:8000/monitoring/task_status/abc-123-def-456
{
  "task_id": "abc-123-def-456",
  "status": "success",
  "result": {
    "predicted_class": 0,
    "probabilities": {"0": 0.58, "1": 0.27, "2": 0.15},
    "model_version": "Production",
    "model_run_id": "3f7a1c9d2e4b"
  }
}

Python (async submit + poll)

import time
import httpx

client = httpx.Client(base_url="http://localhost:8000")

payload = {
    "match_id": 99,
    "features": {"diff_win_5_mean": 0.3, "diff_goals_for_3_mean": 0.6, "sex": 0},
}

# Submit
submit = client.post("/predict/async/", json=payload)
submit.raise_for_status()
task_id = submit.json()["task_id"]

# Poll
while True:
    poll = client.get(f"/monitoring/task_status/{task_id}")
    poll.raise_for_status()
    data = poll.json()
    if data["status"] in ("success", "failure"):
        print(data)
        break
    time.sleep(1)

Health check

curl http://localhost:8000/healthcheck/
{
  "status": "ok",
  "worker_id": "celery@worker-ml-abc123",
  "memory_usage_mb": 210.4
}

Metrics (Prometheus)

curl http://localhost:8000/metrics

Returns plain-text Prometheus exposition format. Example:

# HELP prediction_requests_total Total prediction requests
# TYPE prediction_requests_total counter
prediction_requests_total{source="sync"} 142.0
prediction_requests_total{source="async"} 38.0
prediction_timeouts_total 0.0
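Because the payload is the standard Prometheus text exposition format, it can also be inspected ad hoc with a few lines of parsing (a sketch; it handles plain `name{labels} value` sample lines like the ones above and skips HELP/TYPE comments):

```python
def parse_counters(exposition: str) -> dict:
    """Parse 'name{labels} value' sample lines from Prometheus text format."""
    counters = {}
    for line in exposition.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE comment lines
            continue
        name, _, value = line.rpartition(" ")  # split off the trailing sample value
        counters[name] = float(value)
    return counters
```

For the example output above, `parse_counters(text)['prediction_requests_total{source="sync"}']` yields `142.0`.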

Validation error example

curl -X POST http://localhost:8000/predict/ \
  -H "Content-Type: application/json" \
  -d '{"match_id": "not-an-int"}'
{
  "detail": [
    {
      "loc": ["body", "match_id"],
      "msg": "value is not a valid integer",
      "type": "type_error.integer"
    }
  ]
}