API Reference (FastAPI)

This page documents the public HTTP API of the Time2Bet inference service. The API contract is the stable interface between clients (UI, batch jobs) and the model.

Source of truth for the complete schema is the generated OpenAPI spec. This page provides the human-oriented contract and examples.


Base URLs

  • Local: http://localhost:8000
  • Production demo: https://time2bet.ru

Authentication (if enabled)

  • If authentication is enabled, requests must include a token header (TBD).
  • Public demo endpoints may be read-only and rate-limited.

Endpoints

GET /healthcheck

Health endpoint for readiness/liveness checks.

Response

  • 200 OK when the service is healthy and ready
  • may include basic status fields (optional)
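For container deployments, this endpoint can back standard Kubernetes probes. A minimal sketch, assuming the default port and path above (adjust to your deployment):

```yaml
# Hypothetical Kubernetes probe configuration for the service container.
livenessProbe:
  httpGet:
    path: /healthcheck
    port: 8000
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /healthcheck
    port: 8000
  periodSeconds: 5
```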


GET /metrics

Prometheus metrics endpoint.

Response

  • 200 OK, plaintext Prometheus exposition format
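To collect these metrics, point a Prometheus scrape job at the service. A minimal sketch; the job name and target are placeholders:

```yaml
scrape_configs:
  - job_name: "time2bet-inference"
    metrics_path: /metrics
    static_configs:
      - targets: ["localhost:8000"]
```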


POST /predict

Synchronous inference.

Use cases

  • interactive requests
  • small batch predictions

Contract

  • request body is validated strictly via Pydantic
  • unknown fields are rejected (prevents silent contract drift)
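The strict-validation behavior can be sketched with a Pydantic model. The field names mirror the generic example request below, and `extra="forbid"` is what causes unknown fields to be rejected; the actual model lives in the service code and may differ:

```python
from datetime import datetime

from pydantic import BaseModel, ConfigDict, ValidationError


class PredictRequest(BaseModel):
    """Sketch of the /predict request schema; field names are illustrative."""

    # Reject unknown fields instead of silently dropping them.
    model_config = ConfigDict(extra="forbid")

    match_id: int
    home_team_id: int
    away_team_id: int
    match_datetime_utc: datetime


# A request with an unexpected field fails validation
# (FastAPI turns this into an HTTP 422 response).
try:
    PredictRequest(
        match_id=123,
        home_team_id=10,
        away_team_id=20,
        match_datetime_utc="2026-02-10T18:00:00Z",
        unexpected_field=1,
    )
except ValidationError as exc:
    print(exc.errors()[0]["type"])  # "extra_forbidden" in Pydantic v2
```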

Example request

```bash
curl -X POST http://localhost:8000/predict \
  -H "Content-Type: application/json" \
  -d '{
    "match_id": 123,
    "home_team_id": 10,
    "away_team_id": 20,
    "match_datetime_utc": "2026-02-10T18:00:00Z"
  }'
```

Example response

```json
{
  "prediction": {
    "home_win_proba": 0.45,
    "draw_proba": 0.28,
    "away_win_proba": 0.27
  },
  "model": {
    "model_uri": "models:/time2bet/Production",
    "model_version": "42"
  },
  "meta": {
    "request_id": "..."
  }
}
```

Replace fields with your actual schema. The example is intentionally generic.
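Client code can consume the response along these lines; the field names follow the generic example above and should be adapted to the actual schema:

```python
import json

# Example payload as returned by POST /predict (see above).
raw = """
{
  "prediction": {"home_win_proba": 0.45, "draw_proba": 0.28, "away_win_proba": 0.27},
  "model": {"model_uri": "models:/time2bet/Production", "model_version": "42"},
  "meta": {"request_id": "..."}
}
"""

body = json.loads(raw)
probs = body["prediction"]

# The three outcome probabilities should form a distribution.
assert abs(sum(probs.values()) - 1.0) < 1e-6

best_outcome = max(probs, key=probs.get)
print(best_outcome, probs[best_outcome])  # home_win_proba 0.45
```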


POST /predict/async

Asynchronous inference submission (if enabled).

Behavior

  • returns a job ID immediately
  • job is executed by Celery workers

Example response

```json
{
  "job_id": "abc123",
  "status": "queued"
}
```

GET /predict/async/{job_id}

Fetch async inference result (if enabled).

Response

  • 200 OK with result when ready
  • 202 Accepted when still processing (optional)
  • 404 Not Found when the job is unknown or has expired

Error handling

Validation errors (client-side)

  • 422 Unprocessable Entity on schema validation failures
  • response includes details about invalid fields
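FastAPI's default 422 body is a list of error objects under `detail`, each with `loc`, `msg`, and `type`. A client can summarize it like this (the example body is illustrative):

```python
import json

# Illustrative 422 response body in FastAPI's default validation-error shape.
raw = """
{
  "detail": [
    {"loc": ["body", "match_id"], "msg": "Input should be a valid integer", "type": "int_parsing"}
  ]
}
"""

errors = json.loads(raw)["detail"]
for err in errors:
    # Join the location path into a dotted field name for readable logs.
    field = ".".join(str(part) for part in err["loc"])
    print(f"{field}: {err['msg']}")  # body.match_id: Input should be a valid integer
```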

Server errors

  • 5xx indicates internal failure
  • errors are logged with request_id for correlation

Model traceability

Every prediction response should allow traceability to:

  • active model version,
  • model registry URI,
  • request_id (for logs),
  • optionally dataset version used during training (if exposed as metadata).
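Using the generic /predict response shape above, the traceability fields can be pulled into one structured log record. A sketch; the `request_id` value is illustrative:

```python
import json

# Response in the shape documented for POST /predict above.
response = {
    "prediction": {"home_win_proba": 0.45, "draw_proba": 0.28, "away_win_proba": 0.27},
    "model": {"model_uri": "models:/time2bet/Production", "model_version": "42"},
    "meta": {"request_id": "req-001"},  # illustrative request_id
}

# One log line carrying everything needed to trace the prediction back
# to a registered model version and to the request logs.
log_record = json.dumps({
    "event": "prediction_served",
    "request_id": response["meta"]["request_id"],
    "model_uri": response["model"]["model_uri"],
    "model_version": response["model"]["model_version"],
})
print(log_record)
```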

Related pages

  • Serving → Inference API Contract
  • ML → Model Contract
  • Monitoring → Metrics