# API & Serving Evidence
Proof that the inference API is live and responding correctly.
**Work in progress:** This page is a placeholder; live API screenshots and curl outputs will be added here.
## Live API Response
```bash
curl -X POST "http://api.time2bet.ru/v1/predict" \
  -H "Content-Type: application/json" \
  -d '{"home_team": "Arsenal", "away_team": "Chelsea",
       "match_date": "2025-05-10", "league": "premier_league"}'
```
Expected output (placeholder):

```json
{
  "match_id": "arsenal_chelsea_20250510",
  "probabilities": {"home_win": 0.42, "draw": 0.28, "away_win": 0.30},
  "predicted_outcome": "home_win",
  "model_version": "v2",
  "inference_latency_ms": 18
}
```
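A response like the one above can be sanity-checked programmatically. The sketch below assumes the field names shown in the placeholder (`probabilities`, `predicted_outcome`); it verifies that the three outcome probabilities form a distribution and that the predicted outcome is the most probable class.

```python
# Sanity checks for a /v1/predict response.
# Field names are taken from the placeholder payload above, not confirmed
# against the live API.
import math

def validate_prediction(resp: dict) -> bool:
    probs = resp["probabilities"]
    # Outcome probabilities should sum to 1 (a distribution over the results).
    if not math.isclose(sum(probs.values()), 1.0, abs_tol=1e-6):
        return False
    # The predicted outcome should be the most probable class.
    return resp["predicted_outcome"] == max(probs, key=probs.get)

sample = {
    "match_id": "arsenal_chelsea_20250510",
    "probabilities": {"home_win": 0.42, "draw": 0.28, "away_win": 0.30},
    "predicted_outcome": "home_win",
    "model_version": "v2",
    "inference_latency_ms": 18,
}
print(validate_prediction(sample))  # True
```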
## OpenAPI / Swagger UI
Screenshot placeholder — FastAPI Swagger UI at http://api.time2bet.ru/docs
## Health Check
```bash
curl http://api.time2bet.ru/health
# {"status": "ok"}

curl http://api.time2bet.ru/ready
# {"status": "ready", "model_version": "v2"}
```
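In deployment scripts, the `/ready` endpoint is typically polled until the model is loaded. A minimal polling helper is sketched below; the `check` callable is injected (in practice it would wrap a GET to `/ready`, e.g. via `urllib`), so the helper itself is testable without the live API. The helper name and parameters are illustrative.

```python
# Poll a readiness check until it succeeds or a deadline passes.
# `check` is any zero-argument callable returning True when the service
# reports ready; in practice it would issue a GET to /ready.
import time

def wait_until_ready(check, timeout_s: float = 30.0, interval_s: float = 1.0) -> bool:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    return False

# Example with a stub that becomes ready on the third attempt.
attempts = iter([False, False, True])
print(wait_until_ready(lambda: next(attempts), timeout_s=5, interval_s=0))  # True
```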
## Latency Measurements
Placeholder — k6 or locust load test output
| Percentile | Latency |
|---|---|
| p50 | ~18ms |
| p95 | ~45ms |
| p99 | ~120ms |
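For reference, percentiles like the ones in the table are derived from raw per-request latencies. The sketch below uses the nearest-rank method on an illustrative (not measured) sample; a real load test (k6, locust) reports these directly.

```python
# Nearest-rank percentile over raw latency samples.
# The latencies below are illustrative placeholders, not real measurements.
import math

def percentile(samples: list[float], p: float) -> float:
    """Return the value at rank ceil(p/100 * n) of the sorted samples."""
    ranked = sorted(samples)
    k = max(1, math.ceil(p / 100 * len(ranked)))
    return ranked[k - 1]

latencies_ms = [12, 15, 18, 18, 19, 22, 45, 47, 120, 130]
for p in (50, 95, 99):
    print(f"p{p}: {percentile(latencies_ms, p)}ms")
```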
See Request / Response Examples and Performance, Capacity & SLOs.