# AI-Assisted Development
This section documents how AI tooling is used as a disciplined engineering practice in SoccerPredictAI — not as ad-hoc autocomplete. It exists for two audiences:
- a reviewer who wants to see how the system was built, not just what was built;
- a future maintainer (myself or others) who needs the same workflow to remain reproducible.
Status: Operational. The customization layer, audit cycles, and iteration plans below are all wired and used in day-to-day work. See Implementation Status for the rest of the system.
## Why this section exists
A modern MLOps codebase is too large to hold in your head. GitHub Copilot is therefore used as a bounded, rule-driven collaborator:
- it operates inside explicit guardrails (boundaries, contracts, reproducibility rules) defined in `copilot-instructions.md` and the scoped files in `instructions/`;
- recurring multi-step tasks (audits, test-coverage analysis, model registration, release checklists) are encoded as skills and prompts so the procedure is the same every time;
- specialised work (code review, docs lookup) is delegated to subagents with narrow, read-only roles;
- every non-trivial change still goes through human review — the AI never bypasses Hydra, DVC, MLflow, tests, or documented contracts.
The principle: AI accelerates the typing; the engineering discipline still comes from the project rules.
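For context, scoped instruction files follow VS Code's `.instructions.md` convention: an `applyTo` glob in the frontmatter controls when the rules attach to a request. The glob and rule text below are purely illustrative, not the project's actual file:

```markdown
---
applyTo: "src/**/*.py"
---
- Read all runtime parameters from the Hydra config; never hard-code them.
- Do not import serving code from training modules, or vice versa.
- Any new function that touches data must have a corresponding test.
```

Because the glob is part of the file, a rule set like this activates only when the agent edits matching paths, which is what keeps the always-on file small.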
## How a request flows through the system
The four tiers are the same idea as IDE autocomplete vs. command palette vs. CI hooks, just applied to a chat agent. Full mechanics live in the human-facing guide `.github/AGENT_CUSTOMIZATION.md`.
## What's in this section
| Page | What it covers |
|---|---|
| Customization Layer | The contents of `.github/`: agents, instructions, prompts, skills, hooks. What each one is, when it activates, and how to invoke it. |
| Continuous System Audits | The `audit-system` skill and the `docs/validation/` artifacts it produces. How a system-wide health check is reduced to a reproducible procedure. |
| Iteration Plans | The `docs/planning/` artifacts: dated, phased plans (v1.0 demo track, test-coverage remediation) generated and tracked with AI assistance. |
## Key facts
| Aspect | Current state |
|---|---|
| Always-on rules | 1 file (`copilot-instructions.md`) |
| Scoped instruction files | 9 (Python, FastAPI, Airflow, MLflow, DVC, features, tests, docs, agent-customization) |
| Subagents | 2 (code reviewer, docs agent) |
| Prompts | 7 (add endpoint / pipeline stage / feature, register model, release checklist, debug test, sync docs) |
| Skills | 2 (`audit-system`, `plan-test-coverage`) |
| Hooks | 1 config (`pre-tool-checks.json`) |
| MCP servers wired | 2 (`awesome-copilot-main` reference catalogue, `soccer-docs` filesystem) |
| Audit cycles run | 3 (2026-04-24, 2026-04-26, 2026-04-28) |
| Iteration plans on file | 3 (v1.0, test-coverage v1, test-coverage v2) |
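To show what a prompt in the table above looks like in practice, here is a sketch in the VS Code `.prompt.md` format (frontmatter plus the task script, invoked as a slash command in chat). The filename, wording, and steps are hypothetical, not the repository's actual prompt:

```markdown
---
mode: agent
description: Register a trained model in the MLflow model registry
---
Given a finished training run:
1. Verify the run's metrics against the documented promotion thresholds.
2. Register the run's model artifact under the agreed registry name.
3. Report the new version and its stage; do not promote without explicit confirmation.
```

Encoding the procedure in a file means the same checklist runs on every invocation, instead of being retyped (and abbreviated) each time.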
## Hard constraints the AI must respect
These are enforced by the always-on rules and by every scoped instruction file:
- No bypassing Hydra, DVC, or MLflow.
- No coupling training to serving.
- No silent change of public behaviour.
- No new dependencies without justification.
- No opportunistic refactor outside task scope.
- No claiming planned design as implemented — every status claim must be backed by code.
These are the same rules a human contributor follows. The customization layer just makes them visible to the agent on every request.
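As a rough illustration of how such constraints can be stated in the always-on file, the fragment below restates the list above in instruction form. The wording is illustrative, not an excerpt from the repository's actual `copilot-instructions.md`:

```markdown
## Non-negotiable constraints
- Route all configuration through Hydra; version data with DVC; log runs to MLflow.
- Keep training and serving decoupled; never import one from the other.
- Flag any change to public behaviour explicitly; never change it silently.
- Propose and justify new dependencies before adding them.
- Touch only files within the stated task scope; no drive-by refactors.
- Label unimplemented design as planned; back every "done" claim with code.
```

Phrasing the rules as imperatives in one short file keeps them cheap to attach to every request and easy for a human reviewer to audit.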