Phase 1: Discovery (Start Now)
Step 1 — Build Your AI System Inventory
Catalog every AI system in your organization — including embedded vendor AI, third-party APIs, and internal tools. For each system, document the intended purpose, who it serves, what decisions it informs or makes, and its limitations.
Common mistake: Companies overlook AI embedded in SaaS products they use. If your HR platform uses AI to rank candidates, that AI system needs to be in your inventory.
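One lightweight way to keep the inventory consistent is a shared record schema. The sketch below is a minimal illustration; the field names (`source`, `affected_groups`, `decisions_informed`) are our own suggestion, not a format prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in the AI system inventory (field names are illustrative)."""
    name: str
    source: str                 # "internal", "vendor-embedded", or "third-party-api"
    intended_purpose: str
    affected_groups: list[str] = field(default_factory=list)
    decisions_informed: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)

# Example entry: the embedded vendor AI from the common mistake above
inventory = [
    AISystemRecord(
        name="HR candidate ranking (SaaS vendor)",
        source="vendor-embedded",
        intended_purpose="Rank job applicants for recruiter review",
        affected_groups=["EU job applicants"],
        decisions_informed=["interview shortlisting"],
        known_limitations=["vendor model; training data not disclosed"],
    ),
]
```

A structured record also makes the later steps easier: territorial scope (Step 2) and risk classification (Step 3) can be added as fields and filtered on.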
Step 2 — Determine Territorial Scope
For each AI system, ask: does this system's output affect anyone in the EU? This includes EU customers, EU employees, EU job applicants, or EU-based users of your product. If yes, the EU AI Act applies to your company for that system.
Step 3 — Classify Risk for Each System
Map each in-scope AI system to the EU AI Act's four risk tiers: unacceptable (prohibited), high-risk (Annex III), limited risk (transparency obligations), or minimal risk (no mandatory obligations). Document your classification rationale.
Start with your highest-impact AI systems. If you use AI for hiring, creditworthiness assessment, or life and health insurance pricing, those systems are almost certainly high-risk under Annex III. Classify those first, then work outward.
Phase 2: Gap Analysis (Q2 2026)
Step 4 — Assess Current Documentation
For each high-risk system, compare your existing documentation against the Annex IV technical documentation requirements. Common gaps include: missing data governance records, incomplete risk management documentation, and no formal human oversight procedures.
Step 5 — Evaluate Data Governance
Review training, validation, and testing datasets for relevance, representativeness, and error rates. Under Article 10, data governance is not optional — you need documented evidence that your data meets quality standards.
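A representativeness and error-rate review can start as a simple comparison of dataset group shares against known population shares. The thresholds below (`max_error_rate`, `max_share_gap`) are our own illustrative defaults; Article 10 does not prescribe numeric limits, so you would set and justify your own.

```python
def dataset_quality_report(labels: list[str], errors: int,
                           population_shares: dict[str, float],
                           max_error_rate: float = 0.01,
                           max_share_gap: float = 0.05) -> dict:
    """Flag labelling-error rate and group-share gaps in a dataset.
    Thresholds are illustrative assumptions, not values from the Act."""
    n = len(labels)
    report = {"error_rate_ok": errors / n <= max_error_rate, "gaps": {}}
    for group, expected in population_shares.items():
        observed = labels.count(group) / n
        report["gaps"][group] = {
            "observed": round(observed, 3),
            "expected": expected,
            "flagged": abs(observed - expected) > max_share_gap,
        }
    return report
```

The report itself, versioned alongside the dataset, is exactly the kind of documented evidence Article 10 expects you to be able to produce.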
Step 6 — Design Human Oversight Mechanisms
Article 14 requires that high-risk AI systems enable effective human oversight. This means real humans with the authority and competence to override, intervene, or shut down the AI system. Document who has this authority, how they exercise it, and what training they've received.
Phase 3: Implementation (Q2–Q3 2026)
Step 7 — Prepare Technical Documentation
Create the Annex IV documentation package for each high-risk system. This includes: system description and intended purpose, risk management results, data governance evidence, performance metrics, human oversight procedures, and cybersecurity measures.
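A completeness check over the package keeps gaps visible as documentation evolves. The section keys below mirror the list above but are our own shorthand, not an official Annex IV schema.

```python
# Illustrative checklist keyed to the Annex IV headings listed above.
ANNEX_IV_SECTIONS = [
    "system_description_and_intended_purpose",
    "risk_management_results",
    "data_governance_evidence",
    "performance_metrics",
    "human_oversight_procedures",
    "cybersecurity_measures",
]

def missing_sections(package: dict[str, str]) -> list[str]:
    """Return Annex IV sections that are absent or empty in a package."""
    return [s for s in ANNEX_IV_SECTIONS if not package.get(s, "").strip()]
```

Run against each high-risk system's package, this turns the gap analysis from Step 4 into a repeatable check rather than a one-off audit.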
Step 8 — Implement Risk Management System
Article 9 requires a continuous risk management system — not a one-time assessment. Implement processes for ongoing risk identification, estimation, evaluation, and mitigation throughout the AI system's lifecycle.
Step 9 — Conduct Conformity Assessment
For most Annex III systems, complete the internal conformity assessment under Annex VI, then issue your EU Declaration of Conformity and apply the CE marking. Biometric systems (Annex III, point 1) may instead require a notified-body assessment under Annex VII, depending on whether harmonised standards have been fully applied.
Step 10 — Register in the EU Database
Under Article 49, providers must register high-risk AI systems in the EU database (established by Article 71) before placing them on the market or putting them into service. The registration includes the system description, risk classification, contact details, and conformity assessment information.
Phase 4: Ongoing Compliance
Step 11 — Post-Market Monitoring
Article 72 requires providers to establish a post-market monitoring system proportionate to the AI system's risk. This includes collecting and analyzing data on system performance, incidents, and user feedback.
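In practice, collecting and analyzing performance data can start with a rolling-window monitor that flags degradation for review. The window size and alert threshold below are assumptions for illustration; Article 72 requires the monitoring plan to be proportionate to the system's risk, not these specific values.

```python
from collections import deque
from statistics import mean

class PerformanceMonitor:
    """Minimal post-market monitoring sketch: rolling accuracy with an
    alert threshold. Window and threshold are illustrative assumptions."""

    def __init__(self, window: int = 500, alert_below: float = 0.90):
        self.scores = deque(maxlen=window)
        self.alert_below = alert_below
        self.alerts: list[str] = []

    def record(self, correct: bool) -> None:
        """Log one prediction outcome from production."""
        self.scores.append(1.0 if correct else 0.0)

    def check(self) -> bool:
        """True if rolling accuracy has dropped below the alert threshold."""
        if len(self.scores) < 50:  # wait for enough data points
            return False
        degraded = mean(self.scores) < self.alert_below
        if degraded:
            self.alerts.append("rolling accuracy below threshold")
        return degraded
```

Alerts from a monitor like this feed directly into the incident procedures in Step 12: a flagged degradation triggers investigation, and the investigation record becomes part of your post-market monitoring evidence.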
Step 12 — Incident Reporting
Under Article 73, providers must report serious incidents to market surveillance authorities. Establish internal procedures for incident detection, investigation, and reporting.
Lexara Advisory provides a structured compliance assessment that maps your current status against every requirement. We identify gaps, prioritize actions, and deliver the documentation templates you need. Start your assessment.