
EU AI Act Compliance Checklist for US Companies

A practical, step-by-step roadmap for US companies preparing for EU AI Act compliance before the August 2, 2026 deadline. From AI inventory to conformity assessment — every action item in order.

By Lexara Advisory · 10 min read

Phase 1: Discovery (Start Now)

Step 1 — Build Your AI System Inventory

Catalog every AI system in your organization — including embedded vendor AI, third-party APIs, and internal tools. For each system, document the intended purpose, who it serves, what decisions it informs or makes, and its limitations.

Common mistake: Companies overlook AI embedded in SaaS products they use. If your HR platform uses AI to rank candidates, that AI system needs to be in your inventory.
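The inventory described above can be sketched as a simple structured record. The field names and the example entry below are illustrative, not mandated by the Act; adapt them to your own governance tooling.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in the AI system inventory (illustrative fields)."""
    name: str
    vendor: str                 # "internal" for in-house systems
    intended_purpose: str       # what the system is designed to do
    affected_groups: list[str]  # e.g. customers, employees, applicants
    decisions_informed: str     # decisions the output informs or makes
    known_limitations: str

# Hypothetical example: AI embedded in a third-party HR platform.
inventory = [
    AISystemRecord(
        name="Candidate ranking (HR platform)",
        vendor="ExampleHR Inc.",  # hypothetical vendor
        intended_purpose="Rank job applicants for recruiter review",
        affected_groups=["job applicants"],
        decisions_informed="Interview shortlisting",
        known_limitations="Trained on historical hiring data",
    ),
]
```

Even a spreadsheet with these columns works; the point is that every system, including embedded vendor AI, gets a row.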

Step 2 — Determine Territorial Scope

For each AI system, ask: does this system's output affect anyone in the EU? This includes EU customers, EU employees, EU job applicants, or EU-based users of your product. If yes, the EU AI Act applies to your company for that system.

Step 3 — Classify Risk for Each System

Map each in-scope AI system to the EU AI Act's four risk tiers: unacceptable (prohibited), high-risk (Annex III), limited risk (transparency obligations), or minimal risk (no mandatory obligations). Document your classification rationale.

Practical Tip

Start with your highest-impact AI systems. If you use AI for hiring, credit decisions, or insurance — those are almost certainly high-risk under Annex III. Classify those first, then work outward.
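A first-pass triage of the four risk tiers can be expressed as a lookup like the sketch below. The use-case labels are illustrative and the mapping is a simplification, not legal advice: a defensible classification still requires reading Annex III and the prohibited-practices list in Article 5.

```python
# Simplified triage across the Act's four risk tiers (illustrative only).
ANNEX_III_AREAS = {
    "hiring", "credit scoring", "insurance pricing",
    "education scoring", "biometric identification",
}
TRANSPARENCY_USES = {"chatbot", "content generation"}  # Article 50 duties

def triage_risk(use_case: str, is_prohibited: bool = False) -> str:
    """Return a first-pass risk tier for an in-scope AI system."""
    if is_prohibited:
        return "unacceptable"
    if use_case in ANNEX_III_AREAS:
        return "high-risk"
    if use_case in TRANSPARENCY_USES:
        return "limited"
    return "minimal"
```

Whatever tier the triage suggests, record the rationale alongside the classification, since that documentation is what a regulator will ask to see.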

Phase 2: Gap Analysis (Q2 2026)

Step 4 — Assess Current Documentation

For each high-risk system, compare your existing documentation against the Annex IV technical documentation requirements. Common gaps include: missing data governance records, incomplete risk management documentation, and no formal human oversight procedures.

Step 5 — Evaluate Data Governance

Review training, validation, and testing datasets for relevance, representativeness, and error rates. Under Article 10, data governance is not optional — you need documented evidence that your data meets quality standards.

Step 6 — Design Human Oversight Mechanisms

Article 14 requires that high-risk AI systems enable effective human oversight. This means real humans with the authority and competence to override, intervene, or shut down the AI system. Document who has this authority, how they exercise it, and what training they've received.

Phase 3: Implementation (Q2–Q3 2026)

Step 7 — Prepare Technical Documentation

Create the Annex IV documentation package for each high-risk system. This includes: system description and intended purpose, risk management results, data governance evidence, performance metrics, human oversight procedures, and cybersecurity measures.

Step 8 — Implement Risk Management System

Article 9 requires a continuous risk management system — not a one-time assessment. Implement processes for ongoing risk identification, estimation, evaluation, and mitigation throughout the AI system's lifecycle.

Step 9 — Conduct Conformity Assessment

For most Annex III systems, complete the internal self-assessment under Annex VI. Issue your Declaration of Conformity and apply CE marking. For biometric systems, engage a notified body.

Step 10 — Register in the EU Database

Under Article 71, high-risk AI systems must be registered in the EU database before deployment. The registration includes system description, risk classification, contact details, and conformity assessment results.

Phase 4: Ongoing Compliance

Step 11 — Post-Market Monitoring

Article 72 requires providers to establish a post-market monitoring system proportionate to the AI system's risk. This includes collecting and analyzing data on system performance, incidents, and user feedback.

Step 12 — Incident Reporting

Under Article 73, providers must report serious incidents to market surveillance authorities. Establish internal procedures for incident detection, investigation, and reporting.
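An internal incident procedure needs, at minimum, a record of when you became aware of the incident and the latest date a report is due. The sketch below assumes the general 15-day window in Article 73(2); the most severe incidents carry shorter deadlines, so verify the window against the current text before relying on it.

```python
from datetime import date, timedelta

# General reporting window per our reading of Article 73(2); severe
# incidents have shorter deadlines -- confirm against the regulation.
REPORTING_WINDOW_DAYS = 15

def reporting_deadline(aware_on: date,
                       window_days: int = REPORTING_WINDOW_DAYS) -> date:
    """Latest date to notify the market surveillance authority."""
    return aware_on + timedelta(days=window_days)
```

Tracking the deadline from the date of awareness, not the date of the incident itself, mirrors how the Act frames the obligation.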

Need Help With Your Checklist?

Lexara Advisory provides a structured compliance assessment that maps your current status against every requirement. We identify gaps, prioritize actions, and deliver the documentation templates you need. Start your assessment.

Frequently Asked Questions

How long does EU AI Act compliance take?
Timeline depends on the number of AI systems and their risk classification. For a company with 2-5 high-risk systems, expect 3-6 months for full compliance, including documentation, risk management, and conformity assessment. Starting now leaves sufficient time before the August 2, 2026 deadline.

What is the first step toward compliance?
The first step is building a complete AI system inventory — cataloging every AI system in your organization, including embedded vendor AI and third-party APIs. For each system, determine whether its outputs affect anyone in the EU, which triggers the Act's territorial scope.

Do US companies need an EU representative?
If you are a non-EU provider placing a high-risk AI system on the EU market, you must appoint an EU Authorized Representative before deployment. This representative acts as your regulatory contact point within the EU and must be formally mandated in writing.

What documentation do high-risk systems require?
High-risk AI systems require Annex IV technical documentation covering: system design and intended purpose, risk management evidence, data governance records, performance testing results, human oversight procedures, cybersecurity measures, and a Declaration of Conformity.

Need Help With EU AI Act Compliance?

Lexara Advisory provides scope assessments, risk classification, Annex IV documentation, and end-to-end compliance support for US companies facing the August 2026 deadline.

Contact Lexara Advisory →

Lexara Advisory LLC — AI governance consulting, not legal practice.
