What Makes an AI System High-Risk
The EU AI Act does not classify AI systems by their technical complexity. Classification depends on deployment context and impact on people's lives. A simple logistic regression model used to screen job applicants is high-risk. A sophisticated neural network playing chess is minimal risk.
Under Article 6(2), an AI system is high-risk if it falls within one of the use cases listed in Annex III. Article 6(3) adds a narrow carve-out: a system that performs only a limited procedural task and poses no significant risk of harm to health, safety, or fundamental rights can escape the classification, but the provider must document that assessment.
High-risk classification is based on what decisions your AI influences, not how complex the technology is. A spreadsheet formula that auto-rejects loan applications could qualify as high-risk.
The 8 Categories of Annex III
1. Biometrics
Remote biometric identification systems, biometric categorization based on sensitive attributes, and emotion recognition in workplaces or schools. Real-time biometric identification in public spaces is generally prohibited, with narrow law enforcement exceptions.
2. Critical Infrastructure
AI used as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity. This covers AI systems that could cause physical harm if they malfunction.
3. Education and Vocational Training
AI systems that determine access to education, evaluate learning outcomes, assess the appropriate level of education for an individual, or monitor students during tests. This includes AI-powered proctoring and grading systems.
4. Employment, Workers Management, and Access to Self-Employment
This is the category that catches most US companies. AI used for recruitment, screening, filtering, or evaluating candidates. AI that makes decisions about promotion, termination, task allocation, or performance monitoring. Any US company using AI-powered hiring tools that screen EU applicants is in scope.
5. Access to Essential Services
AI for evaluating creditworthiness, setting insurance premiums, risk assessment in life and health insurance, and evaluating eligibility for public benefits. US fintech companies with EU customers are directly affected.
6. Law Enforcement
AI used for individual risk assessments, polygraphs, evidence evaluation, crime prediction, and profiling during criminal investigations. Primarily affects government use, but private contractors supplying these systems to EU agencies are providers under the Act.
7. Migration, Asylum, and Border Control
AI used in immigration risk assessment, visa application processing, and border monitoring. Companies providing AI tools to EU immigration authorities must comply.
8. Administration of Justice and Democratic Processes
AI used to assist judicial authorities in researching and interpreting facts and law, and to apply the law to facts. Also covers AI used to influence election outcomes.
For US companies, categories 4 (Employment) and 5 (Essential Services) are the most common high-risk triggers. If you use AI to screen job applicants or assess creditworthiness for anyone in the EU, your system is likely high-risk under Annex III.
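The first-pass screen described above can be sketched as a simple lookup. This is an illustrative simplification, not a legal test: the category names are our own shorthand for the eight Annex III headings, and the Article 6(3) carve-out for narrow procedural tasks is deliberately ignored.

```python
# Shorthand labels for the eight Annex III categories (our naming, not the Act's).
ANNEX_III_CATEGORIES = {
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment",            # recruitment, screening, promotion, termination
    "essential_services",    # credit scoring, insurance pricing, public benefits
    "law_enforcement",
    "migration_border_control",
    "justice_democratic_processes",
}

def is_likely_high_risk(use_case: str, affects_eu_individuals: bool) -> bool:
    """Rough first-pass screen: an AI system falling into an Annex III
    use case and affecting people in the EU is presumptively high-risk.
    (Article 6(3) derogations for narrow procedural tasks are ignored.)"""
    return affects_eu_individuals and use_case in ANNEX_III_CATEGORIES

# A US hiring tool screening EU applicants lands in scope; a chess engine does not.
print(is_likely_high_risk("employment", affects_eu_individuals=True))    # True
print(is_likely_high_risk("chess_engine", affects_eu_individuals=True))  # False
```

Note that technical complexity appears nowhere in the check, mirroring the point above: the spreadsheet formula and the neural network are screened identically.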
Compliance Obligations for High-Risk Systems
Providers of high-risk AI systems must implement a comprehensive set of requirements before placing their system on the EU market or putting it into service:
- Risk Management System (Art. 9) — continuous identification, estimation, and evaluation of risks throughout the system's lifecycle
- Data Governance (Art. 10) — training, validation, and testing datasets must be relevant, representative, and as free of errors as possible
- Technical Documentation (Art. 11, Annex IV) — detailed documentation demonstrating compliance with all requirements
- Record-Keeping (Art. 12) — automatic logging of events relevant to risk identification
- Transparency (Art. 13) — sufficient information for deployers to understand the system's capabilities and limitations
- Human Oversight (Art. 14) — designed to enable effective human oversight during use
- Accuracy, Robustness, Cybersecurity (Art. 15) — appropriate levels of accuracy, robustness, and cybersecurity
- Conformity Assessment (Art. 43) — internal self-assessment (most Annex III systems) or third-party assessment (biometric ID)
- EU Database Registration (Art. 71) — registration in the public EU AI Act database before deployment
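Of these, record-keeping (Art. 12) is the obligation most directly expressed in code: the system must automatically log events relevant to identifying risks over its lifetime. A minimal append-only event log might look like the sketch below; the field names and file format are our own assumptions, not mandated by the Act.

```python
import datetime
import json

def log_event(logfile: str, event_type: str, details: dict) -> None:
    """Append a timestamped, machine-readable record of a decision-relevant
    event (e.g. each screening decision, each human override)."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event_type": event_type,
        "details": details,
    }
    # JSON Lines: one self-contained record per line, easy to audit later.
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage in a hiring tool:
log_event("audit.jsonl", "candidate_screened",
          {"model_version": "v2.1", "outcome": "rejected", "human_reviewed": False})
```

An append-only log of this shape also feeds the risk management system (Art. 9), which expects risks to be re-evaluated against real operating data.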
The Conformity Assessment Process
For most Annex III high-risk systems, providers follow an internal conformity assessment under Annex VI. This means the provider verifies its own compliance — no external auditor required. However, the documentation must be rigorous enough to withstand regulatory scrutiny.
The exception is biometric identification systems, which require a third-party conformity assessment by a notified body under Annex VII.
After completing the assessment, providers must issue a Declaration of Conformity and apply the CE marking before placing the system on the EU market.
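The routing rule described above reduces to a single branch. This is a simplification: under Article 43 the choice of procedure can also depend on whether harmonised standards were applied, so treat this as a first approximation only.

```python
def conformity_assessment_path(is_biometric_identification: bool) -> str:
    """Simplified routing: most Annex III systems self-assess under Annex VI;
    biometric identification systems generally need a notified body (Annex VII)."""
    if is_biometric_identification:
        return "Annex VII: third-party assessment by a notified body"
    return "Annex VI: internal control (provider self-assessment)"
```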
We prepare Annex IV technical documentation packages and guide US companies through the conformity assessment process. Our documentation templates are designed for internal self-assessment under Annex VI. Contact us for a classification review.