
High-Risk AI Systems Under the EU AI Act

Annex III of the EU AI Act defines eight categories of high-risk AI systems — from hiring algorithms to credit scoring. If your AI system falls into any of these categories, you face the heaviest compliance obligations before August 2, 2026.

By Lexara Advisory
EU AI Act Compliance Guide

What Makes an AI System High-Risk

The EU AI Act does not classify AI systems by their technical complexity. Classification depends on deployment context and impact on people's lives. A simple logistic regression model used to screen job applicants is high-risk. A sophisticated neural network playing chess is minimal risk.

Under Article 6(2), an AI system is high-risk if it falls under one of the use cases listed in Annex III. Article 6(3) provides a narrow derogation: a system in an Annex III area is not high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, and the provider documents that assessment.

Key Principle

High-risk classification is based on what decisions your AI influences, not how complex the technology is. Even a simple statistical scoring model that auto-rejects loan applications could qualify as high-risk.

The 8 Categories of Annex III

1. Biometrics

Remote biometric identification systems, biometric categorization based on sensitive attributes, and emotion recognition in workplaces or schools. Real-time biometric identification in public spaces is generally prohibited, with narrow law enforcement exceptions.

2. Critical Infrastructure

AI used as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity. This covers AI systems that could cause physical harm if they malfunction.

3. Education and Vocational Training

AI systems that determine access to education, evaluate learning outcomes, assess the appropriate level of education for an individual, or monitor students during tests. This includes AI-powered proctoring and grading systems.

4. Employment, Workers' Management, and Access to Self-Employment

This is the category that catches most US companies. AI used for recruitment, screening, filtering, or evaluating candidates. AI that makes decisions about promotion, termination, task allocation, or performance monitoring. Any US company using AI-powered hiring tools that screen EU applicants is in scope.

5. Access to Essential Services

AI for evaluating creditworthiness, setting insurance premiums, risk assessment in life and health insurance, and evaluating eligibility for public benefits. US fintech companies with EU customers are directly affected.

6. Law Enforcement

AI used for individual risk assessments, polygraphs, evidence evaluation, crime prediction, and profiling during criminal investigations. Primarily affects government use, but private contractors supplying these systems to EU agencies are providers under the Act.

7. Migration, Asylum, and Border Control

AI used in immigration risk assessment, visa application processing, and border monitoring. Companies providing AI tools to EU immigration authorities must comply.

8. Administration of Justice and Democratic Processes

AI used to assist judicial authorities in researching and interpreting facts and law, and to apply the law to facts. Also covers AI used to influence election outcomes.

US Companies: Most Common Triggers

For US companies, categories 4 (Employment) and 5 (Essential Services) are the most common high-risk triggers. If you use AI to screen job applicants or assess creditworthiness for anyone in the EU, your system is likely high-risk under Annex III.
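The screening logic described above can be sketched in code. This is an illustrative first-pass helper only, not a legal test: the category keys and return strings are our own labels, and the `significant_risk` flag is a simplification of the Article 6(3) derogation, which requires a documented case-by-case assessment.

```python
# Hedged sketch: a first-pass Annex III screening helper.
# Category names and return messages are illustrative assumptions,
# not the Act's official wording.

ANNEX_III_CATEGORIES = {
    "biometrics": "Remote identification, categorisation, emotion recognition",
    "critical_infrastructure": "Safety components for utilities, traffic, digital infrastructure",
    "education": "Access to education, grading, exam proctoring",
    "employment": "Recruitment, screening, promotion, termination, monitoring",
    "essential_services": "Credit scoring, insurance risk, public benefits eligibility",
    "law_enforcement": "Risk assessment, evidence evaluation, profiling",
    "migration": "Visa processing, immigration risk assessment, border monitoring",
    "justice_democracy": "Judicial research and interpretation, election influence",
}

def screen_system(use_case: str, significant_risk: bool = True) -> str:
    """Return a rough classification label for a use case.

    significant_risk=False models the Article 6(3) derogation: a system
    in an Annex III area that does not pose significant risk of harm is
    not high-risk, provided the assessment is documented.
    """
    if use_case not in ANNEX_III_CATEGORIES:
        return "not listed in Annex III -- assess under other AI Act provisions"
    if not significant_risk:
        return "Annex III area, but Article 6(3) derogation may apply -- document the assessment"
    return "high-risk -- full compliance obligations apply before market placement"
```

A hiring tool screening EU applicants would hit the `"employment"` branch and come back high-risk, which matches the most common trigger for US companies described above.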

Compliance Obligations for High-Risk Systems

Providers of high-risk AI systems must implement a comprehensive set of requirements before placing their system on the EU market or putting it into service:

- A risk management system maintained across the system's lifecycle (Article 9)
- Data governance and quality criteria for training, validation, and testing data (Article 10)
- Technical documentation meeting Annex IV (Article 11)
- Automatic logging and record-keeping (Article 12)
- Transparency and instructions for use for deployers (Article 13)
- Effective human oversight measures (Article 14)
- Appropriate accuracy, robustness, and cybersecurity (Article 15)

The Conformity Assessment Process

For most Annex III high-risk systems, providers follow an internal conformity assessment under Annex VI. This means the provider verifies its own compliance — no external auditor required. However, the documentation must be rigorous enough to withstand regulatory scrutiny.

The exception is biometric systems (Annex III, point 1): where harmonised standards do not exist or the provider has not applied them in full, a third-party conformity assessment by a notified body under Annex VII is required.

After completing the assessment, providers must issue a Declaration of Conformity and apply the CE marking before placing the system on the EU market.

Lexara Advisory Can Help

We prepare Annex IV technical documentation packages and guide US companies through the conformity assessment process. Our documentation templates are designed for internal self-assessment under Annex VI. Contact us for a classification review.

Frequently Asked Questions

How do I know if my AI system is high-risk?

Your AI system is high-risk if it falls into one of the eight use-case categories listed in Annex III of the EU AI Act — including employment screening, credit scoring, biometric identification, and critical infrastructure management. The classification depends on the deployment context and impact on people's fundamental rights, not the technical complexity of the AI.

How does the conformity assessment work?

For most Annex III high-risk systems, providers conduct an internal self-assessment under Annex VI, verifying compliance with all requirements in Articles 9-15. Biometric identification systems may require a third-party assessment by a notified body. After passing, providers issue a Declaration of Conformity and apply CE marking.

Do US companies need to classify their AI systems?

Yes. Any US company whose AI system produces outputs used in the EU must classify that system under the EU AI Act risk framework. If the system falls into an Annex III high-risk category — such as AI used for hiring EU-based candidates or assessing credit for EU customers — full compliance obligations apply by August 2, 2026.

What are the penalties for non-compliance?

Non-compliance with high-risk AI system requirements can result in fines up to €15 million or 3% of global annual turnover, whichever is higher. Additionally, non-compliant systems can be withdrawn from the EU market, and civil claims from affected individuals are possible.

Need Help With EU AI Act Compliance?

Lexara Advisory provides scope assessments, risk classification, Annex IV documentation, and end-to-end compliance support for US companies facing the August 2026 deadline.

Contact Lexara Advisory →

Lexara Advisory LLC — AI governance consulting, not legal practice.
