You have a system in production or about to launch, and someone on the business side asked whether it is high-risk under the EU AI Act. You have probably already skimmed Article 6. You already know Annex III is where the categories live. The harder question, and the one that will end up in your memo, is whether the facts of your specific system fit inside one of those categories, and if so, whether Article 6(3) lets you out.
This is a working note on how I have been reading Article 6 and Annex III in light of the Commission guidelines published through early 2026. It is not legal advice. Flag uncertainty in your own memo where it exists, because there is still plenty.
What Article 6 actually says
Article 6 is the classification rule. It has two routes into the high-risk bucket.
Article 6(1) covers AI systems that are (a) intended to be used as a safety component of a product, or are themselves a product, covered by the Union harmonisation legislation listed in Annex I, and (b) required under that legislation to undergo a third-party conformity assessment. This is the "product safety" route. Medical devices under the MDR, machinery under the Machinery Regulation, toys, lifts, radio equipment — if your AI is a safety component of one of those products and the underlying product already needs a notified body assessment, you are in.
Article 6(2) is the one most in-house teams care about. It says an AI system referred to in Annex III is considered high-risk. That is the whole rule, subject only to Article 6(3).
Annex III is a list of eight use-case areas. Being in the list does not mean you are always high-risk — it means you are presumptively high-risk, and you have to run the Article 6(3) analysis to see if you can rebut the presumption.
The structure of Annex III
Annex III lists eight areas, each with one or more specific sub-items. The eight areas are:
- Biometrics — remote biometric identification systems; biometric categorisation based on sensitive attributes; emotion recognition. Note that real-time remote biometric identification in publicly accessible spaces for law enforcement is mostly prohibited under Article 5(1)(h), not merely high-risk.
- Critical infrastructure — AI used as a safety component in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity.
- Education and vocational training — access and admission decisions, evaluation of learning outcomes, assessment of the appropriate level of education, and monitoring prohibited behaviour during tests.
- Employment, workers' management, and access to self-employment — recruitment and selection (screening CVs, evaluating candidates), and decisions affecting terms of work, promotion, termination, task allocation, and monitoring/evaluation of performance.
- Access to and enjoyment of essential private services and essential public services and benefits — eligibility for public benefits, creditworthiness evaluation and credit scoring (with a carve-out for fraud detection), risk assessment and pricing in life and health insurance, and dispatching emergency first-response services.
- Law enforcement — individual risk assessment, polygraph-type tools, evaluation of evidence reliability, profiling in the course of detection/investigation/prosecution, and assessment of personality traits for predicting offences.
- Migration, asylum, and border control management — polygraph-type tools, risk assessment of persons, examination of applications, and detection/recognition/identification of persons in migration contexts.
- Administration of justice and democratic processes — AI intended to assist a judicial authority in researching and interpreting facts and law and in applying the law, and AI intended to influence the outcome of elections or referendums or the voting behaviour of natural persons.
Read the specific sub-items. The area headings are broad; the actual triggers are narrower. "Employment" is not a category by itself — recruiting screening tools and performance evaluation systems are. A chatbot that helps HR answer benefits questions is not automatically in Annex III just because HR uses it.
The Article 6(3) exception
Article 6(3) is where most of the real classification work happens. It provides that an AI system referred to in Annex III is not considered high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making.
The article lists four conditions; fulfilling any one of them means the system is deemed not to pose a significant risk:
- (a) the AI system is intended to perform a narrow procedural task;
- (b) the AI system is intended to improve the result of a previously completed human activity;
- (c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review; or
- (d) the AI system is intended to perform a preparatory task to an assessment relevant for the purposes listed in Annex III.
Two important limits. First, Article 6(3) does not apply at all if the AI performs profiling of natural persons — that is a hard carve-out in the last subparagraph of Article 6(3). A tool that otherwise looks like a "preparatory task" but profiles candidates is still high-risk. Second, the provider bears the burden. Under Article 6(4), a provider who believes its Annex III system is not high-risk must document the assessment before placing it on the market, register the system in the EU database under Article 49(2), and make the assessment available to authorities on request.
The Commission's February 2026 guidelines on Article 6(3) tightened the reading of "preparatory task." Summarising documents for a human reviewer who will make the actual decision generally fits. Generating a draft recommendation that the human is practically likely to rubber-stamp generally does not, because it materially influences the decision even if a human signs off.
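Article 6(4) does not prescribe a format for the documented assessment, but keeping it structured makes the Article 49(2) registration and any later authority request easier to handle. Below is a minimal sketch of what such a record might capture, with hypothetical field names; nothing here is a mandated template.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional


@dataclass
class Article6_3Assessment:
    """Hypothetical record of an Article 6(3) non-high-risk assessment.

    Field names are illustrative; the AI Act does not prescribe a format.
    """
    system_name: str
    annex_iii_sub_item: str           # e.g. "Annex III point 4(a): recruitment screening"
    performs_profiling: bool          # if True, Article 6(3) is unavailable
    condition_claimed: Optional[str]  # "narrow_procedural_task", "improves_prior_human_activity",
                                      # "pattern_detection_with_human_review", "preparatory_task", or None
    reasoning: str                    # why the system does not materially influence the outcome
    assessed_on: date = field(default_factory=date.today)
    registered_under_art_49_2: bool = False

    def is_claim_coherent(self) -> bool:
        # A system that performs profiling of natural persons cannot rely on Article 6(3) at all.
        return self.condition_claimed is not None and not self.performs_profiling
```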
A working decision tree
Copy this into your memo or your classification template and run every candidate system through it. A minimal code sketch of the same flow appears after the list.
- Is the system an AI system as defined in Article 3(1)? If no, the classification question is moot: the AI Act's rules for AI systems do not apply to it. Stop.
- Is it prohibited under Article 5? Social scoring, certain biometric categorisation, untargeted scraping of facial images, emotion recognition in the workplace and in education, and real-time remote biometric identification in publicly accessible spaces for law enforcement (prohibited subject to narrow exceptions) are all in Article 5. If yes, you are not classifying; you are redesigning or killing the feature.
- Does Article 6(1) apply? Is the AI a safety component of, or itself, a product covered by Annex I Union harmonisation law requiring third-party conformity assessment? If yes, high-risk. Proceed to Chapter III obligations.
- Does the system fall within one of the eight Annex III areas at the sub-item level? Read the sub-items, not the headings. If no, not high-risk under Article 6. Confirm transparency obligations under Article 50 and general-purpose AI obligations under Chapter V if applicable.
- If yes to Annex III, does the system perform profiling of natural persons? If yes, Article 6(3) is unavailable. The system is high-risk.
- If no profiling, does the system fit one of the four Article 6(3) conditions (narrow procedural task, improving prior human output, pattern detection with proper human review, preparatory task)? Be honest. If the system materially influences the outcome, it does not fit.
- If you claim Article 6(3), have you documented the assessment and registered under Article 49(2)? Both are required before placing on the market.
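The same tree, sketched as a single function. It is an illustration of the ordering above, not a substitute for the legal analysis: every input is an answer your memo has to establish, and the names are assumptions of this sketch.

```python
from enum import Enum


class Classification(Enum):
    NOT_AI_SYSTEM = "not an AI system under Article 3(1)"
    PROHIBITED = "prohibited practice under Article 5"
    HIGH_RISK = "high-risk under Article 6"
    NOT_HIGH_RISK = "not high-risk (check Article 50 and Chapter V obligations)"
    EXEMPT_6_3 = "Annex III but exempt under Article 6(3); document and register per Article 49(2)"


def classify(
    is_ai_system: bool,                 # Article 3(1) definition met?
    is_prohibited: bool,                # any Article 5 prohibited practice?
    annex_i_safety_component: bool,     # Article 6(1): safety component of / product under Annex I law
    requires_third_party_assessment: bool,
    annex_iii_sub_item_matched: bool,   # matched at sub-item level, not at area-heading level
    performs_profiling: bool,
    meets_6_3_condition: bool,          # one of the four Article 6(3) conditions, honestly assessed
) -> Classification:
    """Encodes only the ordering of the decision tree; the hard work is in the inputs."""
    if not is_ai_system:
        return Classification.NOT_AI_SYSTEM
    if is_prohibited:
        return Classification.PROHIBITED
    if annex_i_safety_component and requires_third_party_assessment:
        return Classification.HIGH_RISK              # Article 6(1) product-safety route
    if not annex_iii_sub_item_matched:
        return Classification.NOT_HIGH_RISK
    if performs_profiling:
        return Classification.HIGH_RISK              # Article 6(3) profiling carve-out
    if meets_6_3_condition:
        return Classification.EXEMPT_6_3             # still subject to Article 6(4) documentation
    return Classification.HIGH_RISK
```

Treat a result of EXEMPT_6_3 as a to-do list, not an exit: the documentation and registration steps in the last bullet still apply.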
Where uncertainty still lives
Three areas I would flag explicitly in any memo. First, "materially influences the outcome" under Article 6(3) is still under-litigated, and the Commission guidelines use examples rather than a bright-line test; err toward high-risk if the human reviewer will in practice defer to the model. Second, the Annex III employment sub-items use "used to make decisions" and "used for monitoring and evaluating"; tools that produce signals a human then incorporates into a broader decision sit in a gray zone, and national supervisory authorities may read the scope differently in the early years. Third, a general-purpose AI model integrated into an Annex III use case triggers both the high-risk obligations on the downstream provider and deployer of the system and the GPAI obligations on the upstream model provider under Chapter V; the two regimes overlap, and the allocation of responsibility is still being worked out in contract.
For a concise daily read on AI Act enforcement, guideline updates, and how in-house teams are actually classifying systems, subscribe to AI Governance Daily. Citation-first, no hype, written for practitioners.