Article 22 GDPR has sat quietly in the back of most privacy programs for years. It was drafted before the current generation of ML-driven scoring tools, and for a long time its scope felt narrow enough to manage with a carve-out and a consent flow.

That is no longer a defensible posture. Between the CJEU's SCHUFA Holding judgment (Case C-634/21), the EU AI Act's high-risk classifications now in force, and regulator-led enforcement actions, Article 22 has become one of the more consequential provisions in the GDPR for any organisation deploying AI that affects individuals. This is a working note for in-house counsel, DPOs, and compliance leads. I'll flag uncertainty where it exists — there is still a fair amount.

What Article 22 actually says

Article 22(1) provides that a data subject "shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."

For the prohibition to apply, three conditions must be satisfied cumulatively (a minimal sketch of the combined test follows the list):

  1. A decision — there must be an actual decision, not merely a recommendation or an intermediate output.
  2. Based solely on automated processing — no meaningful human involvement in the outcome. The Article 29 Working Party's Guidelines on Automated Individual Decision-Making (WP251rev.01), since endorsed by the EDPB, make clear that token human review — a rubber-stamp approval — does not remove a decision from Article 22.
  3. Producing legal effects or similarly significant effects — refusal of credit, refusal of employment, termination, denial of insurance, access to essential services.
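
To make the cumulative test concrete, here is a minimal sketch in Python. The class and field names are illustrative assumptions, and setting each flag correctly is itself a legal judgment, not a technical one.

    from dataclasses import dataclass

    @dataclass
    class ProcessingActivity:
        # Illustrative flags; each one conceals a case-by-case legal assessment.
        is_a_decision: bool            # an actual decision, not a mere recommendation
        meaningful_human_review: bool  # more than a rubber-stamp approval
        significant_effect: bool       # legal effects or similarly significant effects

    def article_22_applies(activity: ProcessingActivity) -> bool:
        # All three conditions must hold at once for the Article 22(1)
        # prohibition to be engaged.
        return (
            activity.is_a_decision
            and not activity.meaningful_human_review
            and activity.significant_effect
        )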

It is important to read Article 22(1) as a default prohibition with exemptions, not as a right the data subject must actively invoke. The CJEU confirmed this framing in SCHUFA. If your processing falls within Article 22(1), you need an exemption under Article 22(2). Otherwise the processing is unlawful.

The three exemptions and their limits

Article 22(2) permits the processing where it is:

  (a) necessary for entering into, or the performance of, a contract between the data subject and the controller;
  (b) authorised by Union or Member State law to which the controller is subject, and which lays down suitable measures to safeguard the data subject's rights, freedoms, and legitimate interests; or
  (c) based on the data subject's explicit consent.

Each exemption has been narrowed in practice.

On necessity, "necessary" means strictly necessary — not convenient, not efficient. If a human decision-making process would achieve the same contractual purpose, the automated one is unlikely to qualify. The French CNIL has taken this view consistently, and the EDPB guidance is aligned.

On Member State authorisation, the authorising law must itself contain safeguards. A generic competence provision is not enough. This is where SCHUFA ultimately turned.

On explicit consent, all the usual GDPR consent constraints apply — freely given, specific, informed, unambiguous, and revocable. Consent obtained as a precondition of essential services is not freely given, per Article 7(4). In credit and hiring contexts, consent is almost never a robust basis.

Even where an exemption applies, Article 22(3) requires "at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision." This is not optional. It is a hard-coded procedural right that runs alongside the substantive permission.
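
The three procedural rights map naturally onto a record that a contestation workflow has to carry. A minimal sketch, with hypothetical field names, assuming Python 3.10+:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Article22Request:
        # One data subject's exercise of the Article 22(3) rights.
        subject_id: str
        decision_id: str
        human_intervention_requested: bool  # right to human intervention
        point_of_view: str | None = None    # right to express a point of view
        contest_grounds: str | None = None  # right to contest the decision
        received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
        reviewer: str | None = None         # must have authority to overturn
        outcome: str | None = None          # e.g. "upheld", "overturned", "amended"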

How SCHUFA changed the analysis

In SCHUFA Holding AG (Case C-634/21, 7 December 2023), the CJEU addressed a credit-scoring practice in which SCHUFA — a German credit bureau — generated probability values indicating the likelihood that an individual would repay a loan. The scores were provided to banks, which then made lending decisions.

SCHUFA's position had been that it did not make the decision — the bank did. SCHUFA merely provided a score. Under that reading, Article 22 would not apply to SCHUFA at all.

The CJEU rejected that framing. The court held that where a third-party score is a determining factor in the downstream decision — meaning the bank relies on it substantially in granting or refusing credit — the scoring itself constitutes a "decision" for Article 22 purposes. The controller producing the score is within scope, even though a different entity executes the customer-facing outcome.

This is the "substantial effect" or "determining factor" test, and it matters enormously for the AI supply chain. A vendor that scores, ranks, flags, or classifies individuals for downstream human or automated use by a customer cannot assume Article 22 stops at the customer's door. If the score drives the decision, the scorer is a controller subject to Article 22, with the obligations that follow.

SCHUFA also cast serious doubt on Germany's attempt to authorise the scoring under the Article 22(2)(b) Member State exemption. The court indicated, subject to verification by the referring court, that §31 of the Bundesdatenschutzgesetz did not lay down "suitable measures" with the specificity the GDPR requires. National provisions that simply name a practice and leave safeguards to general law will likely fail the same test.

Uncertainty flag: the "determining factor" threshold is not quantified. We do not yet have case law telling us whether an 80% weighting on a model output crosses the threshold, or a 40% weighting. Assume for planning purposes that any output routinely adopted without independent human reasoning is within scope.

Overlap with the EU AI Act

Many systems that trigger Article 22 will also be classified as high-risk under Annex III of the EU AI Act — creditworthiness evaluation and credit scoring (with an exception for financial-fraud detection), employment screening and selection, education admissions, and access to essential public services and benefits all appear there.

Where both regimes apply, the obligations layer. GDPR governs the processing of personal data. The AI Act governs the lifecycle of the AI system — risk management, data governance, technical documentation, logging, human oversight, post-market monitoring, and conformity assessment.

Article 26 of the AI Act imposes deployer obligations: assigning human oversight to qualified personnel, ensuring input data is relevant and sufficiently representative, monitoring operation, and suspending use where serious risks emerge. For in-house counsel, this matters because the meaningful human intervention required under Article 22(3) GDPR and the human oversight required under Articles 14 and 26 of the AI Act are now being read together. A program that does one without the other will not satisfy either.

DPIAs under Article 35 GDPR and fundamental-rights impact assessments (FRIAs) under Article 27 AI Act should be run in tandem, not sequentially. Much of the evidence base overlaps.

Practical compliance steps

For programs I have seen work, the core components are:

  1. Inventory. Identify every system that produces an output affecting individuals — scoring, ranking, classification, or recommendations feeding downstream decisions. Include vendor-provided scores and any internal models. Capture whether the output is a "determining factor" under the SCHUFA test (a sketch of such a record follows this list).
  2. DPIA triggers. Any system meeting the Article 22 threshold, or appearing in Annex III of the AI Act, should trigger a DPIA automatically. Do not rely on discretionary review.
  3. Meaningful human review. Define what "meaningful" looks like in writing. The reviewer must have authority to overturn the decision, access to the underlying reasoning (including the relevant features that drove the output), time allocated to conduct the review, and training to evaluate the output. Document this, because regulators will ask.
  4. Contestation mechanism. Article 22(3) requires that the data subject be able to contest the decision. Build a documented path — who receives the contest, what service level applies, what evidence the individual can submit, how the review is recorded. A generic "contact us" inbox is not sufficient.
  5. Explainability obligations. Articles 13(2)(f), 14(2)(g), and 15(1)(h) require that data subjects receive "meaningful information about the logic involved" as well as the significance and envisaged consequences. Generic model-card language will not satisfy this where a specific adverse decision is concerned. Plan for case-level explanations.
  6. Vendor contracts. Where you purchase scoring or classification, flow down SCHUFA-relevant obligations explicitly: controller/processor analysis, documentation rights, human-review support, contestation cooperation, and notification of material model changes.
  7. Consent architecture (where used). If you rely on Article 22(2)(c), ensure consent is genuinely optional, with an equivalent non-automated path offered at comparable cost and timing. Where no such path exists, consent will not stand.
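
To make steps 1 and 2 concrete, here is a minimal sketch of an inventory record with the DPIA trigger wired in. The schema is an illustrative assumption, not a prescribed format, and again assumes Python 3.10+:

    from dataclasses import dataclass

    @dataclass
    class SystemRecord:
        # One entry in the automated-decisioning inventory (step 1).
        name: str
        vendor: str | None        # None for internal models
        output_type: str          # e.g. "score", "ranking", "classification"
        determining_factor: bool  # SCHUFA test: routinely adopted without
                                  # independent human reasoning?
        meets_article_22_threshold: bool
        annex_iii_high_risk: bool # AI Act Annex III classification

        @property
        def dpia_required(self) -> bool:
            # Step 2: the trigger is automatic, never discretionary.
            return self.meets_article_22_threshold or self.annex_iii_high_risk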

Where the risk concentrates

In the programs I work with, enforcement and litigation exposure concentrates in four areas: credit and lending scoring, employment screening and promotion tooling, insurance underwriting, and access to essential services (including public benefits and healthcare triage). In each, the combination of significant effect, automated processing, and imperfect human review creates Article 22 exposure regardless of whether the vendor or the customer operates the model.

The direction of travel is clear. Regulators are reading Article 22 more broadly post-SCHUFA, the AI Act is filling in the operational backbone, and data subjects are increasingly willing to litigate. The defensible position is to treat Article 22 as a structural design constraint on any automated decisioning program, not as a disclosure line in a privacy notice.

AI Governance Daily sends one concise brief each weekday on AI regulation, enforcement, and compliance — written for in-house counsel, DPOs, and compliance leads. Subscribe free.