Article 5 and the EU AI Act’s Absolute Red Lines – FireTail Blog

Apr 20, 2026 – Alan Fagan – Most conversations about the EU AI Act focus on August 2026, when obligations for high-risk AI systems become fully enforceable. But Article 5 is already live. The Act’s eight prohibited practices became enforceable in February 2025. Fines of up to €35 million or 7% of global annual turnover apply now. And the infrastructure to act on violations is in place.
For AI providers operating in or serving the EU market, understanding Article 5 is critical.
The EU AI Act takes a risk-based approach to AI governance. The prohibited practices in Article 5 represent the EU's judgement that certain applications of AI are incompatible with fundamental rights and democratic values, and the European Commission reinforced that position in the guidelines it published on 4 February 2025, two days after the prohibitions took effect.
The guidelines break each prohibition into cumulative conditions and provide practical examples of what falls in scope and what does not. They are the clearest signal available of how regulators will interpret borderline cases.
The penalty structure reflects the seriousness with which the EU treats these provisions. At up to €35 million or 7% of global annual turnover, violations of Article 5 carry steeper fines than any other category of non-compliance in the Act.
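As a rough illustration (not legal advice), the Article 5 penalty ceiling works on a "whichever is higher" basis under Article 99(3) of the Act: the cap is €35 million or 7% of total worldwide annual turnover, whichever is greater. A minimal sketch of that calculation, with hypothetical turnover figures:

```python
def article5_max_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound of an Article 5 fine under Article 99(3) of the EU AI Act:
    EUR 35 million or 7% of total worldwide annual turnover for the preceding
    financial year, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Hypothetical company with EUR 1 billion turnover: 7% (EUR 70M) exceeds the flat cap.
print(article5_max_fine(1_000_000_000))  # 70000000.0

# Hypothetical company with EUR 100 million turnover: 7% (EUR 7M) is below EUR 35M,
# so the flat EUR 35M ceiling governs.
print(article5_max_fine(100_000_000))    # 35000000.0
```

For large providers, the turnover-based figure is the binding one, which is why the 7% number dominates discussion of Article 5 exposure.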
The Eight Prohibitions
1. Subliminal and Manipulative Techniques
AI systems that deploy techniques operating below conscious awareness, or that exploit psychological vulnerabilities, biases, or weaknesses in decision-making to distort behaviour and cause significant harm, are banned.
The prohibition is targeted at systems designed to circumvent rational agency. It does not cover normal personalisation, recommendation engines, or advertising that simply presents persuasive content. The key conditions are that the technique must be subliminal or manipulative, and that it must cause or be reasonably likely to cause significant harm.
In practice, the compliance question for providers is whether their optimisation objectives could drive the system toward manipulative behaviour as a side effect. A recommender system trained purely on engagement maximisation can, over time, evolve into something that exploits psychological patterns in ways that meet the prohibition’s conditions.
2. Exploiting Vulnerabilities
AI systems that exploit vulnerabilities arising from a person’s age, disability, or socioeconomic circumstances to distort behaviour in ways that cause harm are banned.
The practical example that clarifies this prohibition is an AI advertising tool that identifies users showing signs of financial hardship, through search behaviour, location data, or device signals, and targets them with offers specifically designed to exploit that vulnerability. The Commission’s guidelines explicitly name this kind of system as a violation.
This prohibition has direct implications for any AI system operating in consumer finance, healthcare, or social services, where users may be in vulnerable circumstances by definition. The question is not whether the system serves those users, but whether it is designed to exploit their circumstances rather than serve their interests.
3. Social Scoring
General-purpose social scoring of individuals or groups based on social behaviour or personal characteristics, leading to detrimental treatment in contexts unrelated to where the data was collected, is banned. The prohibition applies to public and private actors alike.
This is the provision most directly aimed at preventing the kind of surveillance infrastructure that has emerged in certain authoritarian contexts. Beyond government scoring schemes, it also catches systems that aggregate data across domains in ways that create de facto social profiles affecting access to services, employment, or civic participation.
4. Predictive Policing Based on Profiling
AI systems that assess the likelihood of an individual committing a criminal offence solely on the basis of profiling or personality traits, absent objective and verifiable facts directly linked to criminal activity, are prohibited.
A retail security system that analyses CCTV footage to detect actual suspicious behaviour, such as someone concealing merchandise, is permitted because it reacts to observable actions. A system that flags customers as high risk based on demographic profiling is not.
5. Untargeted Facial Recognition Scraping
Building or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage is banned absolutely.
This provision addresses the data acquisition practices used by a number of controversial biometric surveillance providers in recent years. Several of these companies built large-scale facial recognition datasets by scraping billions of images from social media platforms and public web sources without consent.

[…]
Content was cut in order to protect the source. Please visit the source for the rest of the article.

This article has been indexed from Security Boulevard
