Prohibited AI under the EU AI Act
The European Union's Artificial Intelligence Act (AI Act) represents a significant step in the regulation of AI technologies, particularly those that pose serious risks to individual rights and societal values. The Act categorises AI systems according to their risk levels, with a specific emphasis on prohibiting those that present unacceptable risks. This article delves into the details of these prohibitions and their implications.

Prohibited AI Due to Unacceptable Risks
Certain AI applications are classified under the AI Act as posing unacceptable risks and are therefore banned in the EU: AI systems in this risk category may not be placed on the market, put into service, or used in the EU. These prohibitions, set out in Chapter II, Article 5, target AI systems that can significantly infringe on personal rights or cause harmful societal consequences due to ethical and privacy issues. The categories of prohibited AI applications are:
- Manipulative AI: AI systems that use subliminal or deceptive techniques to significantly alter a person's behaviour, impairing their ability to make informed decisions, and potentially causing significant harm.
- Exploitative AI: AI systems that target vulnerable individuals, exploiting characteristics such as age, disability, or socioeconomic status to manipulate behaviour, likely resulting in harm.
- Social scoring AI: Systems that evaluate or classify individuals based on their social behaviour or perceived personal traits over time, leading to detrimental treatment that is disproportionate or occurs in social contexts unrelated to the one in which the data was originally generated.
- Risk assessment AI: Systems used solely for assessing the likelihood of an individual committing a crime based on profiling, except when supporting human assessments based on factual evidence linked to criminal activity.
- Facial recognition database AI: Systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or closed-circuit television footage.
- Emotion detection AI: Systems that infer the emotions of individuals in settings such as workplaces and educational institutions, except for specific medical or safety reasons.
- Biometric categorisation AI: Systems that categorise individuals based on biometric data to infer sensitive attributes such as race, political views, or sexual orientation, unless it involves lawful law enforcement activities.
- Real-time biometric identification: The use of real-time remote biometric identification in publicly accessible spaces for law enforcement purposes is tightly restricted. Such systems are only authorised under narrow conditions, such as the targeted search for victims of serious crimes or for missing persons, the prevention of specific and imminent threats, or the identification of suspects in serious criminal cases. These uses must be necessary, proportionate, and subject to prior authorisation by a judicial or independent administrative authority.
Overall, the regulations aim to protect individuals from invasive or manipulative AI technologies while ensuring any use of biometric and surveillance technologies is strictly regulated, safeguarding fundamental rights, and ensuring ethical compliance.
Exceptions
The AI Act does mention several exceptions to the general prohibitions on certain AI practices:
- Criminal activity assessment: The prohibition on using AI for risk assessments to predict criminal behaviour does not apply to AI systems used to support human assessments of a person's involvement in criminal activity, provided these assessments are based on objective and verifiable facts directly linked to criminal activity.
- Biometric data in law enforcement: The prohibition on using biometric categorisation systems does not apply to labelling or filtering lawfully acquired biometric datasets, such as images based on biometric data, in the area of law enforcement.
- Real-time remote biometric identification: The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement is allowed under strict conditions. These include necessary situations such as searching for victims of serious crimes, preventing substantial and imminent threats, or identifying suspects in significant criminal investigations. Such use must be authorised by a judicial or independent administrative authority, and it must comply with necessary and proportionate safeguards.
These exceptions are designed to balance the protection of individual rights with the potential utility of AI systems in specific, highly regulated circumstances, particularly in law enforcement and public safety contexts.
Consequences and Penalties for Non-Compliance
The EU AI Act enforces these prohibitions with severe consequences to deter misuse:
- Heavy fines: Violations of the prohibitions can lead to fines of up to €35 million, or 7% of total worldwide annual turnover, whichever is higher.
- Reputational damage: Non-compliance can significantly affect a company's reputation, impacting organisational relationships and consumer trust.
- Operational disruptions: Organisations may need to withdraw non-compliant AI systems, potentially causing significant operational and financial losses.
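As a worked illustration of the fine mechanism (a sketch for informational purposes only, not legal advice): under Article 99(3) of the final text of the Act, the cap for a prohibited-practice violation is the higher of €35 million and 7% of total worldwide annual turnover.

```python
# Illustrative calculation of the maximum administrative fine for a
# prohibited-practice violation (Article 99(3) of the AI Act).
# The cap is the HIGHER of a fixed amount and 7% of worldwide annual turnover.

FIXED_CAP_EUR = 35_000_000
TURNOVER_RATE = 0.07  # 7% of total worldwide annual turnover

def max_fine(annual_worldwide_turnover_eur: float) -> float:
    """Return the upper bound of the fine for a given turnover, in EUR."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * annual_worldwide_turnover_eur)

# A company with EUR 1 billion turnover: 7% = EUR 70 million, above the fixed cap.
print(max_fine(1_000_000_000))  # 70000000.0
# A company with EUR 100 million turnover: 7% = EUR 7 million, so the fixed cap applies.
print(max_fine(100_000_000))    # 35000000
```

Because the higher of the two figures applies, the percentage-based ceiling dominates for large companies, while the fixed €35 million ceiling is the binding cap for smaller ones.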
When Will AI Act Prohibitions Begin?
The general prohibitions on AI practices apply six months after the Act enters into force, and the Act itself entered into force 20 days after its publication in the Official Journal of the EU. The Act (Regulation (EU) 2024/1689) was published on 12 July 2024 and entered into force on 1 August 2024; the prohibitions on certain AI practices are therefore enforceable from 2 February 2025.
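The timeline arithmetic can be checked directly (dates as published in the Official Journal for Regulation (EU) 2024/1689; the six-month application date for the Article 5 prohibitions is 2 February 2025):

```python
# Date arithmetic for the AI Act timeline.
from datetime import date, timedelta

publication = date(2024, 7, 12)                      # Official Journal publication
entry_into_force = publication + timedelta(days=20)  # Act in force 20 days later
print(entry_into_force)  # 2024-08-01

# The Article 5 prohibitions apply six months after entry into force,
# i.e. from 2 February 2025.
prohibitions_apply = date(2025, 2, 2)
print((prohibitions_apply - entry_into_force).days)  # 185
```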
Conclusion
The prohibition of AI systems under the EU AI Act is a proactive measure to ensure that technological advancements do not come at the expense of human rights and ethical standards. By banning AI use cases that pose unacceptable risks, the EU is setting a precedent for responsible and ethical AI development, emphasising the importance of aligning technology with societal values and legal norms. The implications of these prohibitions extend beyond Europe, potentially affecting global AI policies and practices.
This article is for informational purposes and does not constitute legal advice.