High-risk AI under the EU AI Act
The EU AI Act is a pioneering step in the regulation of Artificial Intelligence (AI) within the European Union. The law focuses on high-risk AI systems, those whose failure or misuse could have serious consequences for public safety, fundamental rights, and societal values, and it aims to prevent such harm while encouraging technological advancement and innovation. The Act seeks to foster an environment in which AI technology can develop safely and responsibly under stringent oversight.

Classification of AI Act High-Risk Systems
High-risk AI systems are those that meet specific criteria under the EU AI Act. Outlined in Articles 6 and 7, they typically serve as safety components in products, or are themselves products, that require a third-party conformity assessment before they can be marketed or used. This classification is not static: the list of AI Act high-risk systems may be updated as new information and technological breakthroughs emerge.
Estimates of High-Risk AI
The proportion of AI systems that will be categorised as high-risk is unknown, given that both the AI field and the regulatory framework continue to evolve. The European Commission's impact assessment estimated that just 5-15% of AI applications would be subject to the rigorous rules. In contrast, appliedAI's examination of 100 AI systems found that 18% were high-risk, 42% were low-risk, and 40% had unclear risk ratings. The proportion of high-risk systems in this sample therefore spans from 18% to 58%. One of the 100 systems examined would be prohibited outright.
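The 18-58% range follows from treating the systems with unclear ratings as either all low-risk or all high-risk. A minimal sketch of that arithmetic, using the shares from the appliedAI sample:

```python
# Risk shares from the appliedAI sample of 100 AI systems.
high_risk = 0.18
low_risk = 0.42
unclear = 0.40

# Lower bound: none of the unclear systems turn out to be high-risk.
lower = high_risk
# Upper bound: all of the unclear systems turn out to be high-risk.
upper = high_risk + unclear

print(f"High-risk share: {lower:.0%} to {upper:.0%}")  # High-risk share: 18% to 58%
```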
Furthermore, a poll of 113 EU AI startups found that 33% of respondents expected their AI systems to be categorised as high-risk, substantially higher than the European Commission's assessment.
Exemptions and Special Considerations
Not all AI systems that could be considered high-risk based on their applications fall under this stringent categorisation. Systems intended to perform narrowly defined procedural tasks, improve the result of a previously completed human activity, detect decision-making patterns or deviations from them without replacing or influencing the prior human assessment, or perform preparatory tasks for assessments related to high-risk use cases, are generally not classified as high-risk, provided they do not pose significant risks to health, safety, or fundamental rights.
However, any AI performing profiling of individuals is automatically categorised as high-risk, regardless of its other functionalities. Providers of systems that might be borderline are required to document their assessments thoroughly before market entry, ensuring transparency and accountability.
Obligations for Providers
Providers of high-risk AI systems face comprehensive obligations to ensure their systems are safe and compliant:
- Risk management: Implementation of a continual risk management system.
- Data and documentation: Utilisation of high-quality data and extensive documentation of system performance.
- Conformity assessment and CE marking: Systems must pass a conformity assessment and receive CE marking before market entry.
- Transparency and human oversight: Providers must ensure system transparency and enable human oversight, including mechanisms to halt operations if needed.
Guidelines and Amendments
The European Commission, after consultation with the European Artificial Intelligence Board, is tasked with providing practical implementation guidelines. These guidelines will help clarify which AI systems are considered high-risk and include examples to aid in this determination. Furthermore, the Commission has the power to amend the criteria for what constitutes a high-risk system through delegated acts, ensuring the legislation remains relevant as technology evolves.
Adjusting High-Risk Annex
Annex III of the regulation, which lists specific AI Act high-risk applications, can also be amended to include new use cases or modify existing ones if they present a comparable or greater risk than those already listed. These decisions are based on several factors, including the AI system's purpose, the extent of its use, the nature of the data it processes, its autonomy, and the potential harm it could cause.
Consequences of Non-Compliance
Non-compliance with the EU AI Act can lead to severe penalties:
- Financial penalties: Up to €35 million or 7% of annual global turnover, whichever is higher, for prohibited AI practices, and up to €15 million or 3% for non-compliance with most other obligations, including those for high-risk systems.
- Operational restrictions: Non-compliant systems may be prohibited from being marketed or might have to be withdrawn from the market.
- Reputational damage: Failing to comply can significantly affect the provider's market trust and brand reputation.
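The "whichever is higher" rule means the penalty ceiling scales with turnover once a company is large enough, while the fixed amount is the binding figure for smaller firms. A minimal sketch, with the tier amounts passed as parameters (the values in the example call are purely illustrative):

```python
def max_fine(annual_turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Penalty ceiling under a 'fixed amount or percentage of annual
    global turnover, whichever is higher' tier."""
    return max(fixed_cap_eur, pct * annual_turnover_eur)

# Illustrative: a provider with €2bn global turnover under a €15m / 3% tier.
# The percentage dominates, so the ceiling is €60m, not €15m.
print(max_fine(2_000_000_000, 15_000_000, 0.03))  # 60000000.0
```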
Timeline and Implementation Phases
The EU AI Act will be introduced gradually:
- Mid-2024: The Act enters into force, with the first provisions taking effect in stages.
- Mid-2026 onwards: All high-risk AI systems must comply with the full range of obligations, including registration and continuous monitoring.
Balancing Risks and Benefits
The EU AI Act's amendment process takes a balanced approach to regulation, weighing the potential benefits of AI systems against their risks in order to stimulate innovation while protecting public interests. It allows systems that no longer pose significant risks to be removed from the high-risk category, ensuring that the rules keep pace with the constantly evolving AI field. Concerns about the regulation's impact on the pace of innovation are nevertheless evident: in the survey of 113 EU-based AI startups, 50% felt the AI Act will slow down AI innovation in Europe, and 16% were considering discontinuing development or relocating outside the EU.
Conclusion
The EU AI Act sets a significant global precedent by rigorously addressing the challenges posed by high-risk AI technologies. By ensuring thorough documentation, continuous assessment, and stringent compliance requirements, the Act meticulously balances the dual objectives of fostering innovation and protecting public interests. As AI technology continues to advance, the Act's flexible framework is designed to adapt, ensuring that the regulatory environment remains relevant and effective in promoting responsible AI use across the EU. This proactive approach underlines the EU's commitment to leading in ethical AI governance, setting a benchmark for global AI regulation.
This article is for informational purposes and does not constitute legal advice.