EU's AI Act: A Summary

Imagine a world where AI systems decide who gets a loan, what news people read, and who receives critical healthcare — all based on inscrutable algorithms. Without robust regulation, this scenario could rapidly become reality. To mitigate these risks, the European Union (EU) has adopted the Artificial Intelligence Act (AI Act), a crucial initiative to ensure that AI development aligns with fundamental rights and promotes a transparent and equitable AI environment across the EU. What follows is a summary of the act's roughly 460 pages.


Updated 27 May 2025 · 5-minute read

EU AI Act in Plain Language

The EU has passed the world's first comprehensive law regulating artificial intelligence.

AI Rules and Timeline

  • From Feb 2025: Dangerous AI uses (like social scoring or manipulative systems) are banned.
  • From Aug 2025: Big AI models (like ChatGPT-style systems) must follow transparency rules (e.g. showing where data comes from).
  • From Aug 2026: High-risk AI (in schools, jobs, healthcare, justice, or policing) must meet strict safety checks.
  • From Aug 2027: AI built into products like toys, cars, or medical devices must follow extra safety rules.

Breaking the law can mean huge fines (up to €35 million or 7% of worldwide annual turnover, whichever is higher).

The rules are enforced by a new European AI Office and national watchdogs. The goal is to make sure AI in Europe is safe, fair, and respects people’s rights, while still encouraging innovation.

The Purpose of the AI Act

The AI Act aims to establish a uniform regulatory landscape for AI across all EU member states, ensuring that AI systems are developed and used in a manner that respects human rights and safety while promoting technological advancement. These objectives are multifaceted:

  • Risk management: Introduce a structured, risk-based approach to AI regulation that categorises systems according to their potential threat to safety and fundamental rights.
  • Protection of fundamental rights: Prevent AI applications that could harm individual freedoms and privacy, such as invasive surveillance technologies or systems that could facilitate discrimination.
  • Enhancement of transparency and accountability: Mandate comprehensive disclosures about AI operations, ensuring systems are understandable and their actions can be accounted for.
  • Promotion of innovation: Facilitate the growth of the AI sector by providing clear rules that help innovators and businesses navigate the legal landscape.
  • Global leadership: Position the EU as a global leader in ethical AI standards, influencing international norms and practices.

Key Components of the AI Act

The framework of the AI Act encompasses several critical provisions designed to tackle various aspects of AI governance:

  • Risk-based classification: AI systems are sorted into four risk categories - unacceptable, high, limited, and minimal risk - with corresponding regulatory requirements.
  • Prohibited practices: The Act bans particularly dangerous applications of AI, including exploitative behaviour modification and indiscriminate social scoring systems.
  • Transparency requirements: Entities must disclose how AI systems operate, the data they use, and the rationale behind their decisions, particularly for high-risk applications.
  • Human oversight: The legislation mandates that decisions made by high-risk AI systems can be overseen and intervened in by humans to mitigate risks effectively.
  • Compliance and monitoring: Implementation is overseen by the European AI Office (within the European Commission), supported by the European AI Board and national supervisory authorities, which carry out regular compliance assessments.
  • Extraterritorial scope: Similar to the General Data Protection Regulation (GDPR), the AI Act applies to any AI system utilised within the EU, regardless of the provider's geographical location.
  • Enforcement and penalties: The Act imposes fines of up to €35 million or 7% of worldwide annual turnover, whichever is higher, for the most serious infringements (prohibited practices). For GPAI violations, fines may reach €15 million or 3% of turnover. SMEs and start-ups benefit from proportionate penalty caps.
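
The penalty logic — the higher of a fixed cap and a share of worldwide annual turnover — can be sketched in a few lines. This is a minimal illustration, not legal advice: the tier amounts come from the Act (Article 99), but the dictionary and function names are ours.

```python
# Maximum administrative fines under the AI Act (Article 99):
# the HIGHER of a fixed amount and a share of worldwide annual turnover.
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),  # violations of the prohibitions
    "other_obligation":    (15_000_000, 0.03),  # most other infringements
    "incorrect_info":      (7_500_000,  0.01),  # misleading info to authorities
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given infringement tier."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# A company with €1bn worldwide turnover committing a prohibited practice:
print(max_fine("prohibited_practice", 1_000_000_000))  # 70000000.0 (7% > €35m)
```

Note that for SMEs and start-ups the Act inverts this rule: the applicable cap is the lower of the two amounts.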

Compliance Deadlines

On 13 March 2024, the European Parliament passed the final text of the EU AI Act, incorporating all 808 amendments, by a large majority. The Council of the EU approved the Act on 21 May 2024; after final linguistic revisions, it was published in the Official Journal of the EU on 12 July 2024 and entered into force on 1 August 2024.

Counting from its entry into force, the compliance deadlines for the EU AI Act are as follows:

  • 6 months after entry into force (February 2025):
    • Prohibitions on AI systems posing unacceptable risks (Chapter II) apply.
  • 12 months (August 2025):
    • Obligations for general-purpose AI (GPAI) models, including foundation models, become applicable.
    • Member States must designate their national supervisory authorities.
    • The Commission begins its annual review of the list of prohibited practices, with potential revisions.
  • 18 months (1st quarter 2026):
    • The European Commission will issue implementing acts establishing a template for providers' post-market monitoring plans for high-risk AI systems.
  • 24 months (3rd quarter 2026):
    • Obligations for high-risk AI systems listed in Annex III (biometrics, education, employment, critical infrastructure, law enforcement, etc.) apply. Regulatory sandboxes must be operational.
    • Member States must have established penalties, including administrative fines.
  • 36 months (3rd quarter 2027):
    • Obligations for Annex I high-risk systems (AI embedded in products subject to EU product safety rules) apply.
  • By end 2030:
    • AI systems that are components of large-scale IT systems established under EU freedom, security, and justice law, such as the Schengen Information System, must comply with the requirements of the AI Act.
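
The staggered timeline above can be captured as a small lookup, for example to check which obligations already apply on a given date. The dates follow the Act's application schedule; the milestone labels are abbreviated and the helper name is ours.

```python
from datetime import date

# Key application dates of the AI Act (entry into force: 1 August 2024).
MILESTONES = [
    (date(2025, 2, 2),   "Prohibitions on unacceptable-risk AI apply"),
    (date(2025, 8, 2),   "GPAI obligations apply; national authorities designated"),
    (date(2026, 8, 2),   "Annex III high-risk obligations apply; sandboxes operational"),
    (date(2027, 8, 2),   "Obligations for AI embedded in regulated products apply"),
    (date(2030, 12, 31), "Large-scale EU IT systems (e.g. SIS) must comply"),
]

def obligations_in_force(on: date) -> list[str]:
    """Return the milestones that already apply on the given date."""
    return [label for deadline, label in MILESTONES if on >= deadline]

# In September 2025, the prohibitions and GPAI obligations already apply:
for item in obligations_in_force(date(2025, 9, 1)):
    print(item)
```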

Conclusion

This summary shows that the EU's AI Act is a forward-looking initiative that sets a global precedent for AI regulation. By balancing the imperatives of innovation and safety, it aims to foster an AI ecosystem that is both dynamic and principled. As these regulations come into effect, they are expected not only to shape the development of AI within the EU but also to influence global standards, helping to ensure that AI remains a force for good, aligned with human values and societal needs.
