Artificial Intelligence Act (AI Act)
AI-assisted content notice: this page includes AI-assisted summaries, FAQs, and glossary entries prepared for navigation purposes. Verify the underlying legal text before relying on this content.
Summary
Regulation (EU) 2024/1689 (the Artificial Intelligence Act) lays down harmonised rules for the development, placing on the market, putting into service and use of AI systems in the EU, using a risk-based approach. It prohibits certain AI practices considered to pose unacceptable risks, sets mandatory requirements for high-risk AI systems (including conformity assessment and post-market monitoring), and introduces transparency duties for certain AI systems (e.g., some systems interacting with humans or generating/manipulating content). It also establishes specific obligations for general-purpose AI (GPAI) models, with additional requirements for GPAI models with systemic risk, and creates an EU-level governance and enforcement framework including the European AI Office.
Who is affected?
The Act affects providers (developers), deployers (users in a professional context), importers, distributors and authorised representatives of AI systems and GPAI models, including public authorities, when AI is placed on the EU market or put into service in the EU. It also impacts downstream businesses integrating AI into products/services and organisations using high-risk AI in areas such as employment, education, essential services, law enforcement, migration and justice.
Scope
It applies to AI systems and general-purpose AI models placed on the market, put into service or used in the EU, with a risk-based set of prohibitions, requirements and transparency obligations, subject to specified exemptions.
Key Points
- Risk-based framework: prohibited AI practices, regulated high-risk AI systems, and transparency obligations for certain other AI uses
- Prohibitions include specified manipulative/exploitative practices and certain uses of biometric categorisation and remote biometric identification, subject to narrowly defined exceptions
- High-risk AI systems must meet requirements on risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy/robustness and cybersecurity, plus conformity assessment and post-market monitoring
- General-purpose AI (GPAI) model obligations include documentation and information-sharing duties; additional obligations apply to GPAI models with systemic risk (e.g., model evaluation, risk mitigation and incident reporting)
- Governance and enforcement via national competent authorities/market surveillance, coordination mechanisms, and an EU-level European AI Office for GPAI-related supervision and support
- Phased application dates, with earlier application for prohibited practices and GPAI-related rules and later application for most high-risk obligations
Key Deadlines
- 2 February 2025 — Prohibited AI practices apply
- 2 August 2025 — General-purpose AI (GPAI) model rules apply
- 2 August 2026 — Most obligations for high-risk AI systems apply
Frequently Asked Questions
Who must comply with the AI Act?
The AI Act applies to providers (developers), deployers (professional users), importers, distributors, and authorised representatives of AI systems and general-purpose AI (GPAI) models placed on the EU market or put into service in the EU, including public authorities and downstream businesses integrating AI.
What types of AI systems are covered by the AI Act?
The Act covers AI systems and GPAI models that are placed on the market, put into service, or used in the EU, with obligations varying according to the risk level (prohibited, high-risk, or other regulated uses). Certain exemptions apply, such as for systems used exclusively for military, defence or national security purposes, or developed and used solely for scientific research and development.
What are the key obligations for high-risk AI systems?
High-risk AI systems must meet strict requirements on risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy, robustness, cybersecurity, conformity assessment, and post-market monitoring.
What AI practices are prohibited under the AI Act?
The Act prohibits certain manipulative or exploitative AI practices, as well as specific uses of biometric categorisation and remote biometric identification, except in narrowly defined circumstances.
What are the obligations for general-purpose AI (GPAI) models?
GPAI model providers must meet documentation and information-sharing requirements. GPAI models with systemic risk face additional obligations, including model evaluation, risk mitigation, and incident reporting.
How does the AI Act interact with other EU regulations?
The AI Act complements existing EU legislation, such as the General Data Protection Regulation (GDPR) and product safety laws, and does not override sector-specific rules but introduces additional, AI-specific requirements.
What are the penalties for non-compliance with the AI Act?
Non-compliance can result in significant administrative fines, with the amount depending on the type and severity of the infringement. For the most serious breaches, fines can reach up to €35 million or 7% of total worldwide annual turnover, whichever is higher.
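As a rough illustration of how the "whichever is higher" cap works for the most serious infringements (a sketch for orientation only, not legal advice; the turnover figures below are hypothetical):

```python
# Illustrative only: the AI Act caps fines for the most serious breaches at
# EUR 35 million or 7% of total worldwide annual turnover, whichever is HIGHER.
# Actual fines are set case by case and may be far below the cap.

def max_fine_cap(annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine for the most serious breaches."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# Hypothetical examples:
print(max_fine_cap(100_000_000))    # smaller undertaking: flat EUR 35m cap applies
print(max_fine_cap(1_000_000_000))  # larger undertaking: 7% of turnover (EUR 70m) applies
```

The key point the sketch makes concrete: for large undertakings the percentage-based cap, not the flat amount, determines the maximum exposure.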
What are the main steps for practical compliance with the AI Act?
Organisations should identify the risk category of their AI systems, implement required risk management and documentation processes, ensure transparency and human oversight, prepare for conformity assessment, and establish post-market monitoring mechanisms.
When do the different obligations under the AI Act apply?
The Act has phased application dates: prohibitions and GPAI-related rules apply earlier, while most high-risk AI system obligations apply later, allowing organisations time to adapt to the new requirements.
Who enforces the AI Act and how is governance structured?
National competent authorities and market surveillance bodies enforce the Act, coordinated at EU level by the European AI Office, which oversees GPAI-related supervision and supports consistent application across Member States.
Key Terms
- High-risk AI system
- An AI system identified as posing significant risks to health, safety, or fundamental rights, subject to strict regulatory requirements under the AI Act.
- General-purpose AI (GPAI) model
- An AI model intended for a wide range of applications, not limited to a specific use case, with distinct obligations under the AI Act.
- Systemic risk (GPAI)
- A designation for certain GPAI models that, due to their scale or capabilities, pose heightened risks to public interests and are therefore subject to additional regulatory controls.
- Conformity assessment
- A process by which high-risk AI systems are evaluated for compliance with the AI Act’s requirements before being placed on the market or put into service.
- Post-market monitoring
- Ongoing surveillance and evaluation of AI systems after deployment to ensure continued compliance and address emerging risks.
- European AI Office
- An EU-level body established to coordinate and supervise the implementation of the AI Act, especially regarding GPAI models and cross-border issues.
- Transparency obligations
- Requirements for certain AI systems to disclose their AI nature, capabilities, or outputs to users, particularly when interacting with humans or generating/manipulating content.
- Risk management system
- A documented process for identifying, assessing, and mitigating risks associated with high-risk AI systems throughout their lifecycle.
- Biometric categorisation
- The use of AI to assign individuals to categories based on biometric data, a practice restricted or prohibited under the AI Act.
- Human oversight
- Measures ensuring that humans can understand, intervene in, or override AI system decisions, especially in high-risk contexts.