Published by Convergence Analysis, this series is designed to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current state of AI regulation.
Updated: April 28, 2024
Structure of AI Regulations
The European Union (EU) has conducted almost all of its AI governance initiatives within a single piece of legislation: the EU AI Act, formally adopted in March 2024. Initially proposed in 2021, this comprehensive legislation aims to regulate AI systems based on their potential risks and safeguard the rights of EU citizens.
At the core of the EU AI Act is a risk-based approach to AI regulation. The act classifies AI systems into four categories: unacceptable risk, high risk, limited risk, and minimal risk. Unacceptable risk AI systems, such as those that manipulate human behavior or exploit vulnerabilities, are banned outright. High-risk AI systems, including those used in critical infrastructure, education, and employment, are subject to strict requirements and oversight. Limited risk AI systems require transparency measures, while minimal risk AI systems are largely unregulated.
In direct response to the public emergence of foundation models in 2022, beginning with the launch of ChatGPT, the Act includes clauses specifically addressing the challenges posed by general-purpose AI (GPAI). GPAI systems, which can be adapted to a wide range of tasks, are subject to additional requirements, including categorization as high-risk systems depending on their intended domain of use.
What are the key traits of the EU’s AI governance strategy?
- The EU AI Act is a horizontally integrated, comprehensive piece of legislation implemented by a centralized body.
- The EU has demonstrated a clear prioritization of the protection of citizens’ rights.
- The EU AI Act implements strict and binding requirements for high-risk AI systems.
AI Evaluation & Risk Assessments
The EU AI Act mandates safety and risk assessments for high-risk AI and, in more recent iterations of the text, for frontier AI.
The act classifies models by risk, and higher-risk AI is subject to stricter requirements, including for assessment. Developers must determine the risk category of their AI, and may self-assess and self-certify their models by adopting forthcoming standards or justifying their own (or face fines of at least €20 million). High-risk models must undergo a third-party “conformity assessment” before they can be released to the public, which includes conforming to requirements regarding a “risk management system”, “human oversight”, and “accuracy, robustness, and cybersecurity”.
In earlier versions of the act, general-purpose AI such as ChatGPT would not have been considered high-risk. Since the release of ChatGPT in 2022, however, EU legislators have developed new provisions to account for such general-purpose models. Article 4b introduces a new category of “general-purpose AI” (GPAI) that must follow a lighter set of restrictions than high-risk AI. However, GPAI models used in high-risk contexts count as high-risk, and powerful GPAI models must undergo the conformity assessment described above.
Title VIII of the act covers post-market monitoring, information sharing, and market surveillance.
AI Model Registries
Via the EU AI Act, the EU has opted to categorize AI systems into tiers of risk by their use cases, notably splitting permitted AI systems into high-risk and limited-risk categories. In particular, it requires that high-risk AI systems be entered into an EU database for tracking.
AI Incident Reporting
The EU AI Act requires that developers of both high-risk AI systems and general-purpose AI (“GPAI”) systems set up internal tracking and reporting systems for “serious incidents” as part of their post-market monitoring infrastructure.
As defined in Article 3(44), a serious incident is:
Any incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:
In the event that such an incident occurs, Article 62 requires that the developer report the incident to the relevant authorities (specifically the European Data Protection Supervisor) and cooperate with them on an investigation, risk assessment, and corrective action. It also specifies time limits for reporting and specific reporting obligations.
Open-Source AI Models
The EU AI Act states that open-sourcing can increase innovation and economic growth. The act therefore exempts open-source models and developers from some of the restrictions and responsibilities placed on other models and developers. Note, however, that these exemptions do not apply to foundation models (meaning generative AI like ChatGPT), or when the open-source software is monetized or used as a component in high-risk software.
Notably, the treatment of open-source models was contentious during the development of the EU AI Act.
Cybersecurity of Frontier AI Models
The EU has a comprehensive data privacy and security law that applies to all organizations operating in the EU or handling the personal data of EU citizens: the General Data Protection Regulation (GDPR). In effect since 2018, it does not contain language specific to AI systems, but it provides a strong base of privacy requirements for collecting user data, such as mandatory disclosures, purpose limitations, security obligations, and rights to access one’s personal data.
The EU AI Act includes some cybersecurity requirements for organizations running “high-risk AI systems” or “general-purpose AI models with systemic risk”. It identifies specific attack vectors that organizations should protect against, but provides little to no detail about how an organization might defend against these attack vectors or what level of security is required.
Several sections of the act discuss cybersecurity for AI models.
AI Discrimination Requirements
The EU AI Act directly addresses discriminatory practices based on the use cases of the AI systems in question. In particular, it classifies AI systems with the potential for discriminatory effects as high-risk systems and bars them from discriminatory practices.
In particular, AI systems that provide social scoring of natural persons (which pose a significant discriminatory risk) are deemed unacceptable systems and are banned.
AI Disclosures
Article 52 of the EU AI Act lists the transparency obligations for AI developers. These largely relate to AI systems “intended to directly interact with natural persons”, where natural persons are individual people (as opposed to legal persons, which can include businesses). For concision, we will call these “public-facing” AIs. Notably, these requirements have exemptions for AI used to detect, prevent, investigate, or prosecute crimes (assuming other laws and rights are observed).
AI and Chemical, Biological, Radiological, & Nuclear Hazards
The EU AI Act does not contain any specific provisions for CBRN hazards, though Article (60m), on the category of “general-purpose AI that could pose systemic risks”, includes the following mention of CBRN: “international approaches have so far identified the need to devote attention to risks from [...] chemical, biological, radiological, and nuclear risks, such as the ways in which barriers to entry can be lowered, including for weapons development, design acquisition, or use”.