Policy Report
Published by Convergence Analysis, this series is designed to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current state of AI regulation.
AI Incident Reporting
What is AI incident reporting?
AI incident reporting refers to an emerging set of voluntary practices and regulatory requirements under which AI developers and deployers publicly report adverse effects or “near-misses” arising from the use of AI systems. Such mechanisms are designed to capture a wide range of potential issues, such as privacy breaches, security vulnerabilities, and biases in decision-making.
In most domains, incidents can be divided into two subcategories: harmful events that have actually occurred, and “near-misses,” in which a failure was caught or mitigated before it caused harm.
The rationale for incident reporting is to create a feedback loop where regulators, developers, and the public can learn from past AI deployments to continuously improve safety standards and legal compliance. By systematically documenting incidents, stakeholders can identify patterns, initiate inquiries into causes of failure, and implement corrective measures to prevent their recurrence.
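To make “systematically documenting incidents” concrete, the sketch below shows what a structured incident record might contain, covering both subcategories above. The schema, field names, and example values are illustrative assumptions rather than any established reporting standard:

```python
# A sketch of a structured AI incident record. Field names and example
# values are illustrative assumptions, not an established standard.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class IncidentKind(Enum):
    HARM = "harm"            # an adverse effect that actually occurred
    NEAR_MISS = "near_miss"  # a failure caught before it caused harm

@dataclass
class AIIncidentReport:
    system_name: str                # the AI system involved
    deployer: str                   # the organization operating the system
    occurred_on: date
    kind: IncidentKind
    category: str                   # e.g. "privacy breach", "decision bias"
    description: str
    corrective_actions: list[str] = field(default_factory=list)

# Example: logging a near-miss caught during a pre-deployment audit.
report = AIIncidentReport(
    system_name="loan-approval-model-v2",
    deployer="ExampleBank",
    occurred_on=date(2024, 1, 15),
    kind=IncidentKind.NEAR_MISS,
    category="decision bias",
    description="Audit flagged disparate approval rates before rollout.",
)
```

Records with consistent fields like these are what make the pattern-finding described above possible, since reports from different deployers can be aggregated and compared.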
What precedent policies exist for AI incident reporting?
Incident reporting has been used across a variety of industries for decades as an effective tool to mitigate risk from emerging technologies. Here are two examples:
Incident reporting in AI is still in its nascent stages, and a variety of approaches are being explored globally. The specific requirements for incident reporting, such as the types of incidents that must be reported, the timeframe for reporting, and the level of detail required, can vary significantly between jurisdictions.
The most prominent public example of an AI incident reporting tool today is the AI Incident Database, launched by the Responsible AI Collaborative. This database crowdsources incident reports involving AI technologies as documented in public sources or news articles. It’s used by AI researchers to surface broad trends and individual case studies regarding AI safety incidents. As a voluntary public database, it doesn’t adhere to any regulatory standards, nor does it require input or resolution from the developers of the AI tools involved.
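As an illustration of the kind of trend analysis such a database enables, the sketch below counts incidents per year from a local snapshot file. The file name and record fields are assumptions made for illustration; the database’s actual export format may differ:

```python
# A sketch of trend analysis over a local snapshot of incident reports.
# The file name and the "date" field are assumptions for illustration;
# the AI Incident Database's actual export schema may differ.
import json
from collections import Counter

def incidents_per_year(snapshot_path: str) -> Counter:
    """Count reported incidents by year from a JSON list of records."""
    with open(snapshot_path, encoding="utf-8") as f:
        records = json.load(f)  # assumed: a list of incident dicts
    return Counter(r["date"][:4] for r in records if "date" in r)

if __name__ == "__main__":
    yearly = incidents_per_year("incident_snapshot.json")
    for year, count in sorted(yearly.items()):
        print(f"{year}: {count} reported incidents")
```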
What are current regulatory policies around AI incident reporting?
China
The PRC is developing a governmental incident reporting database, as announced in the Draft Measures on the Reporting of Cybersecurity Incidents on Dec 20th, 2023. This proposed legislation categorizes cybersecurity incidents into four levels of severity (“Extremely Severe”, “Severe”, “Relatively Severe”, and “General”) and requires that incidents at the top three levels (“Critical Incidents”) be reported to governmental authorities within one hour of occurrence. The criteria for a “Critical” incident include the following:
Though this set of measures does not directly mention frontier AI models as a target for enforcement, any of the negative outcomes above resulting from the use of such models would be reported under the same framework. These draft measures can be understood as the Cyberspace Administration of China (CAC) pursuing two major goals:
Elsewhere, leading Chinese AI regulatory measures make reference to reporting key events (specifically the distribution of unlawful information) to the Chinese government, but none of them have specific requirements for the creation of an incident reporting database:
The EU
The EU AI Act requires that developers of both high-risk AI systems and general-purpose AI (“GPAI”) systems set up internal tracking and reporting systems for “serious incidents” as part of their post-market monitoring infrastructure.
As defined in Article 3(44), a serious incident is:
Any incident or malfunctioning of an AI system that directly or indirectly leads to any of the following: (a) the death of a person, or serious harm to a person’s health; (b) a serious and irreversible disruption of the management or operation of critical infrastructure; (c) an infringement of obligations under Union law intended to protect fundamental rights; or (d) serious harm to property or the environment.
In the event that such an incident occurs, Article 62 requires that the developer report the incident to the relevant authorities (specifically the European Data Protection Supervisor) and cooperate with them on an investigation, risk assessment, and corrective action. The article also specifies time limits and detailed reporting obligations.
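As a rough illustration of what such an internal tracking system could look like, the sketch below maps an incident’s category to a reporting deadline measured from when the provider becomes aware of it. The category names and time limits are placeholder assumptions, not the Act’s binding figures:

```python
# A sketch of an internal serious-incident tracker: classify an incident
# and compute its reporting deadline. The category names and time limits
# below are placeholder assumptions; the binding deadlines are those set
# out in the Act itself.
from datetime import datetime, timedelta

REPORTING_WINDOWS = {
    "death_or_serious_harm": timedelta(days=10),
    "critical_infrastructure_disruption": timedelta(days=2),
    "other_serious_incident": timedelta(days=15),
}

def reporting_deadline(incident_type: str, became_aware: datetime) -> datetime:
    """Deadline runs from when the provider becomes aware of the incident."""
    return became_aware + REPORTING_WINDOWS[incident_type]

# Example: a provider becomes aware of a serious incident on June 1st.
print(reporting_deadline("other_serious_incident", datetime(2024, 6, 1, 9, 0)))
```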
The US
The US currently has no enacted or proposed legislation regarding reporting databases for AI-related incidents. However, the Executive Order on AI contains some preliminary language directing the Secretary of Health and Human Services (HHS) and the Secretary of Homeland Security to establish new programs within their respective agencies. These directives essentially request the creation of domain-specific incident databases:
Convergence’s Analysis
In the next 2-3 years, the US, EU, and China will have established mandatory incident reporting requirements for AI service providers covering “severe” incidents involving AI technologies.
However, such governmental compliance requirements represent only the minimum base layer of an effective network of incident reporting systems to mitigate risk from AI technologies.