2024 State of the AI Regulatory Landscape

Published by Convergence Analysis, this series is designed to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current state of AI regulation.

AI Incident Reporting

Deric Cheng

Governance Research Lead

Last updated Mar 11, 2024

Author's Note

This report is one in a series of ~10 posts comprising a State of the AI Regulatory Landscape in 2024 Review, conducted by the Governance Research Program at Convergence Analysis. Each post will cover a specific domain of AI governance. We’ll provide an overview of existing regulations, focusing on the US, EU, and China as the leading governmental bodies currently developing AI legislation. Additionally, we’ll discuss the relevant context behind each domain and conduct a short analysis.

This series is intended to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current AI governance space. We’ll be releasing a comprehensive report at the end of this series.

What is AI incident reporting?

AI incident reporting refers to an emerging series of voluntary practices or regulatory requirements for AI developers and deployers to publicly report adverse effects or “near-misses” that arise from the use of AI systems. Such mechanisms are designed to capture a wide range of potential issues, such as privacy breaches, security vulnerabilities, and biases in decision-making. 

In most domains, incidents can be divided into two subcategories:

An accident is a type of incident that caused significant damage, injury, or harm to a person, property, or equipment.

A near-miss is a type of incident that had the potential to cause significant damage, injury, or harm, but was narrowly avoided.

The rationale for incident reporting is to create a feedback loop where regulators, developers, and the public can learn from past AI deployments to continuously improve safety standards and legal compliance. By systematically documenting incidents, stakeholders can identify patterns, initiate inquiries into causes of failure, and implement corrective measures to prevent their recurrence.

What precedent policies exist for AI incident reporting?

Incident reporting has been a highly effective tool used across a variety of industries for decades to mitigate risk from emerging technologies. Here are two examples:

The Aviation Safety Reporting System (ASRS) has been noted for its effectiveness at drastically reducing the fatality rate in US aviation. Its success has been attributed to its confidential, voluntary, and non-punitive approach: anybody can submit a confidential incident report of a near-miss or an abuse of safety standards to a neutral third-party organization (in this case, NASA). The reporting aviation worker is typically granted limited immunity, which encourages more reporting without fear of reprisals. In response to incidents, the ASRS typically distributes non-binding notices summarizing key failures and recommending new industry standards.

It’s important to note that accidents still have mandatory reporting requirements via the FAA, and that the ASRS is a supplementary system.

The Occupational Safety and Health Administration (OSHA) is a governmental agency tasked with guaranteeing safe conditions for American workers by setting and enforcing workplace standards. Its primary day-to-day responsibility is following up on incident reports of unsafe work practices, injuries, and fatalities by investigating corporations. It enforces its standards primarily by assessing hefty fines on organizations for non-compliance.

Independent reports have found that OSHA has produced a modest improvement in workplace safety, reducing worker injuries by roughly four percent.

Incident reporting in AI is still in its nascent stages, and a variety of approaches are being explored globally. The specific requirements for incident reporting, such as the types of incidents that must be reported, the timeframe for reporting, and the level of detail required can vary significantly between jurisdictions.

The most prominent public example of an AI incident reporting tool today is the AI Incident Database, launched by the Responsible AI Collaborative. This database crowdsources incident reports involving AI technologies as documented in public sources or news articles. It’s used by AI researchers as a tool to surface broad trends and individual case studies regarding AI safety incidents. As a voluntary public database, it doesn’t adhere to any regulatory standards nor does it require input or resolution from the developers of the AI tool involved.

What are current regulatory policies around AI incident reporting?

China

The PRC is developing a governmental incident reporting database, as announced in the Draft Measures on the Reporting of Cybersecurity Incidents on Dec 20th, 2023. This proposed legislation categorizes cybersecurity incidents into four levels of severity (“Extremely Severe”, “Severe”, “Relatively Severe”, and “General”), and requires that incidents in the top three levels (“Critical Incidents”) be reported to governmental authorities within one hour of occurrence. The criteria for qualifying as a “Critical” incident include the following:

Interruption of overall operation of critical information infrastructure for more than 30 minutes, or its main function for more than two hours;

Incidents affecting the work and life of more than 10% of the population in a single city-level administrative region;

Incidents affecting the water, electricity, gas, oil, heating or transportation usage of more than 100,000 people;

Incidents causing direct economic losses of more than RMB 5 million (around $694k USD).

Though this set of measures does not directly mention frontier AI models as a target for enforcement, any of the negative outcomes above resulting from the use of frontier AI models would be reported under the same framework. This draft measure can be understood as the Cyberspace Administration of China (CAC) pursuing two major goals:

Consolidating disparate reporting requirements across various laws regarding cybersecurity incidents.

Developing regulatory infrastructure in preparation for an evolving cybersecurity landscape, particularly with respect to advanced AI.

Elsewhere, leading Chinese AI regulatory measures make reference to reporting key events (specifically the distribution of unlawful information) to the Chinese government, but none of them have specific requirements for the creation of an incident reporting database:

Algorithmic Recommendation Provisions, Article 7: Service providers shall…establish and complete management systems and technical measures…[such as] security assessment and monitoring and security incident response and handling.

Article 9: Where unlawful information is discovered…a report shall be made to the cybersecurity and informatization department and relevant departments.

Deep Synthesis Provisions, Article 10: Where deep synthesis service providers discover illegal or negative information, they shall…promptly make a report to the telecommunications department or relevant departments in charge.

Generative AI Measures, Article 14: Where providers discover illegal content they shall promptly employ measures to address it such as stopping generation, stopping transmission, and removal, employ measures such as model optimization training to make corrections and report to the relevant departments in charge.

The EU

The EU AI Act requires that developers of both high-risk AI systems and general purpose AI (“GPAI”) systems set up internal tracking and reporting systems for “serious incidents” as part of their post-market monitoring infrastructure.

As defined in Article 3(44), a serious incident is: 

Any incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:

(a) the death of a person or serious damage to a person’s health

(b) a serious and irreversible disruption of the management and operation of critical infrastructure

(ba) breach of obligations under Union law intended to protect fundamental rights

(bb) serious damage to property or the environment.

In the event that such an incident occurs, Article 62 requires that the developer report the incident to the relevant authorities (specifically the European Data Protection Supervisor) and cooperate with them on an investigation, risk assessment, and corrective action. It also specifies time limits and specific reporting obligations.

The US

The US currently has no existing or proposed legislation establishing reporting databases for AI-related incidents. However, the Executive Order on AI contains some preliminary language directing the Secretary of Health and Human Services (HHS) and the Secretary of Homeland Security to establish new programs within their respective agencies. These directives essentially request the creation of domain-specific incident databases:

Section 5.2: The Secretary of Homeland Security…shall develop a training, analysis, and evaluation program to mitigate AI-related IP risks. Such a program shall: (i) include appropriate personnel dedicated to collecting and analyzing reports of AI-related IP theft, investigating such incidents with implications for national security, and, where appropriate and consistent with applicable law, pursuing related enforcement actions.

Section 8: The Secretary of HHS shall…consider appropriate actions [such as]...establish[ing] a common framework for approaches to identifying and capturing clinical errors resulting from AI deployed in healthcare settings as well as specifications for a central tracking repository for associated incidents that cause harm, including through bias or discrimination, to patients, caregivers, or other parties.

Convergence’s Analysis

In the next 2-3 years, the US, EU, and China will have established mandatory requirements for AI service providers to report “severe” incidents involving AI technologies.

Each of these leading governments is currently developing or has tasked their internal agencies with the responsibility to develop systems to track and enforce mandatory incident reporting.

As defined in the previous section, such “severe” incidents will typically include significant monetary damages, injury or death to a person, or the disruption of critical infrastructure.

In many cases (such as the US and China today), these reporting requirements may not be designed specifically for AI incidents, but rather include them as aspects of more specific domains of use-cases, such as cybersecurity, IP theft, or healthcare. Enforcement of these reporting requirements may be spread across a variety of agencies.

Similar to governmental agencies like OSHA, these incident reporting systems will enforce compliance via mandatory reporting, comprehensive reviews following qualifying reports, and substantial fines for negligence.

However, such governmental compliance requirements represent only the minimum base layer of an effective network of incident reporting systems to mitigate risk from AI technologies.

There exist several notable precedents from other domains of incident reporting that have yet to be developed or addressed by the AI governance community:

Voluntary, confidential or non-punitive reporting systems: Incident reporting systems similar to the Aviation Safety Reporting System (ASRS) as described previously do not yet exist. In particular, a substantial gap exists for a non-regulatory organization to focus on consolidating confidentially reported incidents, conducting independent safety evals, and publishing reports on best practices for the benefit of the entire AI safety community.

Near-miss reporting systems: Similarly, near-miss reporting involves disclosing incidents that could have resulted in injury, harm, or damage but were avoided. Such proactive reporting is a key tool to help organizations prevent “severe” incidents, by developing insight into the root causes behind safety issues before they occur. Given that AI systems are widely predicted to have the potential to cause catastrophically dangerous incidents, responsible disclosure of near-miss incidents remains a critical gap.

International coordination: Most incident reporting systems today are implemented on a national level. To promote the sharing of critical knowledge, key industries have developed bodies of international cooperation, such as the International Confidential Aviation Safety Systems (ICASS) Group or incident reporting systems managed by the International Atomic Energy Agency. Currently, there are no substantive international coordination proposals for AI incident reporting. We expect the development of such international bodies to enter the discussion in the next ~2-3 years, after national regulatory bodies are created and standardized.
