2024 State of the AI Regulatory Landscape

Published by Convergence Analysis, this series is designed to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current state of AI regulation.

AI Model Registries

Last updated Mar 22, 2024

Author's Note

This report is one in a series of ~10 posts comprising a State of the AI Regulatory Landscape in 2024 Review, conducted by the Governance Research Program at Convergence Analysis. Each post will cover a specific domain of AI governance. We’ll provide an overview of existing regulations, focusing on the US, EU, and China as the leading governmental bodies currently developing AI legislation. Additionally, we’ll discuss the relevant context behind each domain and conduct a short analysis.

This series is intended to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current AI governance space. We’ll be releasing a comprehensive report at the end of this series.

What are model registries? Why do they matter?

Model registries, in the context of AI regulation, are centralized governance databases of AI models intended to track and monitor AI systems in real-world use. These registries typically mandate the submission of a new algorithm or AI model to a governmental body prior to public release. 

Such registries will usually require basic information about each model, such as their purpose or primary functions, their computational size, and features of their underlying algorithms. In certain cases, they may request more detailed information, such as the model’s performance under particular benchmarks, a description of potential risks or hazards that could be caused by the model, or even certification that they have passed safety assessments designed to prove that the model will not cause harm.
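To make the categories above concrete, here is a minimal sketch of what a single governance registry record might look like. All field names here are illustrative assumptions, not drawn from any specific regulation:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the fields a governance model registry might
# collect, mirroring the categories described above: purpose, size,
# algorithmic features, and optional safety information.

@dataclass
class ModelRegistryEntry:
    provider: str                   # responsible organization
    model_name: str
    purpose: str                    # primary function / domain of application
    training_flop: float            # computational size of the training run
    architecture: str               # features of the underlying algorithm
    benchmark_results: dict = field(default_factory=dict)  # optional detail
    risk_assessment: str = ""       # description of potential hazards
    safety_certified: bool = False  # passed mandated safety assessments

# Example (entirely fictional) submission:
entry = ModelRegistryEntry(
    provider="Example AI Lab",
    model_name="example-model-1",
    purpose="general-purpose text generation",
    training_flop=9e24,
    architecture="decoder-only transformer",
)
```

A real registry would add jurisdiction-specific fields (e.g. a self-assessment report in China, or representative contact details in the EU), but the basic/optional split above reflects how the requirements described in this section are tiered.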

Model registries allow governmental bodies to keep track of the AI industry, providing an overview of key models currently available to the public. Such registries also function as a foundational tool for AI governance – enabling future legislation targeted at specific AI models. 

These registries adhere to the governance model of “models as a point of entry”, allowing governments to focus their regulations on individual AI models rather than regulating the entire corporation, access to compute resources, or creating targeted regulations for specific algorithmic use cases.

As these model registries are an emerging form of AI governance with no direct precedents, their requirements, reporting methods, and thresholds vary widely between implementations. Some registries may be publicly accessible, providing greater accountability and transparency, whereas others may be limited to regulatory use only (e.g. when model data contains sensitive or dangerous information). Some may mandate reporting for certain classes of AI algorithms (as in China), whereas others may require registration only for leading AI models with high compute requirements (as in the US).

Note

The phrase “model registry” may also often be used to refer to a (typically) private database of trained ML models, often used as a version control system for developers to compare different training runs. This is a separate topic from model registries for AI governance.

What are some precedents for mandatory government registries?

While algorithm and AI model registries are a new domain, many precedent policies exist for tracking the development and public release of novel public products. For example, reporting requirements for pharmaceuticals are well established and closely regulated, as monitored by the Food and Drug Administration (FDA) in the US and the European Medicines Agency (EMA) in the EU. Such registries typically require:

Basic information, such as active ingredients, method of administration, recommended dosage, adverse effects, and contraindications.

Mandatory clinical testing demonstrating drug safety and efficacy before public release.

Postmarket surveillance, including requirements around incident reporting, potential investigations, and methods for drug recalls or relabeling.

Many of these structural requirements will transfer over directly to model reporting, including a focus on transparent reporting, pre-deployment safety testing by unbiased third-parties, and postmarket surveillance.

What are current regulatory policies around model registries?

China

The People’s Republic of China (PRC) announced the earliest and still the most comprehensive algorithm registry requirements in 2021, as part of its Algorithmic Recommendation Provisions. It has gone on to extend the scope of this registry, as its subsequent regulations covering deep synthesis and generative AI also require developers to register their AI models.

Algorithmic Recommendation Provisions: The PRC requires that algorithms with “public opinion properties or having social mobilization capabilities” report basic data such as the provider’s name, domain of application, and a self-assessment report to an algorithm registry within 10 days of publication. This requirement was primarily aimed at recommendation algorithms such as those used in TikTok or Instagram, but has since been expanded to cover many different definitions of “algorithms”, including modern AI models.

Deep Synthesis Provisions, Article 19: The PRC additionally requires that algorithms that synthetically generate novel content such as voice, text, image, or video content must be similarly filed to the new algorithm registry.

Generative AI Measures, Article 17: The PRC additionally requires that generative AI algorithms such as LLMs must be similarly filed to the new algorithm registry.

Of note, most of the algorithms regulated here were already covered by the 2022 Deep Synthesis Provisions, but the new Generative AI Measures more specifically target LLMs and allow for the regulation of services that operate offline.

The EU

Via the EU AI Act, the EU has opted to categorize AI systems into tiers of risk by their use cases, notably splitting permitted AI systems into high-risk and limited-risk categorizations. In particular, it requires that high-risk AI systems must be entered into an EU database for tracking.

As specified in Article 60 & Annex VIII, this database is intended to be maintained by the European Commission and should contain primarily basic information such as the contact information for representatives for said AI system. It constitutes a fairly lightweight layer of tracking, and appears intended to be used primarily as a contact directory alongside other, much more extensive regulatory requirements for high-risk AI systems.

The US

The US has chosen to actively pursue “compute governance as an entry point”: that is, it focuses on categorizing and regulating AI models by the compute power necessary to train them, rather than by the use case of the AI model.

In particular, it has concentrated its binding AI regulations around restricting the export of high-end AI chips to China in preparation for a geopolitical AI arms race.

As of Biden’s 2023 Executive Order on AI, there is now a set of preliminary rules requiring the registration of models meeting a certain criteria of compute power. However, this threshold has currently been set beyond the compute power of any existing models, and as such is likely only to impact the next generation of LLMs.

Section 4.2.b specifies that the reporting requirements are enforced for models trained with greater than 10^26 floating-point operations, or computing clusters with a theoretical maximum computing capacity of 10^20 floating-point operations per second.

For comparison, GPT-4, one of today’s most advanced models, was likely trained with approximately 10^25 floating-point operations.
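The gap between that estimate and the reporting threshold can be checked with the widely used back-of-the-envelope approximation C ≈ 6ND (total training FLOPs from parameter count N and training tokens D). The model sizes below are illustrative assumptions, not reported figures for any real system:

```python
# Rough training-compute estimate using the common C ≈ 6 * N * D
# approximation for dense transformers (N = parameters, D = tokens).

EO_MODEL_THRESHOLD_FLOP = 1e26  # Section 4.2.b model-training threshold

def training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * n_params * n_tokens

# Hypothetical frontier model: 1e12 parameters, 1.5e13 training tokens.
c = training_flop(1e12, 1.5e13)  # 9e25 FLOPs, just under the threshold
print(c > EO_MODEL_THRESHOLD_FLOP)
```

Under these assumptions, even a trillion-parameter model trained on 15 trillion tokens would fall just short of the 10^26 cutoff, consistent with the point that the threshold sits above all current models.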

Reporting requirements seem intentionally broad and extensive, specifying that qualifying companies must report on an ongoing basis:

Section 4.2.i.a: Any ongoing or planned activities related to training, developing, or producing dual-use foundation models, including the physical and cybersecurity protections taken to assure the integrity of that training process against sophisticated threats.

Section 4.2.i.b: The ownership and possession of the model weights of any dual-use foundation models, and the physical and cybersecurity measures taken to protect those model weights.

Section 4.2.i.c: The results of any developed dual-use foundation model’s performance in relevant AI red-team testing.

Convergence’s Analysis

Model registries appear to be a critical tool for governments to proactively enforce long-term control over AI development.

The US, EU, and China have now incorporated some form of a model registry as a supplement to their existing regulatory portfolio.

In particular, the types of models that each governmental body requires to be registered are a clear indicator of its longer-term priorities for AI regulation, as discussed below.

We should expect that leading governmental bodies will require additional safety assessments and recurring monitoring reports for registered models as AI capabilities accelerate.

The US, EU, and China are pursuing substantially differing goals in their approaches to model registries as an entry point to regulation.

In China, the model registry appears to be first and foremost a tool for aligning algorithms with the political and social agendas of the Chinese Communist Party. It’s focused largely on tracking algorithmic use cases that involve recommending and generating novel content to Chinese users, particularly those with “public opinion properties” or “social mobilization capabilities”.

In the EU, AI legislation is preoccupied primarily with protecting the rights and freedoms of its citizens. As a result, the high-risk AI systems for which it requires registration are confined primarily to use cases deemed dangerous in terms of reducing equity, justice, or access to basic resources such as healthcare or education.

The US government appears to have two primary goals: to control the potential risks and distribution of frontier AI models, and to avoid limiting the current rate of AI development.

In particular, it has decided to require registration for cutting-edge LLMs based solely on the raw compute used to train them, rather than on any specific use case, in contrast to both China and the EU.

Additionally, it appears to be placing a priority on protecting these models from external cybersecurity threats, requiring that organizations report the measures they have taken to protect these models from being accessed or stolen. Given its current position on the export of high-end AI chips and its long history with military IP theft, it’s clear that the US views the protection of cutting-edge AI models as a matter of national security.

Finally, none of these model registry requirements will come into effect until the next generation of frontier AI models is released sometime in 2024 or 2025. To this point, the Biden administration has cautiously avoided creating any binding regulations that might impede the rate of AI capabilities development among leading American AI labs.

Model registries will serve as a foundational tool for governments to enact additional regulations around AI development.

Much in the same way drug registries are used as a foundational tool for the FDA to control the development and public usage of pharmaceuticals, model registries will be a critical component for governments to control public AI model usage.

Model registries will enable the creation and improved enforcement of regulations such as:

Mandating specific sets of pre-deployment safety assessments, or certification by certain organizations before public deployment

Transparency requirements for AI models such as disclosures

Incident reporting involving specific models and civil liabilities for damages caused by specific AI models

Postmarket surveillance such as post-deployment evaluations, regulatory investigations, and the potential disabling of non-compliant or risky models
