2024 State of the AI Regulatory Landscape

Published by Convergence Analysis, this series is designed to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current state of AI regulation.

Structure of AI Regulations

Deric Cheng

Governance Research Lead

Last updated May 07, 2024

Author's Note

This report is one in a series of ~10 posts comprising a State of the AI Regulatory Landscape in 2024 Review, conducted by the Governance Research Program at Convergence Analysis. Each post will cover a specific domain of AI governance. We’ll provide an overview of existing regulations, focusing on the US, EU, and China as the leading governmental bodies currently developing AI legislation. Additionally, we’ll discuss the relevant context behind each domain and conduct a short analysis.

This series is intended to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current AI governance space. We’ll be releasing a comprehensive report at the end of this series.

In this section, we’ll discuss a multifaceted, high-level topic: How are current AI regulatory policies structured, and what are the advantages and disadvantages of their choices? By focusing on the existing regulatory choices of the EU, US, and China, we’ll compare and contrast key decisions in terms of classifying AI models and the organization of existing AI governance structures.

What are possible approaches to classify AI systems for governance?

Before passing any regulations, governments must answer for themselves several challenging, interrelated questions to lay the groundwork for their regulatory strategy:

How will we classify AI systems - by their capabilities, amount of compute, domain of application, risk level, underlying architecture, or otherwise?

Who will these regulations apply to – organizations, individuals, or companies?

Who will possess legal responsibility for harm generated by AI systems - the AI lab developing the core model, the enterprise business deploying it, or the customer using it?

What is the correct tradeoff between encouraging development & innovation and mitigating risks from AI systems?

Complicating the matter, even precisely defining what an AI system is can be challenging: as a field, AI today encompasses many different forms of algorithms and structures. You’ll find overlapping and occasionally conflicting definitions of what constitutes “models”, “algorithms”, “AI”, “ML”, and more. In particular, the latest wave of foundational large language models (LLMs, such as ChatGPT) goes by varying names under different governance structures and contexts, such as general-purpose AI (GPAI), dual-use foundation models, frontier AI models, or simply generative AI.

For the purposes of this review, we’ll rely on an extremely broad definition of AI systems from IBM: “A program that has been trained on a set of data to recognize certain patterns or make certain decisions without further human intervention.”

There are various viable approaches to sorting AI models or algorithms into “regulatory boxes”. Many of these approaches may overlap with each other, or be layered to form a comprehensive, effective governance strategy. We’ll discuss some of them below:

Classifying AI models by application: This approach focuses on classifying and regulating AI models based on the intended domain of usage. For instance, AI models for improving patient healthcare should fall under HIPAA regulations, AI models for screening resumes should be subject to anti-discrimination law, and so on.

Though this is an intuitive strategy that is well supported by existing regulatory precedent, it can have substantial gaps for novel uses of AI models that do not fit into existing applications.

This approach is facing significant challenges with the development of foundational LLMs, which can be effective tools in a variety of domains simultaneously. As a result, new regulatory frameworks often carve out a specific set of policies targeting these models separately, as was the case with the 2022 modifications to the EU AI Act defining “general-purpose AI (GPAI)”.

Classifying AI models by compute: This approach focuses primarily on the amount of computational power (often called “compute”) required to train or develop AI models. In practice, the capabilities of foundational AI models strongly correspond to the amount of training data and computational power used to generate the model, though this is a metric that is heavily impacted by technical research, algorithmic design, and data quality. Such an approach regards the models trained with the most compute as the most likely to cause harm, and therefore the most important to regulate.

Classifying AI models by risk level: This approach focuses on classifying AI models by the risk that they may pose to society, and applying regulations based on the measured level of risk. This may directly overlap with the previous strategies. Measuring this risk can be done in a number of ways:

A proposed governance framework (Responsible Scaling Policies) by Anthropic suggests that organizations should measure specific dangerous capabilities of their AI models, and impose limitations on development (either independently or via governmental regulation) based on the results.

As in the EU AI Act, certain applications of AI models may inherently be deemed high-risk, and therefore subject to a separate set of regulations.

As in the US Executive Order, AI models trained above a certain threshold of computational power may be deemed risky enough to regulate.

Considering AI models to be “algorithms”: As is currently the case in China, AI models may be considered just a subclass of “algorithms”, which more broadly includes computer programs such as recommendation algorithms, translation features, and more. By regulating algorithms as a whole, governments may include AI model governance as a component of a broader package of legislation around modern digital technology.

Certain regulatory approaches may involve a combination of two or more of these classifications. For example, the US Executive Order identifies a lower compute threshold for mandatory reporting for models trained on biological data, combining compute-level and application-level classifications.
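
To make this layering concrete, here is a minimal, illustrative sketch of how a combined compute-and-application check might look in code. The thresholds mirror the figures commonly cited from the US Executive Order (roughly 10^26 operations in general, and 10^23 for models trained primarily on biological sequence data), but the function and variable names are our own hypothetical constructions, not anything drawn from the Order's text.

```python
# Illustrative only: thresholds mirror figures commonly cited from the
# US Executive Order (1e26 FLOP generally, 1e23 FLOP for models trained
# primarily on biological sequence data); all names here are hypothetical.

GENERAL_REPORTING_THRESHOLD_FLOP = 1e26
BIO_REPORTING_THRESHOLD_FLOP = 1e23   # lower bar for biological-data models


def requires_reporting(training_flop: float, trained_on_bio_data: bool) -> bool:
    """Combine a compute-level test with an application-level carve-out."""
    threshold = (
        BIO_REPORTING_THRESHOLD_FLOP if trained_on_bio_data
        else GENERAL_REPORTING_THRESHOLD_FLOP
    )
    return training_flop >= threshold


# Example: a 5e25-FLOP general-purpose model falls under the general bar,
# but the same compute spent on a biological-sequence model would trigger
# the (hypothetical) reporting check.
print(requires_reporting(5e25, trained_on_bio_data=False))  # False
print(requires_reporting(5e25, trained_on_bio_data=True))   # True
```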

Point of Regulation

Closely tied to this set of considerations is the concept of point of regulation – where in the supply chain governments decide to target their policies and requirements. Governments must identify the most effective regulatory approaches to achieve their objectives, considering factors such as their level of influence and the ease of enforcement at the selected point.

The way AI systems are classified under a government's regulatory framework directly informs the methods they employ for regulation. That is, the classification strategy and the point of regulation are interdependent decisions that shape a government’s overall regulatory strategy for AI.

As an example:

As American companies hold a 95% share of the high-end AI chip market, the US has found it effective to regulate physical exports of these chips to minimize Chinese access in pursuit of its geopolitical goals. As such, its primary point of regulation at this time targets high-end AI chip vendors, distributors, and exporters of AI chips. In contrast, it has little to no binding regulation regarding the design, sharing, or commercialization of AI models such as ChatGPT at this time.

Conversely, the EU has chosen to concentrate its binding regulation around access to AI models, as its main priority is protecting the individual rights of citizens using these models. As such, it focuses on strict requirements regarding the behavior, transparency, and reporting of AI models, to be met by the organizations publishing such models for commercial use.

Two important dimensions in designing regulatory structures for AI governance

How should a government structure its AI governance, and what factors might it depend on? We’ll outline several relevant considerations that will come up again as we discuss specific governments’ approaches to legislation.

Centralized vs. Decentralized Enforcement

In a centralized AI governance system, a single agency or regulatory body may be responsible for implementing, monitoring, and enforcing legislation. Such a body may be able to operate more efficiently by consolidating technical expertise, resources, and jurisdiction. For example, a single agency could coordinate more easily with AI labs to design a single framework for regulating multi-functional LLMs, or be able to better fund technically complex safety evaluations by hiring leading safety researchers.

However, such an agency may fail to effectively account for the varied uses of AI technology, or lean too far towards “one-size-fits-all” regulatory strategies. For example, a single agency may be unable to simultaneously effectively regulate use-cases of LLMs in healthcare (e.g. complying with HIPAA regulations), content creation (e.g. preventing deepfakes), and employment (e.g. preventing discriminatory hiring practices), as it may become resource constrained and lack domain expertise. A single agency may also be more susceptible to regulatory capture from AI labs.

In contrast, decentralized enforcement may spread ownership of AI regulation across a variety of agencies or organizations focused on different concerns, such as the domain of application or method of oversight. This approach might significantly improve the application of governance to specific AI use-cases, but risks stretching agencies thin as they struggle to independently evaluate and regulate rapidly-developing technologies. 

Decentralized governmental bodies may not take ownership of novel AI technologies without clear precedent (such as deepfakes), and key issues may “slip between the gaps” of different regulatory agencies. Alternatively, they might attempt to overfit existing regulatory structures onto novel technologies, with disastrous outcomes for innovation. For example, the SEC’s attempt to map emerging cryptocurrencies onto its existing definition of securities has led it to declare that the majority of cryptocurrency projects are unlicensed securities subject to shutdown.

Vertical vs. Horizontal Regulations

A very similar set of arguments can be applied to the regulations themselves. A horizontally-integrated AI governance effort (such as the EU AI Act) applies new legislation to all use cases of AI, effectively forcing any AI models in existence to comply with a wide-ranging and non-specific set of regulations. Such an approach can provide a comprehensive, clearly defined structure for new AI development, simplifying compliance. However, horizontally-integrated policies can also be criticized for “overreaching” in scope, by applying regulations too broadly before legislators have developed expertise in managing a new field, and potentially stifling innovation as a result.

In contrast, vertical regulations may be able to target a single domain of interest precisely, focusing on a narrow domain like “recommendation algorithms”, “deepfakes”, or “text generation” as demonstrated by China’s recent AI regulatory policies. Such vertical regulations can be more straightforward to implement and enforce than a broad set of horizontal regulations, and can allow legislators to concentrate on effectively managing a narrow set of use cases and considerations. However, they may not account effectively for AI technologies that span multiple domains, and could eventually lead to piecemeal, conflicting results as different vertical “slices” take disjointed approaches to regulating AI technologies.

How are leading governments approaching AI Governance?

China

Over the past three years, China has passed a series of vertical regulations targeting specific domains of AI applications, led by the Cyberspace Administration of China (CAC). The three most relevant pieces of legislation include:

1. Algorithmic Recommendation Provisions: Initially published in August 2021, these provisions enforce a series of regulations targeting recommendation algorithms, such as those that provide personalized rankings, search filters, decision making, or “services with public opinion properties or social mobilization capabilities”. Notably, they created a mandatory algorithm registry requiring Chinese organizations to register all qualifying algorithms within 10 days of public launch.

2. Deep Synthesis Provisions: Initially published in November 2022, these provisions create a series of regulations for the use of algorithms that synthetically generate content such as text, voice, images, or videos. Intended to combat the rise of “deepfakes”, they require labeling and user identification, and oblige providers to prevent “misuse” as broadly defined by the Chinese government.

3. Interim Generative AI Measures: Initially published in July 2023, this set of regulations was a direct response to the wave of excitement that followed ChatGPT’s release in late 2022. It expands on the policies proposed in the Deep Synthesis Provisions to better encompass multi-use LLMs, strengthening provisions such as anti-discrimination requirements, requirements for training data, and alignment with national interests.

The language used by these AI regulations is typically broad, high-level, and non-specific. For example, Article 5 of the Interim Generative AI Measures states that providers should “Encourage the innovative application of generative AI technology in each industry and field [and] generate exceptional content that is positive, healthy, and uplifting”. In practice, this wording extends greater control to the CAC, allowing it to interpret its regulations as necessary to enforce its desired outcomes.

Notably, China created the first national algorithm registry in its 2021 Algorithmic Recommendation Provisions, focusing initially on capturing all recommendation algorithms used by consumers in China. Because the provisions define the concept of “algorithm” quite broadly, the registry often requires organizations to submit many separate, detailed reports for the various algorithms in use across their systems. In subsequent legislation, the CAC has continually expanded the scope of this algorithm registry to include updated forms of AI, including all LLMs and AI models capable of generating content.

What are the key traits of China’s AI governance strategy?

China’s governance strategy is focused on tracking and managing algorithms by their domain of use:

In particular, the CAC is developing legislation regulating all types of algorithms in use by Chinese citizens, not just LLMs or AI models. Based on its track record, we can expect that China will continue to expand the algorithm registry to include a broader scope of algorithms over time.

China is taking a vertical, iterative approach to developing progressively more comprehensive legislation, by passing targeted regulations concentrating on a specific category of algorithms at a time:

The CAC has tended to focus on current domains in AI, drafting legislation when a new domain becomes socially relevant. In contrast to the US or EU, it appears to have deprioritized many domains outside of this scope, such as regulating AI for healthcare, employment, law enforcement, judicial systems and more.

These iterative regulations appear to be predecessors building towards a more comprehensive piece of legislation: an Artificial Intelligence Law, proposed in a legislative plan released in June 2023. This law is not expected to be published until late 2024, but will likely cover many domains of AI use, horizontally integrating China’s AI regulations.

China has demonstrated clear precedent for this model of passing iterative legislation in preparation for a comprehensive, all-encompassing law. In particular, it followed a similar process for internet regulation in the 2000s, capped by an all-encompassing Cybersecurity Law passed in 2017.

China strongly prioritizes social control and alignment in its AI regulations:

In particular, the domains of AI technology selected for legislation clearly indicate the priorities of the Chinese government. Each of the provisions includes references to upholding “Core Socialist Values”, and contains more specific direction such as requirements to “respect social mores and ethics, and adhere to the correct political direction, public opinion orientation, and values trends, to promote progress and improvement” (Article 4, Deep Synthesis Provisions). The sweeping nature of these requirements allows for broad and perhaps arbitrary enforcement.

China has demonstrated an inward focus on regulating Chinese organizations and citizens:

Because China’s restrictive Great Firewall policies prevent many leading Western technology services from operating in the country, these regulations primarily apply to Chinese technology companies serving Chinese citizens.

Major leading AI labs such as OpenAI, Anthropic, and Google do not actively serve Chinese consumers, in part because they are unwilling to comply with China’s censorship policies.

In many ways, Chinese AI governance operates in parallel with, and largely disjoint from, Western AI governance.

The EU

The European Union (EU) has conducted almost all of its AI governance initiatives within a single piece of legislation: the EU AI Act, formally adopted in March 2024. Initially proposed in 2021, this comprehensive legislation aims to regulate AI systems based on their potential risks and safeguard the rights of EU citizens.

At the core of the EU AI Act is a risk-based approach to AI regulation. The act classifies AI systems into four categories: unacceptable risk, high risk, limited risk, and minimal risk. Unacceptable risk AI systems, such as those that manipulate human behavior or exploit vulnerabilities, are banned outright. High-risk AI systems, including those used in critical infrastructure, education, and employment, are subject to strict requirements and oversight. Limited risk AI systems require transparency measures, while minimal risk AI systems are largely unregulated.
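
To make the tiered structure easier to see at a glance, here is a schematic sketch in code. The tier names follow the Act as described above, but the summarized obligations and the example use-case mapping are simplified illustrations of our own, not the Act’s legal definitions.

```python
# Schematic sketch of the EU AI Act's four-tier, risk-based structure.
# Tier names follow the Act; the obligations and example mapping below are
# simplified illustrations, not legal definitions.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "banned outright (e.g. behavioral manipulation, social scoring)"
    HIGH = "strict requirements and oversight (e.g. critical infrastructure, employment)"
    LIMITED = "transparency obligations (e.g. disclosing AI-generated content)"
    MINIMAL = "largely unregulated"


# Hypothetical mapping from intended use to tier, for illustration only.
EXAMPLE_USE_CASES = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "resume_screening": RiskTier.HIGH,
    "customer_support_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```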

In direct response to the public emergence of foundational AI models in late 2022, beginning with the launch of ChatGPT, the Act includes clauses specifically addressing the challenges posed by general-purpose AI (GPAI). GPAI systems, which can be adapted for a wide range of tasks, are subject to additional requirements, including potential categorization as high-risk systems depending on their intended domain of use.

What are the key traits of the EU’s AI governance strategy?

The EU AI Act is a horizontally integrated, comprehensive piece of legislation implemented by a centralized body:

The EU AI Act classifies all AI systems used within the EU into four distinct risk levels, and assigns clear requirements to each set of AI systems. As a result, it’s the most comprehensive legal framework for AI systems today. Though it has generally been well received, it has also drawn criticism from member countries for being overly restrictive and potentially stifling AI innovation within the EU.

To oversee the implementation and enforcement of the EU AI Act, the legislation establishes the European AI Office. This dedicated body is responsible for coordinating compliance, providing guidance to businesses and organizations, and enforcing the rules set out in the act. As the leading agency enforcing binding AI rules on a multinational coalition, it will shape the development and governance of AI globally, much as the GDPR led to an international restructuring of internet privacy standards.

The EU has demonstrated a clear prioritization of the protection of citizens’ rights:

The EU AI Act’s core approach to categorizing risk levels is designed primarily around measuring the ability of AI systems to infringe on the rights of EU citizens.

This can be observed in the list of use cases deemed to be high-risk, such as educational or vocational training, employment, migration & asylum, and administration of justice or democratic processes.

This is in direct contrast to China’s AI governance strategy, which is designed largely to give the government greater control over generated content and recommendations.

Most of the requirements are designed with the common citizen in mind, such as transparency and reporting requirements, the ability of any citizen to lodge a complaint with a market surveillance authority, prohibitions on social scoring systems, and anti-discrimination requirements.

Few protections are included for corporations or organizations running AI systems. The fines for non-compliance are quite high, ranging from 1.5% to 7% of a firm’s global annual turnover or a fixed sum in the millions of euros, whichever is greater.

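As a rough illustration of the “whichever is greater” mechanism, here is a minimal sketch assuming the commonly cited top and bottom tiers (€35 million or 7% of worldwide annual turnover for prohibited practices; €7.5 million or 1.5% for supplying incorrect information). The exact tiers and how they apply are defined by the Act itself; the code below is purely illustrative.

```python
# Minimal sketch of the "whichever is greater" fine mechanism.
# The tier figures are the commonly cited ones (EUR 35M / 7% for prohibited
# practices, EUR 7.5M / 1.5% for supplying incorrect information); consult
# the Act itself for the authoritative schedule.

FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "incorrect_information": (7_500_000, 0.015),
}


def max_fine(violation: str, global_annual_turnover_eur: float) -> float:
    """Return the higher of the fixed cap and the turnover-based fine."""
    fixed_cap, turnover_share = FINE_TIERS[violation]
    return max(fixed_cap, turnover_share * global_annual_turnover_eur)


# Example: for a firm with EUR 2 billion in turnover, the 7% share
# (EUR 140M) exceeds the EUR 35M floor and therefore applies.
print(f"{max_fine('prohibited_practice', 2_000_000_000):,.0f}")   # 140,000,000
print(f"{max_fine('incorrect_information', 100_000_000):,.0f}")   # 7,500,000
```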

The EU AI Act implements strict and binding requirements for high-risk AI systems:

These binding requirements apply to the use cases deemed high-risk, such as educational or vocational training, employment, migration & asylum, and the administration of justice or democratic processes.

Limited-risk AI systems face significantly less stringent compliance requirements, but are subject to binding transparency obligations mandating that AI systems inform humans when sharing or distributing generated content.

The US

In large part due to legislative gridlock in the US Congress, the United States has taken an approach to AI governance centered around executive orders and non-binding declarations by the Biden administration. Though this approach has key limitations, such as the inability to allocate budget for additional programs, it has resulted in a significant amount of executive action over the past year. 

Three key executive actions stand out in shaping the US approach:

1. US / China Semiconductor Export Controls: Launched on Oct 7, 2022, these export controls (and subsequent updates) on high-end semiconductors used to train AI models mark a significant escalation in US efforts to restrict China's access to advanced computing and AI technologies. The rules, issued by the Bureau of Industry and Security (BIS), ban the export of advanced chips, chip-making equipment, and semiconductor expertise to China. They aim to drastically slow China's AI development and protect US national security by targeting the hardware essential to develop powerful AI models.

2. Blueprint for an AI Bill of Rights: Released in October 2022, this blueprint outlines five principles to guide the design, use, and deployment of automated systems to protect the rights of the American public. These principles include safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration, and fallback. While non-binding, the blueprint aims to inform policy decisions and align action across all levels of government.

3. The Executive Order on Artificial Intelligence: Issued in October 2023, this order directs various federal agencies to act to promote the responsible development and use of AI. It calls for these agencies to develop AI risk management frameworks, set AI standards and technical guidance, create better systems for AI oversight, and foster public-private partnerships. It marks the first comprehensive and coordinated effort to shape AI governance across the federal government, but lacks binding regulation or specific details, as it primarily orders individual agencies to publish reports on next steps.

What are the key traits of the US’ AI governance strategy?

The US’ initial binding regulations focus on classifying AI models by compute and regulating the underlying hardware:

The US has taken a distinctive approach to AI governance by controlling the hardware and computational power required to train and develop AI models. It is uniquely positioned to leverage this compute-based approach to regulation, as it is home to all leading vendors of high-end AI chips (Nvidia, AMD, Intel) and consequently has direct legislative control over these chips.

This is exemplified by the US-China export controls, which aim to restrict China's access to the high-end AI chips necessary for developing advanced AI systems by setting limits on the processing power & performance density of exportable chips.

This focus can also be seen in the Executive Order’s reporting requirements for AI models, which set thresholds on the total compute used in model training (measured in floating-point operations, FLOP) and on the computing capacity of large clusters (measured in FLOP/s).

Beyond export controls, the US appears to be pursuing a decentralized, largely non-binding approach relying on executive action:

Due to structural challenges in passing binding legislation through a divided Congress, the US has relied primarily on executive orders and agency actions to shape its AI governance strategy, which don’t require any congressional approval. It has chosen to decentralize its research and regulatory process by distributing such work among selected agencies.

Instead of including specific binding requirements in the US Executive Order on AI, the Biden administration has preferred to task various federal agencies with developing their own frameworks, standards, and oversight mechanisms. Most of these upcoming standards are still being developed and are not yet public.

Such executive orders are limited first and foremost by their lack of authority to allocate additional budget for specific policy implementations, a power controlled by Congress.

A secondary limitation is that executive orders are easy to repeal or reverse when the US presidency changes hands, meaning that even binding executive orders may not be enforced long-term.

The Blueprint for an AI Bill of Rights and the Executive Order on AI provide high-level guidance and principles but lack the binding force of law. They serve more as a framework for agencies to develop their own policies and practices, rather than a centralized, comprehensive regulatory regime like the EU AI Act.

US AI policy is strongly prioritizing its geopolitical AI arms race with China:

The US AI governance strategy is heavily influenced by the perceived threat of China's rapid advancements in AI and the potential implications for national security and the global balance of power. The only binding actions taken by the US (enforcing semiconductor export controls) are explicitly designed to counter China's AI ambitions and maintain the US' technological and military superiority.

This geopolitical focus sets the US apart from the EU, which has prioritized the protection of individual rights and the ethical development of AI, or China, which has prioritized internal social control and alignment with party values. The US strategy appears to be more concerned with the strategic implications of AI and ensuring that the technology aligns with US interests in the global arena.
