The 2024 China Regulatory Landscape

Published by Convergence Analysis, this series is designed to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current state of AI regulation.

Updated: April 28, 2024

Structure of AI Regulations

Over the past three years, China has passed a series of vertical regulations targeting specific domains of AI applications, led by the Cyberspace Administration of China (CAC). The three most relevant pieces of legislation include:

1. Algorithmic Recommendation Provisions: Initially published in August 2021, these provisions regulate recommendation algorithms, such as those that provide personalized rankings, search filtering, algorithmic decision-making, or “services with public opinion properties or social mobilization capabilities”. Notably, they created a mandatory algorithm registry requiring all qualifying algorithms operated by Chinese organizations to be registered within 10 days of public launch.

2. Deep Synthesis Provisions: Initially published in November 2022, these provisions regulate algorithms that synthetically generate content such as text, voice, images, or videos. Intended to combat the rise of “deepfakes”, they require content labeling, user identity verification, and measures by providers to prevent “misuse” as broadly defined by the Chinese government.

3. Interim Generative AI Measures: Initially published in July 2023, this set of regulations was a direct response to the wave of excitement following ChatGPT’s release in late 2022. It expands on the policies proposed in the Deep Synthesis Provisions to better encompass multi-use LLMs, strengthening provisions such as anti-discrimination requirements, requirements for training data, and alignment with national interests.

The language used by these AI regulations is typically broad, high-level, and non-specific. For example, Article 5 of the Interim Generative AI Measures states that providers should “Encourage the innovative application of generative AI technology in each industry and field [and] generate exceptional content that is positive, healthy, and uplifting”. In practice, this wording extends greater control to the CAC, allowing it to interpret its regulations as necessary to enforce its desired outcomes.

Notably, China created the first national algorithm registry in its 2021 Algorithmic Recommendation Provisions, focusing initially on capturing all recommendation algorithms used by consumers in China. By defining the concept of “algorithm” quite broadly, this registry often requires organizations to submit many separate, detailed reports for the various algorithms in use across their systems. In subsequent legislation, the CAC has continually expanded the scope of this algorithm registry to include updated forms of AI, including all LLMs and AI models capable of generating content.

What are the key traits of China’s AI governance strategy?

China’s governance strategy is focused on tracking and managing algorithms by their domain of use:

In particular, the CAC is developing legislation regulating all types of algorithms in use by Chinese citizens, not just LLMs or AI models. Based on its track record, we can expect that China will continue to expand the algorithm registry to cover a broader scope of algorithms over time.

China is taking a vertical, iterative approach to developing progressively more comprehensive legislation, by passing targeted regulations concentrating on a specific category of algorithms at a time:

The CAC has tended to focus on currently prominent domains of AI, drafting legislation when a new domain becomes socially relevant. In contrast to the US or EU, it appears to have deprioritized many domains outside this scope, such as regulating AI for healthcare, employment, law enforcement, and judicial systems.

These iterative regulations appear to be predecessors building towards a more comprehensive piece of legislation: an Artificial Intelligence Law, proposed in a legislative plan released in June 2023. This law is not expected to be published until late 2024, but will likely cover many domains of AI use, horizontally integrating China’s AI regulations.

China has demonstrated clear precedent for this model of passing iterative legislation in preparation for a comprehensive, all-encompassing law. In particular, it followed a similar process for internet regulation in the 2000s, capped by the comprehensive Cybersecurity Law passed in 2017.

China strongly prioritizes social control and alignment in its AI regulations:

In particular, the domains of AI technology selected for legislation clearly indicate the priorities of the Chinese government. Each of the provisions includes references to upholding “Core Socialist Values”, and contains more specific direction such as requirements to “respect social mores and ethics, and adhere to the correct political direction, public opinion orientation, and values trends, to promote progress and improvement” (Article 4, Deep Synthesis Provisions). The sweeping nature of these requirements allows for broad, and perhaps arbitrary, enforcement.

China has demonstrated an inward focus on regulating Chinese organizations and citizens:

Because China’s restrictive Great Firewall policies prevent many leading Western technology services from operating in China, these regulations primarily apply to Chinese technology companies serving Chinese citizens.

Major leading AI labs such as OpenAI, Anthropic, and Google do not actively serve Chinese consumers, in part because they are unwilling to comply with China’s censorship policies.

In many ways, Chinese AI governance operates on a parallel and disjoint basis to Western AI governance.

AI Evaluation & Risk Assessments

China’s Interim Measures for the Management of Generative AI Services do not include risk assessments or evaluations of AI models (though generative AI providers, rather than AI users, are responsible for harms, which may incentivize voluntary risk assessments).

There are mandatory “security assessments”, but we have not been able to discover their content or standards. These measures, together with the 2021 Algorithmic Recommendation Provisions and the 2022 Deep Synthesis Provisions, require AI developers to submit information to China’s algorithm registry, including passing a security self-assessment. AI providers add their algorithms to the registry along with some publicly available categorical data about each algorithm and a PDF file containing their “algorithm security self-assessment”. These uploaded PDFs are not available to the public, so “we do not know exactly what information is required in it or how security is defined”.

Note also that these provisions only apply to public-facing generative AI within China, excluding internal services used by organizations.

AI Model Registries

The People’s Republic of China (PRC) announced the earliest and still the most comprehensive algorithm registry requirements in 2021, as part of its Algorithmic Recommendation Provisions. It has gone on to extend the scope of this registry, as its subsequent regulations covering deep synthesis and generative AI also require developers to register their AI models.

Algorithmic Recommendation Provisions: The PRC requires that providers of algorithms with “public opinion properties or having social mobilization capabilities” submit basic information, such as the provider’s name, the algorithm’s domain of application, and a self-assessment report, to an algorithm registry within 10 days of publication. This requirement was primarily aimed at recommendation algorithms such as those used by TikTok or Instagram, but has since been expanded to cover many different definitions of “algorithms”, including modern AI models.

Deep Synthesis Provisions, Article 19: The PRC additionally requires that algorithms that synthetically generate novel content such as voice, text, image, or video content must be similarly filed to the new algorithm registry.

Generative AI Measures, Article 17: The PRC additionally requires that generative AI algorithms such as LLMs must be similarly filed to the new algorithm registry.

Of note, most of the algorithms regulated here were already covered by the 2022 Deep Synthesis Provisions, but the new Generative AI Measures more specifically target LLMs and allow for the regulation of services that operate offline.
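As an illustration, the filing requirements described above can be modeled as a simple record with a deadline check. This is a hypothetical sketch only: the registry’s actual schema is not public, and all field and class names here are assumptions; the only details taken from the regulations are the categories of information filed and the 10-day deadline.

```python
# Hypothetical model of an algorithm registry filing. The real schema is
# not public; field names are illustrative assumptions. The 10-day filing
# deadline comes from the 2021 Algorithmic Recommendation Provisions.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class AlgorithmFiling:
    provider_name: str
    algorithm_name: str
    domain_of_application: str   # e.g. "personalized ranking"
    self_assessment_pdf: str     # the non-public security self-assessment
    public_launch: date
    filed_on: date

    def filed_within_deadline(self) -> bool:
        """The provisions require filing within 10 days of public launch."""
        return self.filed_on <= self.public_launch + timedelta(days=10)


filing = AlgorithmFiling(
    provider_name="Example Co.",           # hypothetical provider
    algorithm_name="feed-ranker-v2",
    domain_of_application="personalized ranking",
    self_assessment_pdf="assessment.pdf",
    public_launch=date(2024, 3, 1),
    filed_on=date(2024, 3, 8),
)
print(filing.filed_within_deadline())  # True: filed 7 days after launch
```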

AI Incident Reporting

The PRC is developing a governmental incident reporting database, as announced in the Draft Measures on the Reporting of Cybersecurity Incidents on Dec 20th, 2023. This proposed legislation categorizes cybersecurity incidents into four levels of severity (“Extremely Severe”, “Severe”, “Relatively Severe”, and “General”), and requires that the top three levels (“Critical Incidents”) be reported to governmental authorities within one hour of occurrence. The criteria for meeting the level of “Critical” incidents include the following:

Interruption of overall operation of critical information infrastructure for more than 30 minutes, or its main function for more than two hours;

Incidents affecting the work and life of more than 10% of the population in a single city-level administrative region;

Incidents affecting the water, electricity, gas, oil, heating or transportation usage of more than 100,000 people;

Incidents causing direct economic losses of more than RMB 5 million (around $694k USD)
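For illustration, the “Critical” criteria above can be expressed as a simple severity check. This is a hypothetical sketch, not an official schema: the field and function names are assumptions, and only the numeric thresholds are taken from the draft measures.

```python
# Illustrative sketch only: the draft measures publish thresholds, not a
# machine-readable schema. Field names are hypothetical; the numeric
# limits are those listed for "Critical" incidents.
from dataclasses import dataclass


@dataclass
class Incident:
    infra_outage_minutes: float           # overall outage of critical infrastructure
    main_function_outage_hours: float     # outage of its main function
    city_population_affected_pct: float   # share of one city-level region affected
    utility_users_affected: int           # water/electricity/gas/oil/heating/transport
    direct_losses_rmb: float


def is_critical(incident: Incident) -> bool:
    """True if any draft-measure threshold for a 'Critical' incident is met
    (i.e. the incident would be reportable within one hour)."""
    return (
        incident.infra_outage_minutes > 30
        or incident.main_function_outage_hours > 2
        or incident.city_population_affected_pct > 10
        or incident.utility_users_affected > 100_000
        or incident.direct_losses_rmb > 5_000_000
    )


# A utility outage affecting 250,000 people crosses the 100,000 threshold.
print(is_critical(Incident(0, 0, 0, 250_000, 0)))  # True
```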

Though this set of measures does not directly mention frontier AI models as a target for enforcement, any of the negative outcomes above resulting from the use of frontier AI models would be reported under the same framework. This draft measure can be understood as the CAC pursuing two major goals:

Consolidating disparate reporting requirements across various laws regarding cybersecurity incidents.

Developing regulatory infrastructure in preparation for an evolving cybersecurity landscape, particularly with respect to advanced AI.

Elsewhere, leading Chinese AI regulatory measures make reference to reporting key events (specifically the distribution of unlawful information) to the Chinese government, but none of them have specific requirements for the creation of an incident reporting database:

Algorithmic Recommendation Provisions, Article 7: Service providers shall…establish and complete management systems and technical measures…[such as] security assessment and monitoring and security incident response and handling.

Article 9: Where unlawful information is discovered…a report shall be made to the cybersecurity and informatization department and relevant departments.

Deep Synthesis Provisions, Article 10: Where deep synthesis service providers discover illegal or negative information, they shall…promptly make a report to the telecommunications department or relevant departments in charge.

Generative AI Measures, Article 14: Where providers discover illegal content they shall promptly employ measures to address it such as stopping generation, stopping transmission, and removal, employ measures such as model optimization training to make corrections and report to the relevant departments in charge.

Open-Source AI Models

There is no mention of open-source models in China’s regulations between 2019 and 2023; open-source models are neither exempt from any aspects of the legislation, nor under any additional restrictions or responsibilities.

Cybersecurity of Frontier AI Models

China maintains a complex, detailed, and thorough set of data privacy requirements developed over the past two decades via legislation such as the PRC Cybersecurity Law, the PRC Data Security Law, and the PRC Personal Information Protection Law. Together, they constitute strong protections mandating the confidential treatment and encryption of personal data stored by Chinese corporations. Additionally, the PRC Cybersecurity Law imposes data localization requirements mandating that the user data of Chinese citizens be stored on servers in mainland China, giving the Chinese government more direct means to access and control the usage of this data. All of these laws apply to data collected from users of LLMs in China.

China’s existing AI-specific regulations largely mirror the data privacy policies laid out in previous legislation, and often refer directly to such legislation for specific requirements. In particular, they extend data privacy requirements to the training data collected by Chinese organizations. However, they do not introduce any specific requirements for the cybersecurity of frontier AI models, such as properly securing model weights or codebases. 

China’s Deep Synthesis Provisions include the following:

Article 7: Requires service providers to implement primary responsibility for information security, such as data security, personal information protection, and technical safeguards.

Article 14: Requires service providers to strengthen the management and security of training data, especially personal information included in training data.

China’s Interim Generative AI Measures mention the following:

Article 7: Requires service providers to handle training data in accordance with the Cybersecurity Law and Data Security Law when carrying out pre-training and optimization of models.

Article 9: Requires that service providers bear responsibility for fulfilling online information security obligations in accordance with the law.

Article 11: Requires providers to keep user input information and usage records confidential and not illegally retain or provide such data to others.

Article 17: Mandates security assessments for AI services with public opinion properties or social mobilization capabilities.

AI Discrimination Requirements

Two major pieces of Chinese legislation have made references to combating AI discrimination. Though the language around discrimination was scrapped in the first, the 2023 generative AI regulations include binding but non-specific language requiring compliance with anti-discrimination policies for AI training and inference.

Algorithmic Recommendation Provisions, Article 10: The initial interim draft of this legislation prohibited the use of “discriminatory or biased user tags” in algorithmic recommendation systems. However, this language was removed in the final version effective in March 2022.

Generative AI Measures, Article 4.2: This draft calls for the following: “During processes such as algorithm design, the selection of training data, model generation and optimization, and the provision of services, effective measures are to be employed to prevent the creation of discrimination such as by race, ethnicity, faith, nationality, region, sex, age, profession, or health”.

AI Disclosures

China’s 2022 rules for deep synthesis, which address the online provision and use of deepfakes and similar technology, require providers to watermark and conspicuously label deepfakes. The regulation also requires the notification and consent of any individual whose biometric information is edited (e.g. whose voice or face is edited into or added to audio or visual media).

The 2023 Interim Measures for the Management of Generative AI Services, which address public-facing generative AI in mainland China, require content created by generative AI to be conspicuously labeled as such and digitally watermarked. Developers must also clearly label the data they use to train AI, and disclose the users and user groups of their services.

AI and Chemical, Biological, Radiological, & Nuclear Hazards

China’s three most important AI regulations do not contain any specific provisions for CBRN hazards.