Policy Report
Published by Convergence Analysis, this series is designed to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current state of AI regulation.
Updated: April 28, 2024
Structure of AI Regulations
Over the past three years, China has passed a series of vertical regulations targeting specific domains of AI applications, led by the Cyberspace Administration of China (CAC). The three most relevant pieces of legislation include:
The language used by these AI regulations is typically broad, high-level, and non-specific. For example, Article 5 of the Interim Generative AI Measures states that providers should “Encourage the innovative application of generative AI technology in each industry and field [and] generate exceptional content that is positive, healthy, and uplifting”. In practice, this wording grants the CAC greater control, allowing it to interpret its regulations as necessary to enforce its desired outcomes.
Notably, China created the first national algorithm registry in its 2021 Algorithmic Recommendation Provisions, focusing initially on capturing all recommendation algorithms used by consumers in China. Because the registry defines the concept of “algorithm” quite broadly, organizations must often submit many separate, detailed reports for the various algorithms in use across their systems. In subsequent legislation, the CAC has continually expanded the scope of this algorithm registry to cover newer forms of AI, including all LLMs and AI models capable of generating content.
What are the key traits of China’s AI governance strategy?
China’s governance strategy is focused on tracking and managing algorithms by their domain of use:
China is taking a vertical, iterative approach to developing progressively more comprehensive legislation, passing targeted regulations that each concentrate on a specific category of algorithms:
China strongly prioritizes social control and alignment in its AI regulations:
China has demonstrated an inward focus on regulating Chinese organizations and citizens:
AI Evaluation & Risk Assessments
China’s Interim Measures for the Management of Generative AI Services don’t include risk assessments or evaluations of AI models (though generative AI providers, rather than AI users, are responsible for harms, which may incentivize voluntary risk assessments).
There are mandatory “security assessments”, but we haven’t been able to discover their content or standards. These measures, along with the 2021 Algorithmic Recommendation Provisions and the 2022 rules for deep synthesis, require AI developers to submit information to China’s algorithm registry, including passing a security self-assessment. AI providers add their algorithms to the registry along with some publicly available categorical data about the algorithm and a PDF file containing their “algorithm security self-assessment”. These uploaded PDFs aren’t available to the public, so “we do not know exactly what information is required in it or how security is defined”.
Note also that these provisions only apply to public-facing generative AI within China, excluding internal services used by organizations.
AI Model Registries
The People’s Republic of China (PRC) announced the earliest and still the most comprehensive algorithm registry requirements in 2021, as part of its Algorithmic Recommendation Provisions. It has gone on to extend the scope of this registry, as its subsequent regulations covering deep synthesis and generative AI also require developers to register their AI models.
AI Incident Reporting
The PRC is developing a governmental incident reporting database, as announced in the Draft Measures on the Reporting of Cybersecurity Incidents on Dec 20th, 2023. This proposed legislation categorizes cybersecurity incidents into four levels of severity (“Extremely Severe”, “Severe”, “Relatively Severe”, and “General”), and requires that incidents in the top three levels (“Critical Incidents”) be reported to governmental authorities within one hour of occurrence. The criteria for meeting the level of “Critical” incidents include the following:
Though this set of measures does not directly mention frontier AI models as a target for enforcement, any of the negative outcomes above resulting from the use of frontier AI models would be reported under the same framework. This draft measure can be understood as the Cyberspace Administration of China (CAC) pursuing two major goals:
Elsewhere, leading Chinese AI regulatory measures make reference to reporting key events (specifically the distribution of unlawful information) to the Chinese government, but none of them have specific requirements for the creation of an incident reporting database:
Open-Source AI Models
There is no mention of open-source models in China’s regulations between 2019 and 2023; open-source models are neither exempt from any aspects of the legislation, nor under any additional restrictions or responsibilities.
Cybersecurity of Frontier AI Models
China maintains a complex, detailed, and thorough set of data privacy requirements developed over the past two decades via legislation such as the PRC Cybersecurity Law, the PRC Data Security Law, and the PRC Personal Information Protection Law. Together, they constitute strong protections mandating the confidential treatment and encryption of personal data stored by Chinese corporations. Additionally, the PRC Cybersecurity Law contains data localization requirements mandating that the user data of Chinese citizens be stored on servers in mainland China, ensuring that the Chinese government has more direct means to access and control the usage of this data. All of these laws apply to data collected from users of LLMs in China.
China’s existing AI-specific regulations largely mirror the data privacy policies laid out in previous legislation, and often refer directly to such legislation for specific requirements. In particular, they extend data privacy requirements to the training data collected by Chinese organizations. However, they do not introduce any specific requirements for the cybersecurity of frontier AI models, such as properly securing model weights or codebases.
China’s Deep Synthesis Provisions include the following:
China’s Interim Generative AI Measures mention the following:
AI Discrimination Requirements
Two major pieces of Chinese legislation have made references to combating AI discrimination. Though the language around discrimination was scrapped in the first, the 2023 generative AI regulations include binding but non-specific language requiring compliance with anti-discrimination policies for AI training and inference.
AI Disclosures
China’s 2022 rules for deep synthesis, which address the online provision and use of deep fakes and similar technology, require providers to watermark and conspicuously label deep fakes. The regulation also requires the notification and consent of any individual whose biometric information is edited (e.g. whose voice or face is edited or added to audio or visual media).
The 2023 Interim Measures for the Management of Generative AI Services, which address public-facing generative AI in mainland China, require content created by generative AI to be conspicuously labeled as such and digitally watermarked. Developers must also clearly label the data they use to train AI, and disclose the users and user groups of their services.
AI and Chemical, Biological, Radiological, & Nuclear Hazards
China’s three most important AI regulations do not contain any specific provisions for CBRN hazards.