policy report

2024 State of the AI Regulatory Landscape

Published by Convergence Analysis, this series is designed to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current state of AI regulation.

AI and Chemical, Biological, Radiological, & Nuclear Hazards

Elliot McKernon

Writer-Researcher

Last updated May 10, 2024

Author's Note

This report is one in a series of ~10 posts comprising a State of the AI Regulatory Landscape in 2024 Review, conducted by the Governance Research Program at Convergence Analysis. Each post will cover a specific domain of AI governance. We’ll provide an overview of existing regulations, focusing on the US, EU, and China as the leading governmental bodies currently developing AI legislation. Additionally, we’ll discuss the relevant context behind each domain and conduct a short analysis.

This series is intended to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current AI governance space. We’ll be releasing a comprehensive report at the end of this series.

What are CBRN hazards? How could they be affected by AI models?

Humanity has developed technologies capable of mass destruction, and we need to be especially cautious about how AI interacts with them. These technologies and their associated risks commonly fall into four main categories, collectively known as CBRN:

Chemical hazards: Toxic chemical substances that can cause significant harm to people or the environment, such as chemical warfare agents or toxic industrial chemicals.

Biological hazards: Toxins and infectious agents like bacteria, viruses, and other pathogens that can cause disease in humans, animals or plants.

Radiological hazards: Radioactive materials that emit ionizing radiation which can harm human health, such as waste from nuclear power stations.

Nuclear hazards: Materials related to nuclear fission or fusion that can release tremendous destructive energy, such as nuclear weapons and nuclear power plant accidents.

In this section, we’ll briefly outline current and emerging examples of each of these hazard types in the context of AI technologies.

What are potential chemical hazards arising from the increase in AI capabilities?

A prominent concern among experts is the potential for AI to lower the barrier of entry for non-experts to cause CBRN harms. That is, AI could make it easier for malicious or naive actors to build dangerous weapons, such as chemical agents with deadly properties.

For example, pharmaceutical researchers use machine learning models to identify new therapeutic drugs. In one such study, a deep learning model was trained on ~2,500 molecules labelled with their antibiotic activity. When shown chemicals outside that training set, the model could predict whether they would function as antibiotics.
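
To make the mechanics concrete, here is a minimal, illustrative sketch of how such a property predictor can be built. This is not the cited study’s model: the feature vectors are random stand-ins for the molecular fingerprints a real pipeline would compute with a chemistry toolkit, and the labels are synthetic.

```python
# Illustrative sketch only: a molecular property predictor is ordinary supervised
# learning over per-molecule feature vectors. Real pipelines featurize molecules
# (e.g. with fingerprints); here we use random stand-in features and labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((2500, 2048))       # stand-in for ~2,500 molecules' feature vectors
y = rng.integers(0, 2, size=2500)  # stand-in labels: 1 = shows antibiotic activity

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# For molecules outside the training set, the model outputs a predicted probability
# of activity; screening simply ranks candidate molecules by that score.
scores = model.predict_proba(X_test)[:, 1]
print("Highest-scoring candidates:", np.argsort(scores)[::-1][:5])
```

The dual-use concern discussed next follows directly from this structure: the same learned scores used to surface promising candidates can just as easily be sorted in the opposite direction.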

However, training a model to generate novel, safe, and harmless medications is very close to, if not equivalent to, training a model to generate chemical weapons. This is an example of the Waluigi Effect; the underlying model is simply learning to predict toxicity, and that prediction can be used either to rule out harmful chemicals or to generate a list of them, ranked by deadliness. This was demonstrated by the Swiss Federal Institute for Nuclear, Biological, and Chemical Protection (see here for a non-paywalled summary): when researchers instructed the same model to generate harmful molecules, it produced a list of 40,000 such molecules in under 6 hours. These included deadly nerve agents such as VX, as well as previously undiscovered molecules that it ranked as more deadly than VX. To quote the researchers:

Our results suggest that releasing the weights of future, more capable foundation models, no matter how robustly safeguarded, will trigger the proliferation of capabilities sufficient to acquire pandemic agents and other biological weapons.

As AI models become more deeply integrated into the development of chemicals for industrial and medical purposes, it will become increasingly easy for malicious parties to repurpose these models for dangerous ends.

What are biological hazards arising from the increase in AI capabilities?

Recent papers have shown that large language models (LLMs) may lower barriers to biological misuse by enabling the weaponization of biological agents. In particular, risk may arise from the increasing application of LLMs as biological design tools (BDTs), such as multimodal lab assistants and autonomous science tools. These BDTs make it easier and faster to conduct laboratory work, supporting the work of non-experts and expanding the capabilities of sophisticated actors. Such capabilities may yield “pandemic pathogens substantially more devastating than anything seen to date and could enable forms of more predictable and targeted biological weapons”. Further, the risks posed by LLMs and by custom AI trained for biological research can exacerbate each other: each increases the amount of harm an individual can do, while making such tools accessible to a larger pool of individuals.

It’s important to note that these risks remain unlikely with today’s cutting-edge LLMs, though this may not hold true for much longer. Two recent studies from RAND and OpenAI found that current LLMs do not provide a meaningful uplift over standard internet searches when it comes to biological and chemical weapons.

Another leading biological hazard of concern is synthetic biology – the genetic modification of individual cells or organisms, as well as the manufacture of synthetic DNA or RNA strands called synthetic nucleic acids.

This field poses a particularly urgent risk because existing infrastructure could, in theory, be used by malicious actors to produce an extremely deadly pathogen. For example, researchers can already order custom DNA or RNA to be synthesized and mailed to them, a crucial step towards turning a theoretical pandemic-level design into an infectious reality. Mandatory screening of ordered material, to ensure it cannot be used to cause harm, is urgently needed.
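
To illustrate what order screening involves at its simplest, here is a toy sketch. The flagged fragments and the screen_order helper are hypothetical, and real biosecurity screening relies on curated databases of sequences of concern and homology search rather than exact substring matching.

```python
# Toy illustration of nucleic acid order screening. Real screening systems use
# curated sequence-of-concern databases and fuzzy homology search (BLAST-like),
# not exact substring matching against a hard-coded set.
FLAGGED_FRAGMENTS = {  # hypothetical "sequences of concern"
    "ATGGCGTTTACCGGA",
    "TTGACCCATGGAAGT",
}

def screen_order(sequence: str, window: int = 15) -> bool:
    """Return True if any window of the ordered sequence matches a flagged fragment."""
    sequence = sequence.upper()
    return any(
        sequence[i:i + window] in FLAGGED_FRAGMENTS
        for i in range(len(sequence) - window + 1)
    )

order = "CCGTTGACCCATGGAAGTTTAA"  # hypothetical customer order
if screen_order(order):
    print("Order flagged for manual biosecurity review")
else:
    print("Order cleared")
```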

Some researchers are developing tools specifically to measure and reduce the capacity of AI models to lower barriers of entry for CBRN weapons and hazards, with a particular focus on biohazards. For example, OpenAI is developing “an early warning system for LLM-aided biological threat creation”, and a recent collaboration between several leading research organizations produced a practical policy proposal titled Towards Responsible Governance of Biological Design Tools. The Centre for AI Safety has also released the “Weapons of Mass Destruction Proxy”, which measures how particular LLMs can lower the barrier of entry for CBRN hazards more broadly. Tools and proposals such as these, developed with expert knowledge of CBRN hazards and AI engineering, are likely to be a crucial complement to legislative efforts. 
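
To give a sense of how such measurement tools work in practice, the sketch below scores a model on a small multiple-choice proxy benchmark. The placeholder questions, the ask_model callable, and the scoring loop are hypothetical stand-ins, not the actual WMDP harness.

```python
# Illustrative sketch of scoring a model on a multiple-choice proxy benchmark.
# `ask_model` is a hypothetical stand-in for whichever API or local model is
# being evaluated; the questions below are placeholders, not real benchmark items.
from typing import Callable, List

QUESTIONS = [
    {"prompt": "Placeholder hazardous-knowledge question 1?", "choices": ["A", "B", "C", "D"], "answer": "B"},
    {"prompt": "Placeholder hazardous-knowledge question 2?", "choices": ["A", "B", "C", "D"], "answer": "D"},
]

def evaluate(ask_model: Callable[[str, List[str]], str]) -> float:
    """Return the fraction of questions the model answers correctly."""
    correct = sum(ask_model(q["prompt"], q["choices"]) == q["answer"] for q in QUESTIONS)
    return correct / len(QUESTIONS)

# Trivial baseline "model" that always answers "A"; a real evaluation would call an LLM here.
print(f"Accuracy: {evaluate(lambda prompt, choices: 'A'):.0%}")
```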

For more context on these potential pandemic-level biological hazards, you can read:

The White House Office of Science and Technology Policy’s Framework for Nucleic Acid Synthesis Screening, published in April 2024 as directed by the Executive Order (an update to the ASPR’s 2023 framework).

What are radiological and nuclear hazards arising from the increase in AI capabilities?

A prominent and existential concern among many AI safety researchers is the risk of integrating AI technologies into the chain-of-command of nuclear weapons or the control systems of nuclear power plants. As one example, it’s been proposed that AI could be used to monitor and maintain the operation of nuclear power plants.

Elsewhere, The Atlantic cites the Soviet Union’s Dead Hand as evidence that militaries could be tempted to use AI in the nuclear chain-of-command. Dead Hand is a system developed in 1985 that, if activated, would automatically launch a nuclear strike against the US if a command-and-control center stopped receiving communications from the Kremlin and detected radiation in Moscow’s atmosphere (a system which may still be operational).

Because the reasoning of AI models is still poorly understood and their decision-making can be unpredictable, such integration could lead to unexpected and dangerous failure modes, which for nuclear technologies have catastrophic worst-case outcomes. As a result, many researchers argue that the risk of loss of control means AI should not be permitted anywhere near nuclear technologies, such as decision-making regarding nuclear launch codes or the storage and maintenance of nuclear weapons.

In proposed legislation, some policymakers have pushed for banning AI in nuclear arms development, such as a proposed pact from a UK MP and Senator Mitt Romney’s recent letter to the Senate AI working group. Romney’s letter proposes a framework to mitigate extreme risks by requiring powerful AIs to be licensed if they’re intended for chemical/bio-engineering or nuclear development. However, nothing binding has been passed into law. There have also been reports that the US and China are having discussions on limiting the use of AI in areas including nuclear weapons.

Current regulatory landscape

The US

The Executive Order on AI addresses CBRN hazards in several sections: various department secretaries are required to produce plans, reports, and proposals analyzing CBRN risks and how to mitigate them, and Section 4.4 focuses specifically on biological weapon risks and how to reduce them in the short term. In full:

Section 3(k): The term “dual-use foundation model” is defined as AI that, among other criteria, exhibits or could be modified to exhibit high performance at tasks that pose serious risks, such as substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use CBRN weapons.

4.1(b): The Secretary of Energy must coordinate with Sector Risk Management Agencies to develop and implement a plan for AI model evaluation tools and testbeds. At a minimum, these tools must be able to evaluate AI capabilities to generate outputs that may represent nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy-security threats or hazards, and the Secretary must develop model guardrails that reduce such risks.

4.2(a)(i)(C): The Secretary of Commerce must require companies developing dual-use foundation models to provide continuous information and reports on the results of any red-team testing related to lowering the barrier to entry for the development, acquisition, and use of biological weapons by non-state actors.

4.2(b)(i): Any model that primarily uses biological sequence data and that was trained using at least 10²³ FLOPs must comply with 4.2(a) until proper technical conditions are developed. (A rough illustration of this compute threshold is sketched below.)
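
As a back-of-the-envelope illustration of the 4.2(b)(i) compute threshold, the sketch below uses the common ≈6 × parameters × training-tokens approximation for dense-model training compute. Both that approximation and the example model size are our assumptions for illustration; the Executive Order itself does not specify how training compute should be estimated.

```python
# Rough check against the 10^23-operation threshold in Section 4.2(b)(i).
# Assumes the common ~6 * parameters * training-tokens estimate for dense-model
# training compute; the Executive Order does not prescribe an estimation method.
THRESHOLD_OPS = 1e23

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate for a dense model (~6 * N * D)."""
    return 6 * n_params * n_tokens

# Hypothetical biological-sequence model: 3B parameters trained on 10T tokens.
flops = training_flops(3e9, 1e13)
print(f"Estimated training compute: {flops:.2e} operations")
print("Covered by 4.2(b)(i)" if flops >= THRESHOLD_OPS else "Below the 4.2(b)(i) threshold")
```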

The following points are all part of 4.4, which is devoted to Reducing Risks at the Intersection of AI and CBRN Threats, with a particular focus on biological weapons:

4.4(a)(i): The Secretary of Homeland Security must evaluate the potential for AI to be misused to enable the development or production of CBRN threats, while also considering the benefits and application of AI to counter these threats.

(A) This will be done in consultation with experts in AI and CBRN issues from the DoE, private AI labs, academia, and third-party model evaluators, for the sole purpose of guarding against CBRN threats.

(B) The Secretary of Homeland Security will submit a report to the President describing progress, including an assessment of the types of AI models that may present CBRN risks to the United States, and recommendations for regulating their training and use, including requirements for safety evaluations and guardrails for mitigating potential threats to national security.

4.4(a)(ii): The Secretary of Defense must enter into a contract with the National Academies of Sciences, Engineering, and Medicine to conduct and submit a study that:

(A) assesses how AI can increase biosecurity risks, and makes recommendations on mitigating such risks;

(B) considers the national security implications of the use of data associated with pathogens and omics¹ studies that the government funds or owns for the training of generative AI, and makes recommendations on mitigating such risks;

(C) assesses how AI can be used to reduce biosecurity risks;

(D) considers additional concerns and opportunities at the intersection of AI and synthetic biology.

4.4(b): To reduce the risk of misuse of synthetic nucleic acids²:

(i) The director of OSTP, in consultation with several secretaries, shall establish a framework to encourage providers of synthetic nucleic acid sequences to implement comprehensive, scalable, and verifiable synthetic nucleic acid procurement screening mechanisms. As part of this framework, the director shall:

(A) establish criteria for ongoing identification of biological sequences that could pose a risk to national security; and

(B) determine standard methodologies for conducting & verifying the performance of sequence synthesis procurement screening, including customer screening approaches to support due diligence with respect to managing security risks posed by purchasers of biological sequences identified in (A) and processes for the reporting of concerning activity.

(ii) The Secretary of Commerce, acting through NIST and in coordination with others, shall initiate an effort to engage with industry and relevant stakeholders, informed by the framework of 4.4(b)(i), to develop and refine:

(A) Specifications for effective nucleic acid synthesis procurement screening;

(B) Best practices, including security and access controls, for managing sequence-of-concern databases to support screening

(C) technical implementation guides for effective screening; and

(D) conformity-assessment best practices and mechanisms.

(iii) All agencies that fund life-sciences research shall establish as a requirement of funding that synthetic nucleic acid procurement is conducted through providers or manufacturers that adhere to the framework of 4.4(b)(i). The Assistant to the President for National Security Affairs and Director of OSTP shall coordinate the process of reviewing such funding requirements to facilitate consistency in implementation.

(iv) To facilitate effective implementation of the measures of 4.4(b)(i)-(iii), the Secretary of Homeland Security shall, in consultation with others:

(A) Develop a framework to conduct structured evaluation and stress testing of nucleic acid synthesis procurement screening [...];

(B) Submit an annual report [...] on any results of the activities conducted pursuant to 4.4(b)(iv)(A), including recommendations on how to strengthen procurement screening.

China

China’s three most important AI regulations do not contain any specific provisions for CBRN hazards.

The EU

The EU AI Act does not contain any specific provisions for CBRN hazards, though recital (60m), on the category of “general purpose AI that could pose systemic risks”, includes the following mention of CBRN: “international approaches have so far identified the need to devote attention to risks from [...] chemical, biological, radiological, and nuclear risks, such as the ways in which barriers to entry can be lowered, including for weapons development, design acquisition, or use”.

Convergence’s Analysis

Mitigating catastrophic risks from AI-enabled CBRN hazards should be a top global priority.

CBRN hazards present arguably the shortest and most immediate path for AI to lead to catastrophic harm.

AI is demonstrably already capable of lowering the barrier to entry for generating biological and chemical weapons, and this effect is likely to become more dramatic in the near future.

When paired with the existing and under-regulated infrastructure for biology labs generating custom genetic code on demand, this could plausibly lead to the accidental or deliberate release of an unprecedented pandemic pathogen within the next decade.

Despite this, current legislation on AI and CBRN hazards is wholly insufficient given the scale of potential risks.

The EU and China currently have no specific binding requirements regarding the development of AI models capable of enabling the development of CBRN weapons.

The current US legislation initiates important studies and reports on the intersection of AI and CBRN weapons, particularly focusing on biosecurity risks. However, these are largely non-binding and exploratory. More concrete regulation, such as mandatory safety and security requirements for dual-use models, is needed.

Effective regulation of CBRN and AI will require close collaboration between AI experts, domain experts, and policymakers.

The development of legislation regarding CBRN weapons requires an unusually high level of specialized technical expertise, and so regulators will need to work closely with leading researchers in the fields of AI, biology, chemistry, and cybersecurity to identify and mitigate key risks.

It is difficult, if not impossible, to develop effective model evaluations without substantial input from both AI experts and domain experts. Long-term, close collaboration between these parties is a critical aspect of identifying key CBRN risks.

Several teams of researchers have been developing tools and proposals tailored to CBRN-related AI risk (though these have not yet been adopted by any legislatures), such as:

The Centre for AI Safety’s Weapons of Mass Destruction Proxy;

Towards Responsible Governance of Biological Design Tools, a collaboration between leading AI, governance, and risk research organizations.

AI governance in other high-risk domains like cybersecurity and the military has major implications for CBRN risks.

Multiple militaries around the world possess stockpiles of chemical, biological, and nuclear weapons, and nuclear power plants and biocontainment facilities can also present CBRN hazards. If advanced AI is trained for cybersecurity attacks, these stockpiles and other hazardous systems could be targeted with devastating outcomes.

The increasing adoption of AI by militaries - such as the first confirmed deployment of fully autonomous military drones and the several hundred US military AI projects disclosed by the Pentagon - leads many to fear that AI will become increasingly involved in the decision-making and chain-of-command of CBRN weapons. The involvement of AI here will require exceptional value alignment, as even slight misalignment in goals and values between human and AI operators could lead to catastrophic harm.
