Policy Report

The 2024 EU Regulatory Landscape

Published by Convergence Analysis, this series is designed to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current state of AI regulation.

Updated: April 28, 2024

Structure of AI Regulations

The European Union (EU) has consolidated almost all of its AI governance initiatives into a single piece of legislation: the EU AI Act, formally adopted in March 2024. Initially proposed in 2021, this comprehensive legislation aims to regulate AI systems based on their potential risks and to safeguard the rights of EU citizens.

At the core of the EU AI Act is a risk-based approach to AI regulation. The act classifies AI systems into four categories: unacceptable risk, high risk, limited risk, and minimal risk. Unacceptable risk AI systems, such as those that manipulate human behavior or exploit vulnerabilities, are banned outright. High-risk AI systems, including those used in critical infrastructure, education, and employment, are subject to strict requirements and oversight. Limited risk AI systems require transparency measures, while minimal risk AI systems are largely unregulated.
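
To illustrate the tiered structure, the sketch below maps each risk category to a simplified summary of its obligations; the category names come from the Act, but the summaries are paraphrases of this section, not legal text.

# Illustrative sketch of the EU AI Act's four-tier, risk-based structure.
# Category names follow the Act; the obligation summaries are simplified
# paraphrases for orientation, not legal wording.
RISK_TIERS = {
    "unacceptable": "banned outright (e.g. behavioral manipulation, exploiting vulnerabilities)",
    "high": "strict requirements and oversight (e.g. critical infrastructure, education, employment)",
    "limited": "transparency measures (e.g. disclosing that content is AI-generated)",
    "minimal": "largely unregulated",
}

def obligations_for(tier: str) -> str:
    """Return the simplified obligation summary for a given risk tier."""
    return RISK_TIERS[tier]

print(obligations_for("high"))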

In direct response to the public emergence of foundation AI models, beginning with the launch of ChatGPT in late 2022, the Act includes clauses specifically addressing the challenges posed by general purpose AI (GPAI). GPAI systems, which can be adapted for a wide range of tasks, are subject to additional requirements, including being categorized as high-risk systems depending on their intended domain of use.

What are the key traits of the EU’s AI governance strategy?

The EU AI Act is a horizontally integrated, comprehensive piece of legislation implemented by a centralized body:

The EU AI Act classifies all AI systems used within the EU into four distinct risk levels, and assigns clear requirements to each set of AI systems. As a result, it’s the most comprehensive legal framework for AI systems today. Though it has generally been well received, it has also drawn criticism from member states for being overly restrictive and potentially stifling AI innovation within the EU.

To oversee the implementation and enforcement of the EU AI Act, the legislation establishes the European AI Office. This dedicated body is responsible for coordinating compliance, providing guidance to businesses and organizations, and enforcing the rules set out in the act. As the leading agency enforcing binding AI rules on a multinational coalition, it will shape the development and governance of AI globally, much as the GDPR led to an international restructuring of internet privacy standards.

The EU has demonstrated a clear prioritization of the protection of citizens’ rights:

The EU AI Act’s core approach to categorizing risk levels is designed primarily around measuring the ability of AI systems to infringe on the rights of EU citizens.

This can be observed in the list of use cases deemed to be high-risk, such as educational or vocational training, employment, migration & asylum, and administration of justice or democratic processes.

This is in direct contrast to China’s AI governance strategy, which is designed largely to give the government greater control over generated content and recommendations.

Most of the requirements are designed with the common citizen in mind, such as transparency and reporting requirements, the ability of any citizen to lodge a complaint with a market surveillance authority, prohibitions on social scoring systems, and non-discrimination requirements.

Few protections are included for corporations or organizations running AI systems. The fines for non-compliance are quite high, ranging from 1.5% to 7% of a firm’s global sales turnover or a fixed sum of millions of euros, whichever is greater.
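
To make the “whichever is greater” rule concrete, here is a small worked sketch in Python; the 3% rate and the EUR 15 million floor are placeholder values chosen for illustration, not figures quoted from the Act.

# Illustrative only: the Act pairs each percentage with a fixed euro floor,
# and the applicable fine is the larger of the two. The 3% rate and the
# EUR 15 million floor below are placeholder values, not quoted figures.
def fine(global_turnover_eur: float, pct: float = 0.03,
         fixed_floor_eur: float = 15_000_000) -> float:
    return max(global_turnover_eur * pct, fixed_floor_eur)

print(fine(2_000_000_000))  # 3% of EUR 2bn = EUR 60m, which exceeds the floor
print(fine(100_000_000))    # 3% of EUR 100m = EUR 3m, so the EUR 15m floor applies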

The EU AI Act implements strict and binding requirements for high-risk AI systems:

These requirements apply to the use cases deemed to be high-risk, such as educational or vocational training, employment, migration & asylum, and administration of justice or democratic processes.

Low-risk AI systems face significantly less stringent compliance requirements, but have binding transparency requirements mandating that AI systems must inform humans when sharing or distributing generated content.

AI Evaluation & Risk Assessments

The EU’s draft AI Act has mandated some safety and risk assessments for high-risk AI and, in more recent iterations, frontier AI. 

As summarized here, the act classifies models by risk, and higher risk AI has stricter requirements, including for assessment. Developers must determine the risk category of their AI, and may self-assess and self-certify their models by adopting upcoming standards or justifying their own (or be fined at least €20 million). High-risk models must undergo a third-party “conformity assessment” before they can be released to the public, which includes conforming to requirements regarding “risk management system”, “human oversight”, and “accuracy, robustness, and cybersecurity”. 

In earlier versions, general-purpose AI such as ChatGPT would not have been considered high-risk. However, since the release of ChatGPT in 2022, EU legislators have developed new provisions to account for similar general purpose models (see more on the changes here). Article 4b introduces a new category of “general-purpose AI” (GPAI) that must follow a lighter set of restrictions than high-risk AI. However, GPAI models in high-risk contexts count as high-risk, and powerful GPAI must undergo the conformity assessment described above. 
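
The decision logic sketched below paraphrases the assessment path described in this section (risk category, GPAI status, and high-risk context); the function and its parameters are hypothetical, not taken from the Act.

# Hypothetical sketch of the assessment path summarized above: high-risk
# systems (including GPAI used in high-risk contexts or deemed sufficiently
# powerful) undergo a third-party conformity assessment, other GPAI follows
# the lighter GPAI obligations, and remaining systems may be self-assessed
# against applicable standards.
def assessment_path(risk_category: str, is_gpai: bool = False,
                    gpai_in_high_risk_context: bool = False,
                    gpai_is_powerful: bool = False) -> str:
    if risk_category == "high" or (is_gpai and (gpai_in_high_risk_context or gpai_is_powerful)):
        return "third-party conformity assessment (risk management, human oversight, accuracy/robustness/cybersecurity)"
    if is_gpai:
        return "lighter GPAI obligations (Article 4b)"
    return "self-assessment / self-certification against applicable standards"

print(assessment_path("limited", is_gpai=True, gpai_is_powerful=True))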

Title VIII of the act, on post-market monitoring, information sharing, and market surveillance, includes the following:

Article 65: AI systems that present a risk at national level (as defined in Article 3(19) of Regulation (EU) 2019/1020) should undergo evaluation by the relevant market surveillance authority, with particular attention paid to AI that presents a risk to vulnerable groups. If the model isn’t compliant with the regulations, the developer must take corrective action or withdraw/recall it from the market.

Article 68j: The AI Office can conduct evaluations of GPAI models to assess compliance and to investigate systemic risks, either directly or through independent experts. The details of the evaluation will be outlined in an implementing act.

Articles 60h, 49, and 15.2 also discuss evaluations and benchmarking. Article 60h points out the lack of expertise in conformity assessment and the under-development of third-party auditing methods, suggesting that industry research (such as the development of model evaluation and red-teaming) may be useful for governance. The AI Office is therefore to coordinate with experts to establish standards and non-binding guidance on risk measurement and benchmarking.

AI Model Registries

Via the EU AI Act, the EU has opted to categorize AI systems into tiers of risk by their use cases, notably splitting permitted AI systems into high-risk and limited-risk categorizations. In particular, it requires that high-risk AI systems must be entered into an EU database for tracking.

As specified in Article 60 & Annex VIII, this database is intended to be maintained by the European Commission and should contain primarily basic information such as the contact information for representatives for said AI system. It constitutes a fairly lightweight layer of tracking, and appears intended to be used primarily as a contact directory alongside other, much more extensive regulatory requirements for high-risk AI systems.
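
As a rough illustration of how lightweight this registry layer is, a record might contain little more than identification and contact details; the field names below are hypothetical and do not reproduce the Annex VIII list.

# Hypothetical sketch of a registry entry: identification and contact
# details rather than technical documentation. Field names are
# illustrative, not taken from Annex VIII.
from dataclasses import dataclass

@dataclass
class HighRiskSystemRegistration:
    provider_name: str
    authorized_representative_contact: str
    system_trade_name: str
    intended_purpose: str
    market_status: str  # e.g. "on the market", "withdrawn"

entry = HighRiskSystemRegistration(
    provider_name="Example Provider GmbH",
    authorized_representative_contact="compliance@example.eu",
    system_trade_name="ExampleHire",
    intended_purpose="CV screening for recruitment",
    market_status="on the market",
)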

AI Incident Reporting

The EU AI Act requires that developers of both high-risk AI systems and general purpose AI (“GPAI”) systems set up internal tracking and reporting systems for “serious incidents” as part of their post-market monitoring infrastructure.

As defined in Article 3(44), a serious incident is: 

Any incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:

(a) the death of a person or serious damage to a person’s health

(b) a serious and irreversible disruption of the management and operation of critical infrastructure

(ba) breach of obligations under Union law intended to protect fundamental rights

(bb) serious damage to property or the environment.

In the event that such an incident occurs, Article 62 requires that the developer report the incident to the relevant authorities (specifically the European Data Protection Supervisor) and cooperate with them on an investigation, risk assessment, and corrective action. It specifies time limits for reporting and specific reporting obligations.
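
A minimal sketch of how a provider’s internal tracking might map incidents onto the Article 3(44) categories and decide whether an Article 62 report is due; the category labels paraphrase the list above, and the function itself is hypothetical.

# Hypothetical post-market monitoring check: an incident matching any of
# the Article 3(44) categories (paraphrased below) counts as "serious"
# and triggers the Article 62 reporting and cooperation obligations.
SERIOUS_INCIDENT_CATEGORIES = {
    "death_or_serious_harm_to_health",
    "serious_irreversible_disruption_of_critical_infrastructure",
    "breach_of_fundamental_rights_obligations",
    "serious_damage_to_property_or_environment",
}

def must_report(incident_categories: set[str]) -> bool:
    """True if the incident falls into any serious-incident category."""
    return bool(incident_categories & SERIOUS_INCIDENT_CATEGORIES)

print(must_report({"breach_of_fundamental_rights_obligations"}))  # True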

Open-Source AI Models

The EU AI Act states that open-sourcing can increase innovation and economic growth. The act therefore exempts open-source models and developers from some restrictions and responsibilities placed on other models and developers. Note though that these exemptions do not apply to foundation models (meaning generative AI like ChatGPT), or if the open-source software is monetized or is a component in high-risk software.

Section 57: Places responsibilities on providers throughout the “AI value chain”, i.e. anyone developing components or software that’s used in AI. Third parties should be exempt if their products are open-source, though it encourages open-source developers to implement documentation practices, such as model cards and data sheets.

Section 60i & i+1: Clarifies that GPAI models released under free and open-source licenses count as satisfying “high levels of transparency and openness” if their parameters are made publicly available, and a licence should be considered free and open-source when users can run, copy, distribute, study, change, and improve the software and data. This exception for open-source components does not apply if the component is monetized in any way.

Section 60f: Exempts providers of open-source GPAI models from the transparency requirements unless they present a systemic risk. This does not exempt GPAI developers from the obligation to produce a summary about training data or to enact a copyright policy.

Section 60o: Specifies that developers of GPAI models should notify the AI Office if they’re developing a GPAI model that exceeds certain thresholds (and therefore counts as posing systemic risk).

Article 2(5g): States that obligations shall not apply to AI systems released under free and open-source licenses unless they are placed on the market or put into service as high-risk AI systems.

Article 28(2b): States that providers of high-risk AI systems and third parties providing components for such systems must have a written agreement specifying what information the provider will need to comply with the act. However, third parties publishing “AI components other than GPAI models under a free and open licence” are exempt from this.

Article 52c(-2) & 52ca(5): Exempt providers of AI models under a free and open licence that publicly release the weights and information on their model from (1) the obligations in 52c(a) and 52c(b) to draw up technical documentation, and (2) the requirement in Article 52ca to appoint an authorized representative in the EU, in both cases unless the GPAI model has systemic risks.

Cybersecurity of Frontier AI Models

The EU has a comprehensive data privacy and security law that applies to all organizations operating in the EU or handling the personal data of EU citizens: the General Data Protection Regulation (GDPR). In force since 2018, it does not contain language specific to AI systems, but provides a strong base of privacy requirements for collecting user data, such as mandatory disclosures, purpose limitations, security, and rights to access one’s personal data.

The EU AI Act includes some cybersecurity requirements for organizations running “high-risk AI systems” or “general purpose AI models with systemic risk”. It generally identifies specific attack vectors that organizations should protect against, but provides little to no specificity about how an organization might protect against these attack vectors or what level of security is required.

Sections discussing cybersecurity for AI models include:

Article 15: High-risk AI systems should be resilient against attempts by third parties to exploit system vulnerabilities. Specific vulnerabilities include:

Attacks trying to manipulate the training dataset (‘data poisoning’)

Attacks on pre-trained components used in training (‘model poisoning’)

Inputs designed to cause the model to make a mistake (‘adversarial examples’ or ‘model evasion’)

Confidentiality attacks or model flaws

Article 52d: Providers of general-purpose AI models with systemic risk shall:

Conduct adversarial testing of the model to identify and mitigate systemic risk

Assess and mitigate systemic risks from the development, market introduction, or use of the model

Document and report serious cybersecurity incidents

Ensure an adequate level of cybersecurity protection

AI Discrimination Requirements

The EU AI Act directly addresses discriminatory practices according to the use cases of the AI systems considered. In particular, it classifies AI systems with the potential for discriminatory outcomes as high-risk systems and subjects them to non-discrimination requirements, including:

AI systems that could produce adverse outcomes to the health and safety of persons, and could cause discriminatory practices.

AI systems used in education or vocational training, “notably for determining access to educational…institutions or to evaluate persons on tests...as a precondition for their education”.

AI systems used in employment, “notably for recruitment…for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships”.

AI systems used to evaluate the credit score or creditworthiness of natural persons, or for allocating public assistance benefits

AI systems used in migration, asylum and border control management

In particular, AI systems that provide social scoring of natural persons (which pose a significant discriminatory risk) are deemed unacceptable systems and are banned.

AI Disclosures

Article 52 of the EU AI Act lists the transparency obligations for AI developers. These largely relate to AI systems “intended to directly interact with natural persons”, where natural persons are individual people (excluding legal persons, which can include businesses). For concision, we will just call these “public-facing” AIs. Notably, the following requirements have exemptions for AI used to detect, prevent, investigate, or prosecute crimes (assuming other laws and rights are observed).

Article 52.1: Requires developers to ensure users of public-facing AI are informed or obviously aware that they are interacting with an AI.

Article 52.1a: Requires AI-generated content to be watermarked (with an exemption for AI assisting in standard editing or which doesn’t substantially alter input data).

Article 52.2: Requires developers of AI that recognizes emotions or categorizes biometric data (e.g. distinguishing children from adults in video footage) to inform the people being processed.

Article 52.3: Requires deep fakes to be labeled as AI-generated (with a partial exemption for use in art, satire, etc, in which case developers can disclose the existence of the deep fake less intrusively). AI-generated text designed to inform on matters of public interest must disclose that it’s AI-generated, unless the text undergoes human review, and someone takes editorial responsibility.

Article 52b: Requires developers of general purpose AI with systemic risk to notify the EU Commission within 2 weeks of meeting any of the following requirements defined in article 52a.1:

Possessing “high impact capabilities”, as evaluated by appropriate technical tools.

By decision of the Commission, if they believe a general purpose AI has capabilities or impact equivalent to “high impact capabilities”.

Article 52c: Requires providers of GPAI to publish a summary of the content used for training the model, and 60f and 60k require developers to disclose any copyrighted material in their training data in their summary.

AI and Chemical, Biological, Radiological, & Nuclear Hazards

The EU AI Act does not contain any specific provisions for CBRN hazards, though Section 60m, on the category of “general purpose AI that could pose systemic risks”, includes the following mention of CBRN: “international approaches have so far identified the need to devote attention to risks from [...] chemical, biological, radiological, and nuclear risks, such as the ways in which barriers to entry can be lowered, including for weapons development, design acquisition, or use”.