Policy Report

2024 State of the AI Regulatory Landscape

Published by Convergence Analysis, this series is designed to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current state of AI regulation.

AI Discrimination Requirements

Deric Cheng

Governance Research Lead

Last updated Apr 04, 2024

Author's Note

This report is one in a series of ~10 posts comprising a State of the AI Regulatory Landscape in 2024 Review, conducted by the Governance Research Program at Convergence Analysis. Each post will cover a specific domain of AI governance. We’ll provide an overview of existing regulations, focusing on the US, EU, and China as the leading governmental bodies currently developing AI legislation. Additionally, we’ll discuss the relevant context behind each domain and conduct a short analysis.

This series is intended to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current AI governance space. We’ll be releasing a comprehensive report at the end of this series.

What are discrimination requirements for AI? Why do they matter?

Discrimination requirements for AI are rules and guidelines aimed at preventing AI systems from perpetuating or amplifying societal biases and unfairly disadvantaging certain groups of people based on protected characteristics like race, gender, age, religion, disability status, or sexual orientation. As AI increasingly powers high-stakes decision-making in areas like hiring, lending, healthcare, criminal justice, and public benefits, these systems are likely to adversely impact certain subsets of the population unless algorithmic bias is actively managed.

For example, an algorithm designed to identify strong resumes for a job application is likely to learn correlations between the sex of a candidate and the quality of their resume, reflecting existing societal biases (and therefore perpetuating them). As a result, certain classes of individuals may be adversely impacted by an algorithm that has absorbed inherently discriminatory word associations.
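To make this concrete, here is a minimal, hypothetical sketch (all resumes, labels, and phrases below are invented for illustration; no real screening system is implied) of how a text classifier trained on biased historical hiring decisions can learn to penalize a gendered phrase:

```python
# Hypothetical sketch: how a resume screener can absorb historical gender bias.
# All data, labels, and phrases are synthetic and for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy "historical" hiring data in which past decisions happened to disfavor
# resumes containing women-coded phrases, independent of actual skills.
resumes = [
    "captain of chess club, python developer, led team",
    "python developer, led team, volunteer tutor",
    "women's chess club captain, python developer, led team",
    "python developer, women's coding society, led team",
]
hired = [1, 1, 0, 0]  # biased historical labels (hypothetical)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Two resumes with identical skills, differing only in one gendered phrase:
candidates = [
    "python developer, led team, chess club",
    "python developer, led team, women's chess club",
]
scores = model.predict_proba(vectorizer.transform(candidates))[:, 1]
print(scores)  # the second resume scores lower, purely because of the token "women"
```

With this toy data, two otherwise identical resumes receive different scores solely because one mentions a women's organization; real systems absorb the same kind of association from far subtler signals.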

Other examples of algorithmic discrimination include:

Biases in the type of online ads presented to website users

Biases in the error rates of facial recognition technology by race and gender

Biases in algorithms designed to predict risk in criminal justice

The usage of discriminatory factors such as sex, ethnicity, or age has been expressly prohibited by longstanding anti-discrimination legislation around the globe, such as Title VII of the US Civil Rights Act of 1964, the UN ILO's Convention 111, or Article 21 of the EU Charter of Fundamental Rights. As enforced by most developed countries, such legislation typically protects individuals within a jurisdiction from employment or occupational discrimination based on these factors.

To expand these legislative precedents to the rapidly developing domain of algorithmic and AI discrimination, a new crop of anti-discrimination legislation is being passed by leading governmental bodies. This new wave of legislation focuses on regulating the behavior of the algorithms underlying certain protected use cases, such as resume screening, creditworthiness evaluations, or public benefit allocations. 

As the momentum grows to address AI bias, governments are starting to pass laws and release guidance aimed at preventing automated discrimination. But this is still an emerging area where much more work is needed to translate principles into practice. Active areas of research and policy development include both technical and non-technical measures such as:

De-biasing dataset frameworks: Dataset managers can carefully curate more balanced and representative training data by adjusting the weight given to specific data points to correct for known imbalances, or by using automated testing methods to identify and correct for dataset biases. For instance, a revised dataset allowed Microsoft to reduce facial recognition error rates for people with darker skin tones by up to 20-fold. (A minimal sketch of reweighting and per-group error reporting follows this list.)

Algorithmic & dataset transparency: Organizations can implement public processes for measuring and reporting bias. For example, Google has introduced a Model Card reporting system that explains the data and algorithm employed, details performance evaluations, and discloses intended use cases. Such transparency encourages public review and accountability.

Third-party evaluations: A standardized system of review for AI algorithms would force organizations to adhere to comprehensive requirements for reducing discrimination. Various high-level solutions have been proposed by major organizations like the OECD and the European Convention on Human Rights, but no industry standards for measuring bias have been agreed upon.
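As a rough illustration of the first two measures above, the hypothetical sketch below (the dataset, group labels, and equal-weighting scheme are assumptions made for illustration) reweights an imbalanced training set so each group contributes equally, then measures and reports error rates per group, in the spirit of a model card disclosure:

```python
# Hypothetical sketch of two measures from the list above: (1) reweighting an
# imbalanced training set so each group contributes equally, and (2) reporting
# per-group error rates for transparency. All data and group labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic dataset: 900 examples from group "A", 100 from group "B".
n_a, n_b = 900, 100
X = rng.normal(size=(n_a + n_b, 5))
group = np.array(["A"] * n_a + ["B"] * n_b)
y = (X[:, 0] + rng.normal(scale=0.5, size=n_a + n_b) > 0).astype(int)

# (1) Reweighting: give each group the same total weight so the minority
# group is not drowned out during training.
counts = {g: int(np.sum(group == g)) for g in np.unique(group)}
weights = np.array([1.0 / counts[g] for g in group])

model = LogisticRegression().fit(X, y, sample_weight=weights)

# (2) Transparency: measure and report error rates separately per group,
# as a model card might disclose.
pred = model.predict(X)
for g in np.unique(group):
    mask = group == g
    error = float(np.mean(pred[mask] != y[mask]))
    print(f"group {g}: n={int(mask.sum())}, error rate={error:.3f}")
```

Equal total weight per group is only one of many possible reweighting choices; the right scheme depends on the task, and, as discussed in the analysis below, rebalancing can trade off overall accuracy.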

What are current regulatory policies around discrimination requirements for AI?

China

Two major pieces of Chinese legislation have made references to combating AI discrimination. Though the language around discrimination was scrapped in the first, the 2023 generative AI regulations include binding but non-specific language requiring compliance with anti-discrimination policies for AI training and inference.

Algorithmic Recommendation Provisions, Article 10: The initial interim draft of this legislation prohibited the use of “discriminatory or biased user tags” in algorithmic recommendation systems. However, this language was removed in the final version effective in March 2022.

Generative AI Measures, Article 4.2: This draft calls for the following: “During processes such as algorithm design, the selection of training data, model generation and optimization, and the provision of services, effective measures are to be employed to prevent the creation of discrimination such as by race, ethnicity, faith, nationality, region, sex, age, profession, or health”.

The EU

The EU AI Act directly addresses discriminatory practices, classifying AI systems according to their use cases. In particular, it designates AI systems with the potential for discriminatory outcomes as high-risk systems and bars them from discrimination. These include:

AI systems that could produce adverse outcomes for the health and safety of persons and could cause discriminatory practices.

AI systems used in education or vocational training, “notably for determining access to educational…institutions or to evaluate persons on tests...as a precondition for their education”.

AI systems used in employment, “notably for recruitment…for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships”.

AI systems used to evaluate the credit score or creditworthiness of natural persons, or for allocating public assistance benefits

AI systems used in migration, asylum and border control management

In particular, AI systems that provide social scoring of natural persons (which pose a significant discriminatory risk) are deemed unacceptable systems and are banned.

The US

The US government is actively addressing AI discrimination via two primary initiatives by the executive branch. However, both of these initiatives are non-binding and non-specific in nature: in particular, the Executive Order directs several agencies to publish guidelines, but doesn’t identify any specific requirements or enforcement mechanisms.

1. The AI Bill of Rights contains an entire section on Algorithmic Discrimination Protections. In particular, it emphasizes that consumers should be protected from discrimination based on their “race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.” Though this bill is non-binding, it sets a general principle for enforcement by the US executive branch through more specific regulations.

2. The Executive Order on AI directs various executive agencies to publish reports or guidance on preventing discrimination within their respective domains within 90 to 180 days of its publication. These include the following directly responsible parties:

a. Section 7.1: “The Attorney General of the Criminal Justice System, and the Assistant Attorney General in charge of the Civil Rights Division will publish guidance preventing discrimination in automated systems.”

b. Section 7.2.b.i: “The Secretary of HHS (The Department of Health and Human Services) will publish guidance regarding non-discrimination in allocating public benefits.”

c. Section 7.2.b.ii: “The Secretary of Agriculture will publish guidance regarding non-discrimination in allocating public benefits.”

d. Section 7.3: “The Secretary of Labor will publish guidance regarding non-discrimination in hiring involving AI.”

Convergence’s Analysis

The effectiveness of de-biasing techniques is highly variable, and depends heavily on the quality of the data.

Unfair datasets are the root cause of algorithmic bias. However, it can be extraordinarily difficult to acquire more equitable data. Rebalancing datasets to mitigate bias will typically lead to lower overall performance, as rebalancing techniques may discard or deprioritize data to optimize for unbiased results.

Many underlying sources of bias can be difficult to mitigate. An Amazon study found that even after removing direct causes of gender bias from a hiring algorithm (such as making the algorithm neutral to phrases like “women's chess club captain”), the algorithm still found implicit male associations with phrases such as "executed" and "captured" on resumes.

Given access to the underlying algorithm, it is substantially easier to prove discriminatory bias in an algorithmic system than in a human-driven one.

Proving discrimination in hiring practices against a corporation typically requires a high bar of evidence.

According to the McDonnell Douglas framework for employment discrimination in the US, the accuser must prove that the employer’s stated reason for firing or reducing employment was a pretext for discrimination, often requiring a direct comparison to a comparable party within the same organization who did not face the same treatment.

Meanwhile, algorithmic discrimination cases would likely produce demonstrable evidence primarily via access to the algorithm’s API and a multivariate analysis by a statistician. Such cases may be able to sidestep studies involving human participants (with their ethical challenges and long timescales), complicated judicial processes, and the influence of random chance.
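For illustration, the hypothetical sketch below (the `score_applicant` function, applicant fields, and thresholds are invented stand-ins, not any real system or legal test) shows the kind of audit such evidence could rest on: query the model over many applicants, compare selection rates across groups, and compute a ratio analogous to the “four-fifths rule” used in US employment discrimination analysis.

```python
# Hypothetical sketch: auditing a black-box scoring model for disparate impact.
# `score_applicant` stands in for an API call to the system under review; the
# applicant data and cutoffs are assumptions used purely for illustration.
import random

def score_applicant(applicant: dict) -> float:
    """Stand-in for the black-box model being audited.

    This toy version leaks a small penalty tied to a protected attribute,
    purely so the audit below has something to detect.
    """
    base = 0.5 + 0.1 * applicant["years_experience"]
    penalty = 0.15 if applicant["group"] == "B" else 0.0
    return base - penalty

random.seed(0)
applicants = [
    {"group": random.choice(["A", "B"]), "years_experience": random.randint(0, 5)}
    for _ in range(2000)
]

cutoff = 0.75  # decision threshold used by the (hypothetical) system
selected = {"A": 0, "B": 0}
totals = {"A": 0, "B": 0}
for a in applicants:
    totals[a["group"]] += 1
    if score_applicant(a) >= cutoff:
        selected[a["group"]] += 1

rate_a = selected["A"] / totals["A"]
rate_b = selected["B"] / totals["B"]
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"selection rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={impact_ratio:.2f}")
# A ratio well below 0.8 is commonly treated as a signal of adverse impact.
```

A full analysis would control for legitimate covariates with a multivariate model, but even this simple comparison is far cheaper to produce than the evidence typically available in human-driven hiring disputes.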

There are no established required practices or judicial precedents to evaluate the level of discriminatory bias across AI algorithms.

Nearly all examples of bias discovered in AI algorithms have been identified by the efforts of independent teams of researchers unaffiliated with governmental legal or judicial systems. Because AI discrimination is only beginning to be legislated, there are few court cases and even fewer judicial rulings on how to prove algorithmic bias.

As a result, it’s currently very unclear to developers where the legal boundaries lie between discrimination and predictive learning. An example: will resume evaluation algorithms need to scrub potentially gendered phrases from their training datasets, such as participation in organizations like “Girls Who Code”, to ensure neutrality? What about subtly biasing phrases, such as “NAACP” or “beauty pageant”?

It is likely that the required practices to evaluate discriminatory bias will be established in the judicial system.

Judicial frameworks have typically been established over time via landmark or precedent-setting discrimination cases. For example, the McDonnell Douglas Burden-Shifting Framework and the Mixed Motive Framework are two separate judicial approaches to establish workplace discrimination. These developed independently to handle different forms of discrimination lawsuits.

We expect that in the next five years, we’ll begin to see class-action lawsuits against corporations running high-risk systems (as defined by the EU) that may be discriminatory. Accordingly, we expect to see one or more standardized frameworks for evaluating biased algorithms emerge from a US court.