policy report

2024 State of the AI Regulatory Landscape

Published by Convergence Analysis, this series is designed to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current state of AI regulation.

AI Discrimination Requirements

Deric Cheng

Governance Research Lead

Last updated Apr 04, 2024

Author's Note

This report is one in a series of ~10 posts comprising a State of the AI Regulatory Landscape in 2024 Review, conducted by the Governance Research Program at Convergence Analysis. Each post will cover a specific domain of AI governance. We’ll provide an overview of existing regulations, focusing on the US, EU, and China as the leading governmental bodies currently developing AI legislation. Additionally, we’ll discuss the relevant context behind each domain and conduct a short analysis.

This series is intended to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current AI governance space. We’ll be releasing a comprehensive report at the end of this series.

What are discrimination requirements for AI? Why do they matter?

Discrimination requirements for AI are rules and guidelines aimed at preventing AI systems from perpetuating or amplifying societal biases and unfairly disadvantaging groups of people based on protected characteristics like race, gender, age, religion, disability status, or sexual orientation. As AI increasingly powers high-stakes decision-making in areas like hiring, lending, healthcare, criminal justice, and public benefits, these systems are likely to adversely impact certain subsets of the population unless algorithmic bias is actively managed.

For example, an algorithm designed to identify strong resumes for a job opening is likely to learn correlations between the sex of a candidate and the quality of their resume, reflecting existing societal biases (and therefore perpetuating them). As a result, certain classes of individuals may be adversely impacted by an algorithm that encodes inherently discriminatory word associations.
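Adverse impact of this kind is often quantified by comparing selection rates across groups, for instance via the "four-fifths rule" used in US employment law. Below is a minimal sketch on entirely synthetic decision data; it is an illustration of the general technique, not the method of any regulation discussed in this report:

```python
# Sketch: measuring adverse impact of a resume-screening model's decisions
# using the "four-fifths rule": the selection rate for a protected group
# should be at least 80% of the rate for the most-favored group.
# All data below is synthetic and purely illustrative.

def selection_rate(decisions):
    """Fraction of candidates selected (decision == 1)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# 1 = resume advanced, 0 = rejected, split by a protected attribute
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # selection rate 0.8
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]  # selection rate 0.4

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.4 / 0.8 = 0.50
if ratio < 0.8:
    print("Potential adverse impact under the four-fifths rule")
```

Checks like this are deliberately simple: they look only at outcomes by group, which is why they can flag discrimination even when the protected attribute never appears as an explicit input feature.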

Other examples of algorithmic discrimination include:

Biases in the type of online ads presented to website users

Biases in the error rates of facial recognition technology by race and gender

Biases in algorithms designed to predict risk in criminal justice

The use of discriminatory factors such as sex, ethnicity, or age has long been expressly prohibited by anti-discrimination legislation around the globe, such as Title VII of the US Civil Rights Act of 1964, the UN's ILO Convention 111, and Article 21 of the EU Charter of Fundamental Rights. As enforced by most developed countries, such legislation typically protects citizens from employment or occupational discrimination based on these factors.

To expand these legislative precedents to the rapidly developing domain of algorithmic and AI discrimination, a new crop of anti-discrimination legislation is being passed by leading governmental bodies. This new wave of legislation focuses on regulating the behavior of the algorithms underlying certain protected use cases, such as resume screening, creditworthiness evaluations, or public benefit allocations. 

As the momentum grows to address AI bias, governments are starting to pass laws and release guidance aimed at preventing automated discrimination. But this is still an emerging area where much more work is needed to translate principles into practice. Active areas of research and policy development include both technical and non-technical measures such as:

De-biasing dataset frameworks: Dataset managers can curate more balanced and representative training data, adjusting the weight of specific data points to correct for known imbalances or using automated testing methods to identify and correct dataset biases. For instance, a revised dataset allowed Microsoft to reduce its facial recognition error rates for men and women with darker skin tones by up to 20-fold.
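The reweighting idea above, adjusting the significance of data points to correct a known imbalance, can be sketched with inverse-frequency sample weights. This is a generic illustrative technique, not Microsoft's actual method; the group labels and counts are made up:

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Assign each example a weight inversely proportional to the
    frequency of its group, so under-represented groups contribute
    as much to training as over-represented ones."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # weight = n / (k * count): weights average to 1 across the dataset
    return [n / (k * counts[g]) for g in groups]

# Illustrative imbalanced dataset: 8 examples from group "A", 2 from "B"
groups = ["A"] * 8 + ["B"] * 2
weights = inverse_frequency_weights(groups)
print(weights[0], weights[-1])  # "A" examples get 0.625, "B" examples get 2.5
```

Most training libraries accept per-example weights of this form (e.g. a `sample_weight` argument), so the correction can be applied without changing the model itself.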

Algorithmic & dataset transparency: Organizations can implement public processes for measuring and reporting bias. For example, Google has introduced a Model Card reporting system that explains the data and algorithm employed, details performance evaluations, and discloses intended use cases. Such transparency encourages public review and accountability.
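A model card is, at its core, structured metadata published alongside a model. The sketch below shows the kinds of fields such a report might contain; the field names and numbers are illustrative and do not follow Google's exact Model Card schema:

```python
# Sketch of a minimal model-card record. All field names, values, and
# metrics are hypothetical, for illustration only.
model_card = {
    "model_details": {
        "name": "resume-screening-model",  # hypothetical model
        "version": "1.0",
        "intended_use": "Assist recruiters in ranking resumes; "
                        "not for fully automated rejection decisions",
    },
    "training_data": {
        "source": "internal hiring records, 2018-2023 (hypothetical)",
        "known_limitations": "under-represents some demographic groups",
    },
    "evaluation": {
        # Disaggregated metrics make group-level performance gaps visible
        "accuracy_overall": 0.91,
        "accuracy_by_group": {"group_a": 0.93, "group_b": 0.84},
    },
    "ethical_considerations": [
        "Historical hiring data may encode past discriminatory decisions",
    ],
}

# Simple transparency check: surface large performance gaps between groups
by_group = model_card["evaluation"]["accuracy_by_group"]
gap = max(by_group.values()) - min(by_group.values())
print(f"Max accuracy gap between groups: {gap:.2f}")
```

The value of this kind of report comes from the disaggregated evaluation: an overall accuracy figure can hide exactly the group-level disparities that anti-discrimination rules target.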

Third-party evaluations: A standardized system of review for AI algorithms would force organizations to adhere to comprehensive requirements for reducing discrimination. Various high-level solutions have been proposed by major organizations like the OECD and under frameworks like the European Convention on Human Rights, but no industry standards for measuring bias have yet been agreed upon.

What are current regulatory policies around discrimination requirements for AI?

China

Two major pieces of Chinese legislation have made references to combating AI discrimination. Though the language around discrimination was scrapped in the first, the 2023 generative AI regulations include binding but non-specific language requiring compliance with anti-discrimination policies for AI training and inference.

Algorithmic Recommendation Provisions, Article 10: The initial interim draft of this legislation prohibited the use of “discriminatory or biased user tags” in algorithmic recommendation systems. However, this language was removed in the final version effective in March 2022.

Generative AI Measures, Article 4.2: This provision requires the following: “During processes such as algorithm design, the selection of training data, model generation and optimization, and the provision of services, effective measures are to be employed to prevent the creation of discrimination such as by race, ethnicity, faith, nationality, region, sex, age, profession, or health”.

The EU

The EU AI Act directly addresses discriminatory practices by classifying AI systems according to their use cases. In particular, it classifies AI systems with the potential for discriminatory outcomes as high-risk systems and subjects them to strict anti-discrimination requirements, including:

AI systems that could adversely affect the health and safety of persons, or could lead to discriminatory practices.

AI systems used in education or vocational training, “notably for determining access to educational…institutions or to evaluate persons on tests...as a precondition for their education”.

AI systems used in employment, “notably for recruitment…for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships”.

AI systems used to evaluate the credit score or creditworthiness of natural persons, or for allocating public assistance benefits

AI systems used in migration, asylum and border control management

In particular, AI systems that provide social scoring of natural persons (which pose a significant discriminatory risk) are deemed unacceptable systems and are banned.

The US

The US government is actively addressing AI discrimination via two primary initiatives by the executive branch. However, both of these initiatives are non-binding and non-specific in nature: in particular, the Executive Order directs several agencies to publish guidelines, but doesn’t identify any specific requirements or enforcement mechanisms.

1. The AI Bill of Rights contains an entire section on Algorithmic Discrimination Protections. In particular, it emphasizes that consumers should be protected from discrimination based on their “race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.” Though this bill is non-binding, it establishes a general principle that the US executive branch can enforce through more specific regulations.

2. The Executive Order on AI directs various executive agencies to publish reports or guidance on preventing discrimination within their respective domains within 90–180 days of its publication. These include the following directly responsible parties:

a. Section 7.1: “The Attorney General of the Criminal Justice System, and the Assistant Attorney General in charge of the Civil Rights Division will publish guidance preventing discrimination in automated systems.”

b. Section 7.2.b.i: “The Secretary of HHS (The Department of Health and Human Services) will publish guidance regarding non-discrimination in allocating public benefits.”

c. Section 7.2.b.ii: “The Secretary of Agriculture will publish guidance regarding non-discrimination in allocating public benefits.”

d. Section 7.3: “The Secretary of Labor will publish guidance regarding non-discrimination in hiring involving AI.”