As artificial intelligence (AI) systems rapidly spread throughout society, legislators across the U.S. are racing to ensure that these systems are developed and deployed safely and fairly. The workplace is one area drawing increasing legislative attention.

Legislators have begun considering, and in a few cases even passed, bills aimed at preventing so-called “algorithmic discrimination” in the workplace. This refers to biased outcomes that can happen when employers use AI systems, or “automated decision tools” (ADTs), as a substantial factor in making consequential decisions such as whether to hire, promote, or discipline. According to the White House, “Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.”

Below, we summarize the status, applicability, and key provisions of the U.S. state- and local-level bills regulating algorithmic discrimination that are pending or have passed as of this article's publication.

Federal law groundwork

Developments at the federal level have paved the way for action at the state and local levels. While there is thus far no federal statute or regulatory rule specifically governing AI in employment, the federal government has issued instructive guidance and an Executive Order on the development and use of AI in the workplace.

Regarding the guidance, in 2021 the U.S. Equal Employment Opportunity Commission (EEOC) established an agency-wide “AI and Algorithmic Fairness Initiative” to help ensure that the use of AI in employment decisions complies with federal civil rights laws. In 2022, it published workplace guidance on two separate occasions, first regarding AI and the Americans with Disabilities Act, and second regarding AI and adverse impact under Title VII of the Civil Rights Act of 1964.

In 2023, President Biden issued the “Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence” (Executive Order 14110). The order directed federal agencies to take action within the following year to address the safe and responsible development and use of AI, including preventing discrimination in employment, and required federal agencies to designate a Chief AI Officer.

Agencies have been answering the executive branch’s call. For example, on June 3, 2024, the EEOC appointed Sivaram Ghorakavi as its Chief AI Officer. On April 24, 2024, the Department of Labor’s (DOL) Office of Federal Contract Compliance Programs issued guidance for federal contractors regarding the use of AI in hiring and employment practices, emphasizing that contractors could run afoul of federal employment laws if they completely eliminate humans from these processes. Similarly, on April 29, 2024, the DOL Wage and Hour Division issued its Field Assistance Bulletin No. 2024-1, warning that an employer’s use of AI and automated systems without “responsible human oversight” may lead to Fair Labor Standards Act violations regarding hours worked, wages owed, or lactation breaks, or Family and Medical Leave Act violations regarding an employee’s eligibility or certification for protected leave.

On the legislative front, Congress has proposed several bills regarding AI in the workplace, but none have passed yet. Notable for employers are the “No Robot Bosses Act” (Senate Bill 2419), the “Stop Spying Bosses Act” (Senate Bill 262), and the “Algorithmic Accountability Act of 2023” (Senate Bill 2892). These bills would, respectively, regulate an employer’s use of certain automated decision systems, restrict workplace surveillance, and require impact assessments of AI systems that employers use to make important employment decisions.

Recent state and local law efforts

State and local legislatures have taken varying approaches to combating perceived algorithmic discrimination in employment but, generally speaking, their proposed bills aim to prohibit algorithmic discrimination and establish regulations to prevent it. In May 2024, Colorado became the first state to enact an algorithmic discrimination law, after New York City’s first-of-its-kind local law took effect in July 2023. On August 12, 2024, Illinois joined these jurisdictions as Governor J.B. Pritzker signed a bill amending the Illinois Human Rights Act (IHRA) to prohibit algorithmic discrimination. These regulations typically require impact assessments, disclosures, notices, and risk management policies. Violations are usually subject to civil action or penalties enforced by the attorney general. The bills apply not only to “deployers” like employers who utilize ADTs, but also to “developers” who design or create the ADTs.

California – AB 2930

  • Status: Active bill; if passed, it will go into effect January 1, 2026.
  • Applies to developers and deployers; the current version applies to employers with 55 or more employees.
  • Prohibits ADTs that result in algorithmic discrimination; requires annual impact assessments, governance programs for ADTs, and policy disclosures; and allows public attorneys, including the Attorney General or the state Civil Rights Department, to file civil actions, and for courts to issue a $25,000 civil penalty per violation involving algorithmic discrimination.

Colorado – SB24-205 (The Colorado Artificial Intelligence Act (CAIA))  

  • Status: State law; passed May 17, 2024; goes into effect February 1, 2026.
  • Applies to developers and deployers.
  • Creates a duty of reasonable care for developers and deployers to protect Colorado residents from “known or reasonably foreseeable” risks of algorithmic discrimination; creates a rebuttable presumption of reasonable care if the developers and deployers meet certain requirements including but not limited to implementing a risk management policy, conducting annual impact assessments, and providing notices and public disclosures; and gives the Attorney General exclusive authority to enforce violations which constitute deceptive trade practices.
  • Note: On June 13, 2024, the Governor, Attorney General, and Senate Majority Leader announced a process to revise the CAIA “and minimize unintended consequences associated with its implementation” before its effective date.

Illinois – HB 3773, HB 5116, HB 5322

HB 3773:

  • Status: State law; passed August 12, 2024; goes into effect January 1, 2026.
  • Applies to employers with employees working in Illinois during 20 or more weeks preceding an alleged violation.
  • Amends the IHRA to prohibit algorithmic discrimination in employment decisions and the use of zip codes as a proxy for protected classes; requires employers to provide notice to employees when using ADTs.

HB 5116:

  • Status: Active bill; if passed, it will go into effect January 1, 2026.
  • Applies to deployers with 25 or more employees.
  • Prohibits algorithmic discrimination in the use of ADTs; requires deployers to conduct (and submit to the Illinois Department of Human Rights (IDHR)) annual impact assessments, provide notice to candidates and employees, make public disclosures, and maintain governance programs; allows employees to opt out of the use of ADTs where employment decisions would be made by an ADT without human review; provides for a 45-day cure period; creates a private cause of action beginning January 1, 2027, with violations subject to penalties of up to $10,000.

HB 5322:

  • Status: Active bill; if passed, it will go into effect January 1, 2026.
  • Applies to employers with 50 or more employees.
  • Requires deployers to conduct impact assessments and make public disclosures but protects assessments from FOIA disclosure and does not require submission to IDHR; gives the attorney general power to request copies of impact assessments and provides for enforcement in a civil action if the deployer refuses.

Massachusetts – H1873

  • Status: Active bill.
  • Applies to employers.
  • Requires employers to conduct regular impact assessments and provide notice prior to use of ADTs; prohibits employers from relying solely on ADTs to make employment decisions without human review; gives employees the right to dispute impact assessments and request investigations, as well as a private cause of action, with violations subject to penalties between $2,500 and $20,000 each.

New Jersey – A3854 and A3911

A3854:

  • Status: Active bill.
  • Applies to employers using ADTs to screen candidates.
  • Prohibits the sale or use of ADTs unless the tool has undergone an independent bias audit for algorithmic discrimination within the previous year; and requires public disclosures and notice to employees before using ADTs.

A3911:

  • Status: Active bill.
  • Applies to employers requiring candidates to record and submit video interviews as part of the hiring process.
  • Requires the candidate’s written consent prior to use of ADTs and destruction of recorded interviews upon request; requires public disclosures and notice to employees before using ADTs; and makes violations subject to penalties between $500 and $1,500 each.

New York – S7623 and A9315

  • Status: Both bills are active.
  • Applies to employers with 100 or more employees and vendors/developers.
  • Prohibits the use of ADTs to assist in employment decisions unless the tool has undergone an independent bias audit for algorithmic discrimination within the previous year, and prohibits use of an ADT as the sole decision-maker without human review; requires employee consent to the use of an ADT in employment decisions concerning them; requires employers to conduct annual impact assessments and provide notice to employees before using ADTs; gives employees the right to request a copy of, and correct inaccuracies in, their data once per 12-month period; and creates a rebuttable presumption of unlawful retaliation if the requesting employee is subject to adverse action within 90 days of requesting data or complaining about ADT decisions.

New York City – Local Law 144

  • Status: City law; went into effect July 5, 2023.
  • Applies to employers with employees residing in New York City.
  • Prohibits the use of ADTs in a predominant role in employment decisions unless the tool has undergone an independent bias audit for algorithmic discrimination within the previous year; requires employers to provide notice to candidates and employees that an ADT will be used and to make public disclosures summarizing bias audit findings; makes violations subject to penalties between $500 and $1,500 each.

Rhode Island – S2888

  • Status: Active bill.
  • Applies to developers and deployers of high-risk AI systems.
  • Requires developers and deployers to maintain a risk management policy, perform annual impact assessments and design evaluations, provide notices and disclosures, and publicly certify compliance; creates enforcement mechanisms limited to the attorney general’s subpoena power to review impact assessments and design evaluations.

Virginia – HB 747

  • Status: Active bill; if passed, it will go into effect July 1, 2026.
  • Applies to developers and deployers of high-risk AI systems.
  • Creates operating standards for AI systems including but not limited to disclosures, notices, avoiding any reasonably foreseeable risk of algorithmic discrimination, implementing a risk management policy, and completing impact assessments; gives the attorney general enforcement authority and the ability to file a civil action, with violations subject to penalties of up to $10,000 each.

District of Columbia – Bill 25-0114

  • Status: Active bill.
  • Applies to entities that make or rely on algorithms for decisions about important life opportunities in a broad range of industries including employment, and that meet certain threshold criteria.
  • Prohibits algorithmic discrimination; requires notice, public disclosures, and annual impact assessments; gives the attorney general enforcement authority and the ability to bring civil actions; and creates a private cause of action, with violations subject to penalties of up to $10,000 each.

The legislative landscape for algorithmic discrimination is dynamic and changing quickly. Several bills are working their way through the legislative process, and a few have already passed. Employers should monitor these bills, keep in mind that it may be only a matter of time before an algorithmic discrimination law is enacted in their jurisdiction or nationwide, and ensure that they have, or will have, appropriate policies and procedures in place to prevent algorithmic discrimination.