Employment Discrimination Using AI Is Still Discrimination

By Odia Kagan on August 20, 2025
California state flag.

If artificial intelligence results in employment discrimination, employers need to realize that it is still discrimination.

But a bias audit can help.

California’s Civil Rights Council recently issued the Final Text of Proposed Employment Regulations Regarding Automated-Decision Systems. The regulations go into effect on October 1, 2025.

Some key takeaways:

  • It is unlawful for an employer or other covered entity to use an automated-decision system or selection criteria that discriminates against an applicant or employee or a class of applicants or employees on a basis protected by the Act. This includes any such practice conducted in whole or in part through the use of an automated-decision system.
  • Relevant to a claim of employment discrimination or an available defense of that claim is evidence, or the lack of evidence, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results.

Practices that could be in scope:

  • The use of online application technology that limits, screens out, ranks, or prioritizes applicants based on their schedule may discriminate against applicants based on their religious creed, disability, or medical condition, unless it is job-related and consistent with business necessity.
  • The use of an automated-decision system that, for example, measures an applicant’s skill, dexterity, reaction time, and/or other abilities or characteristics may discriminate against individuals with certain disabilities or other characteristics.
  • An automated-decision system that, for example, analyzes an applicant’s tone of voice, facial expressions or other physical characteristics or behavior may discriminate against individuals based on race, national origin, gender, disability, or other characteristics protected under the Act.
  • Any policy or practice of an employer or other covered entity that has an adverse impact on employment opportunities of individuals on a basis enumerated in the Act is unlawful unless the policy or practice is job-related and consistent with business necessity.
  • A testing device, automated-decision system, or other means of selection that is facially neutral, but that has an adverse impact is permissible only upon a showing that the selection practice is job-related and consistent with business necessity.
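The adverse-impact standard described above is often assessed, as a first pass, with the EEOC "four-fifths rule": a protected group's selection rate below 80% of the highest group's rate is treated as evidence of adverse impact. The regulations do not prescribe any particular test; this is a minimal illustrative sketch of that arithmetic, with hypothetical group names and counts, not a complete bias audit.

```python
def selection_rates(outcomes):
    """outcomes maps group -> (selected, applicants); returns group -> rate."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}


def adverse_impact(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the four-fifths rule). Returns group -> impact ratio
    for each flagged group."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}


# Illustrative numbers only: group_b's rate (0.30) is 0.625 of group_a's
# rate (0.48), below the 0.8 threshold, so group_b is flagged.
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
flagged = adverse_impact(outcomes)
```

A ratio below the threshold is not a legal conclusion; it is the kind of anti-bias testing result whose quality, recency, scope, and follow-up the regulations make relevant evidence.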

Automated decision systems may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing techniques. They include:

  • Using computer-based assessments.
  • Directing job advertisements or other recruiting materials to targeted groups.
  • Screening resumes for particular terms or patterns.
  • Analyzing facial expression, word choice, and/or voice in online interviews.
  • Analyzing employee or applicant data acquired from third parties.
  • Posted in: Privacy & Data Security
  • Blog: Privacy Compliance & Data Security
  • Organization: Fox Rothschild LLP

Copyright © 2025, LexBlog. All Rights Reserved.