Employers can’t outsource discrimination to an algorithm

By Jesse Beatson on March 23, 2026

AI is new and shiny. Employment law is not.

Mobley v. Workday proves the point. The court concluded that employers don’t get to outsource liability just because they’ve outsourced the tool to an AI vendor.

The plaintiffs, a nationwide class of job applicants over the age of 40, allege that employers’ use of Workday’s AI-driven screening tools discriminates on the basis of age. Whether those claims ultimately stick is a question for another day. But the legal framework governing them is old, settled, and very familiar. Discrimination is discrimination—whether it’s carried out by a hiring manager, a spreadsheet, or an outsourced algorithm.

What would have been surprising is the opposite outcome—if the court had said, “Not your problem, employer, your vendor did it.” That’s not how employment law works. It never has been.

If your hiring process produces a disparate impact, you own it. Full stop.

This case—and others like it percolating through the courts—should recalibrate how employers think about HR tech. AI doesn’t create new legal obligations. It just exposes how seriously you’re taking the ones that already exist.

So what should you be doing now?

Start with your contracts. If you’re relying on a vendor’s AI to source, screen, or rank candidates, you need to understand exactly how liability is allocated. Who is indemnifying whom? For what claims? With what caps and carveouts? “Trust us” is not a risk mitigation strategy.

Next, build audit rights into those agreements—and use them. You should have the contractual ability to test your vendor’s tools for disparate impact and to obtain meaningful information about how those tools function. If you can’t evaluate it, you shouldn’t be using it.
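
For illustration only, here is a minimal sketch (in Python, using pandas) of what that kind of disparate-impact testing can look like once you have exercised those audit rights, assuming your vendor can give you applicant-level outcomes. The column names and numbers are hypothetical, and the four-fifths comparison is a common screening heuristic, not a legal safe harbor.

import pandas as pd

def adverse_impact_ratios(df, group_col="age_group", outcome_col="advanced"):
    # Selection rate per group: the share of applicants the tool advanced.
    rates = df.groupby(group_col)[outcome_col].mean()
    # Compare each group's rate to the most-favored group's rate.
    ratios = rates / rates.max()
    # Ratios below 0.8 (the "four-fifths" guideline) are a common red flag.
    return ratios, ratios[ratios < 0.8]

# Hypothetical audit data: 200 applicants under 40, 200 applicants 40 and over.
applicants = pd.DataFrame({
    "age_group": ["under_40"] * 200 + ["40_and_over"] * 200,
    "advanced":  [1] * 120 + [0] * 80 + [1] * 70 + [0] * 130,
})

ratios, flagged = adverse_impact_ratios(applicants)
print(ratios)   # each group's selection rate relative to the most-favored group
print(flagged)  # groups falling below the four-fifths threshold

With the hypothetical numbers above, applicants 40 and over advance at 35% versus 60% for those under 40, a ratio of about 0.58, well below the four-fifths mark, and exactly the kind of result your audit rights should let you catch and interrogate.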

Also, don’t treat AI as a black box. You don’t need to code it, but you do need to understand how it’s trained, what data it relies on, and where bias might creep in. Speed and efficiency are great. Not at the expense of compliance.

Finally, own the outcomes. If a tool flags—or filters out—candidates, that’s your hiring decision. Regulators and courts aren’t going to draw a distinction between “human” and “machine-assisted” discrimination.

AI may be the new frontier. The rules governing it are not. Ignore that at your peril.

     


