Your ChatGPT history as a hiring test? That’s a hard no.

By Jesse Beatson on February 18, 2026

“Take out your phone and open your ChatGPT app. Type this prompt: ‘Based on my past conversations, analyze my behavioral tendencies.’”


In a Reddit post that has gone viral, that’s what someone claims just happened to them during a job interview.

If that interview scenario is real, the issues aren’t just ethical. They’re also potentially legal.

Here’s the framework. Employers are allowed to use personality assessments in hiring. But those assessments live in a carefully regulated space.

Under the ADA, an employer may not require a medical examination or make disability-related inquiries before a conditional offer of employment. The EEOC draws a line between permissible “personality tests” (measuring traits like honesty or preferences) and impermissible medical or psychological exams that screen for mental disorders such as depression, anxiety, or PTSD.

A standard, validated personality assessment that does not diagnose or identify mental impairments is generally lawful pre-offer. But a tool designed to reveal mental health conditions—or that predictably elicits that information—crosses into prohibited territory.

Now apply that framework to the interview prompt: “Based on my past conversations, analyze my behavioral tendencies.”

What’s in those past conversations? For many users: therapy-adjacent discussions, stress about family, questions about ADHD, depression, medications, burnout, addiction, trauma. If an employer requires a candidate to generate and disclose a summary built from that data, it is difficult to argue the employer is not, at minimum, eliciting disability-related information.

Intent isn’t the only issue. Effect matters. If the process predictably surfaces mental health indicators, the employer may be conducting an unlawful pre-offer medical inquiry—without calling it one.

There’s another problem. Personality testing must be job-related and consistent with business necessity if it disproportionately screens out individuals with disabilities. An AI-generated “behavioral tendencies” report is unlikely to be validated for any specific role. No validation study. No reliability metrics. No guardrails. Just a black box summary.

That’s what we lawyers call a precursor to litigation.

Add in the power imbalance of an interview setting, and “voluntary” disclosure becomes legally murky. If a candidate feels compelled to reveal information that touches on protected conditions, you’ve created risk before the first day of employment.

AI in hiring isn’t inherently unlawful. But using a candidate’s personal AI history as a de facto psychological assessment? That starts to look a lot like a medical exam dressed up as innovation.

When AI tools wander into the territory of mental health assessment, even indirectly, the ADA is not optional. Employers who ignore that line do so at their peril.



  • Posted in: Employment & Labor
  • Blog: Ohio Employer Law Blog
  • Organization: Jon Hyman

Copyright © 2026, LexBlog. All Rights Reserved.