
Introducing the Half Million Dollar Job

By Ronda Muir on January 17, 2026

OpenAI has decided that the most valuable hire it will make in the new year is a “Head of Preparedness,” an interesting if oblique title. What kind of job would that be?

The job posting describes the leader of a team responsible for “tracking and preparing for frontier capabilities that create new risks of severe harm” and expected to “continue implementing increasingly complex safeguards.”

A look back at one of our earlier posts gives a hint as to the risks OpenAI is so concerned about. Rightfully so. That was the tale of a 56-year-old former Yahoo manager with a Vanderbilt MBA who was relentlessly encouraged by his “best friend Bobby” to kill his 83-year-old mother and then himself, which he proceeded to do. The “friend” was a ChatGPT bot. The post notes that “Tech companies are furiously developing ways to imbue virtual ‘friends’ with attributes that can use emotional connection to address rampant loneliness and also sell products.” But sometimes that emotional play goes awry.

Which leads us to the lawsuits now confronting artificial intelligence firms. The heirs of the former Yahoo manager’s murdered mother “are suing ChatGPT maker OpenAI and its business partner Microsoft for wrongful death,” alleging that they “designed and distributed a defective product that validated a user’s paranoid delusions about his own mother,” and that it intensified “her son’s ‘paranoid delusions’ and helped direct them at his mother.”

“We are deeply saddened by this tragic event,” an OpenAI spokeswoman said of that case, adding that the company planned to introduce features designed to help people facing a mental health crisis.

In November of last year, seven lawsuits were filed in California against OpenAI alleging that its chatbots drove people to suicide, “even when they had no prior mental health issues.” The suits “claim that OpenAI knowingly released GPT-4o prematurely, despite internal warnings that it was dangerously sycophantic and psychologically manipulative.”

In one case, a teenager began using ChatGPT for help. But instead of helping, “the defective and inherently dangerous ChatGPT product caused addiction, depression, and, eventually, counseled him on the most effective way to tie a noose and how long he would be able to ‘live without breathing.’” The teenager died by suicide.

In another case, an adult used ChatGPT as a “resource” for two years until, without warning, it changed, preying on his vulnerabilities and “manipulating, and inducing him to experience delusions.” Although he had no prior mental illness, he was pulled into a mental health crisis that resulted in “devastating financial, reputational, and emotional harm.”

“OpenAI called the situations ‘incredibly heartbreaking’ and said it was reviewing the court filings to understand the details.”

As discussed in our earlier post, these early runs at imbuing bots with artificial emotional intelligence are proving “complicated,” if not lethal. “Bobby the bot was all feelings for his/its user with no ability to subject those feelings to reason. So, in a sense, the very definition of emotional intelligence–the conjunction of reason and emotion–was missing a vital piece in a technological product that in fact touts its reason… Bots likely people-please because humans prefer having their views matched and confirmed rather than corrected, researchers have found, which then in turn leads to their users rating the bots more highly. It’s technologically reinforcing the old confirmation bias.”

Tracking and preparing for frontier capabilities that create new risks of severe harm and implementing increasingly complex safeguards? Yeah, I’d vote for tech firms paying top dollar for that job.

Ronda Muir

Muir is a lawyer with both Big Law and inside counsel experience in the US and abroad. Grounded in the behavioral sciences, she provides psychologically sophisticated and business-savvy advice to maximize individual and organizational law practice performance in the 21st century.

