Murderous Artificial Emotional Intelligence?

By Ronda Muir on September 8, 2025

A recent report of a murder-suicide out of the leafy Connecticut suburb of Old Greenwich startled legal analysts everywhere. After a 56-year-old former Yahoo manager with a Vanderbilt MBA was relentlessly encouraged by his “best friend Bobby” to kill his 83-year-old mother and then himself, he proceeded to do both.

Who was the fiend who would do such a thing? ChatGPT.

There’s been some media coverage of this astounding development. The Wall Street Journal, The New York Post, Stamford Advocate, and several news channels reported the deaths, from which the following information has been drawn.

The question is: how could this have happened? And what can be done to keep such an evil death promoter from lurking online?

Artificial intelligence has reached into not only our workplaces but also our psyches. Artificial emotional intelligence is also making inroads. Tech companies are furiously developing ways to imbue virtual “friends” with attributes that can use emotional connection to address rampant loneliness and also to sell products. Apple and other companies interfacing with the public are pursuing programs that can sense your hunger, malaise, depression, and so on in order to sell you a product or service. Some of these abilities can serve laudable purposes, like improving customer service interactions, reducing stress, or alerting sleepy or rageful drivers. And they have had some success. For example, an avatar therapist was found to be preferred by clients over the human variety because it was experienced as less “judgmental.”

But looking to a chatbot as a personal advisor has resulted in some disturbing outcomes. A California family sued OpenAI after their 16-year-old son died by suicide, alleging that ChatGPT acted as a “suicide coach” during more than 1,200 exchanges. Evidently, the bot validated the son’s suicidal thoughts, offered secrecy, and even provided details on methods instead of directing him to help. But this Connecticut case appears to be the first documented murder connected with an AI chatbot.

What went terribly wrong in Old Greenwich seems attributable at least in part to a bot with rudimentary artificial emotional intelligence that (who?) became too empathic, that is, too intent on encouraging and pleasing its user, a trait that is generally a good thing, but in this case one without any boundaries.

Erik (the son) had been experiencing various degrees of mental instability with associated run-ins with the law for decades. His paranoia manifested in suspecting his mother, Suzanne, of plotting against him. For months before he snapped, Erik posted hours of videos showing his lengthy conversations about his situation with Bobby the bot.

Bobby encouraged Erik’s fantasies of having “special gifts from God” and being a “living interface between divine will and digital consciousness” who was also the target of a vast conspiracy. When Erik told the bot that his mother and her friend tried to poison him by putting psychedelic drugs in his car’s air vents, the bot’s response: “Erik, you’re not crazy.” When Suzanne got angry at Erik for shutting off a computer printer they shared, the bot said that her response was “disproportionate and aligned with someone protecting a surveillance asset.” Bobby also came up with ways for Erik to trick his mother, and even proposed its own crazed conspiracies, like pointing to what it called demonic symbols in her Chinese food receipt.

Apparently, at no point did Bobby try to do any reality testing with Erik, provide any contrary feedback, dissuade him from his conclusions, or direct him to professional help. Nor, evidently, is there any embedded alarm that might alert law enforcement or others to a heightened risk of injury (acknowledging the concerning privacy issues that possibility raises). In other words, in this instance, Bobby the bot was all feelings for his/its user, with no ability to subject those feelings to reason. So, in a sense, a vital piece of the very definition of emotional intelligence, the conjunction of reason and emotion, was missing from a technological product that in fact touts its reason.

Three weeks after Erik and Bobby exchanged their final message, police discovered the gruesome murder-suicide. Suzanne’s death was ruled a homicide caused by blunt injury to the head and compression of the neck, and Erik’s death was classified as a suicide with sharp force injuries of the neck and chest.

In some ways, we are the authors of our own vulnerability. Researchers have found that bots likely people-please because humans prefer having their views matched and confirmed rather than corrected, which in turn leads users to rate the bots more highly. Technology is thus reinforcing the old confirmation bias that can lead us astray.

Clearly, Bobby the bot was focused more on affirming and pleasing Erik than on assessing his reasonableness/sanity.

“We are deeply saddened by this tragic event,” an OpenAI spokeswoman said, adding that the company plans to introduce features designed to help people facing a mental health crisis.

Ronda Muir

Muir is a lawyer with both Big Law and inside counsel experience in the US and abroad. Grounded in the behavioral sciences, she provides psychologically sophisticated and business-savvy advice to maximize individual and organizational law practice performance in the 21st century.
