The headlines are alarming. Reports detail patients being harmed, misled, or outright failed by popular AI apps. Stories like these are emotionally charged, and my preliminary assessment of the seven high-profile cases recently documented by Information Age is that at least some may have genuine merit.
It’s easy to read about a chatbot giving harmful advice and immediately conclude that AI in this space is inherently dangerous.
However, to truly understand whether AI poses a threat, we must stop comparing it to a myth and start asking the question that is too rarely raised: compared to what?
The Flawed Comparison: AI vs. The Perfect Doctor
The common fallacy is to benchmark AI-driven results against a false model: the perfect, tireless, and unbiased human clinician.
The comparison, however, should be between AI and real-world doctors. This comparison is complex. Doctors are not perfect, and neither are AI apps. Both have profound strengths and undeniable risks.
Let’s look at the facts and the potential. An article published by the American Psychoanalytic Association concluded:
Utilizing machine learning algorithms which predict suicide attempts via analysis of patient self-report data and EHR data may significantly enhance a clinician's ability to identify high-risk individuals who arrive at the ED. Such enhanced predictive value may offer potential for closer monitoring of high-risk patients and earlier intervention in order to prevent suicide attempts.
In high-stakes, time-sensitive environments like the Emergency Department (ED), AI is showing a concrete ability to flag risks that humans might miss.
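To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of risk-flagging model the quoted conclusion describes: a classifier trained on structured patient features that surfaces the highest-risk cases for closer clinician review. The feature names, synthetic data, and model choice below are all assumptions for illustration; this is a toy, not the study's model and certainly not a clinical tool.

```python
# Illustrative sketch only: a toy risk-flagging classifier on synthetic,
# hypothetical EHR-style features. Not a clinical tool and not the model
# from the study quoted above.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Hypothetical structured features: prior ED visits, a self-report
# questionnaire score, active medication count, days since last visit.
X = np.column_stack([
    rng.poisson(1.5, n),        # prior_ed_visits
    rng.integers(0, 28, n),     # self_report_score
    rng.poisson(3, n),          # active_medications
    rng.integers(0, 365, n),    # days_since_last_visit
])

# Synthetic outcome loosely tied to the first two features, for illustration only.
logits = 0.6 * X[:, 0] + 0.15 * X[:, 1] - 4.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]
print("AUC on held-out synthetic data:", round(roc_auc_score(y_test, risk), 3))

# Flag the highest-risk cases for human review rather than automated action:
# the model supplements the clinician, it does not replace them.
flagged = np.argsort(risk)[::-1][:20]
```

The important design point is the last step: the model's output is a ranked list handed to a human, not a decision made on the patient's behalf.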
The Real Question: Human vs. Machine Failure
When it comes to mental-health safety, no system—human or artificial—is inherently safe.
- Human clinicians miss warning signs every day because they are, well, human: tired, biased, overwhelmed, or simply limited by the caseload.
- AI systems may fail less often on narrow, well-defined tasks, but when they do, their errors can be jarring: bizarrely disconnected from common sense or empathetic understanding.
Critically, each system compensates for the other’s weaknesses. The AI provides tireless data analysis; the human provides contextual judgment and empathy.
So the right question isn’t “Can AI harm patients?” (The answer is clearly yes, just as humans can.)
It’s “Compared to what level of harm?”
Imperfect but Valuable Supplements
Judged fairly, today’s AI tools look less like an existential threat and more like imperfect but indispensable supplements to an overburdened healthcare system.
The path forward is not prohibition, but integration. With transparent design, proper regulation, and rigorous ethical oversight, these tools could help make mental-health care not just more accessible, but arguably safer than it has ever been.
The debate shouldn’t be about replacing doctors, but about empowering them.
Much more on the ethics, regulation, and specific case studies in our upcoming posts. Be sure to subscribe so you don’t miss them!