Confabulations Cause Hallucinations

If you’ve read any of my other posts on AI, you know my key concern with it: confabulations cause hallucinations.

Case in point — and I’ll come back to this later — the other day I ran across an article about a dead man “coming to court” to give a victim impact statement regarding his own murder. I sent it to a couple of friends, including Scott Greenfield, who blogged about it.

As Scott wrote in an update to that post, victims’ advocates love this new move. Never mind that the whole thing is a total fiction.

Fictions bring convictions, after all.

Confabulation vs. Hallucination

Let’s clear up a distinction that I consider philosophical and terminological — and not merely semantic.

What Are Hallucinations?

As I explained in a previous blog post on artificial intelligence:

In AI circles, you’ll often hear people say that large language models “hallucinate.” I’ve used this term myself in my past posts. It’s the industry’s shorthand for when the model makes something up: a case that doesn’t exist, a quote that no one ever said, a citation to one or more books or articles that exist only for the ghosts in the machine.

— Rick Horowitz, Ghosts in the Machine: Why Language Models Seem Conscious, § Hallucinations & Confabulations (Apr. 15, 2025)

But as I also explain in that post, “hallucination” is not really the right word. “Hallucination” refers to something which the brain believes it has sensed, even though it’s not real — it’s not really “there” in the outside world.

A hallucination is a false perception of objects or events involving your senses: sight, sound, smell, touch and taste. Hallucinations seem real, but they’re not.

— Cleveland Clinic, Hallucinations, Clev. Clinic (last visited May 10, 2025)

Although “hallucinations” may rely to some degree on sensory input, they are mere figments of the imagination.

I think “confabulation” is a better explanation of what happens when an artificial intelligence produces a falsity, whether in response to a prompt, or otherwise.

Human beings might hallucinate (“see,” or otherwise sense) ghosts, but the ghosts in the machine sense nothing. They are not sentient. They do, however, produce “explanations,” some of which are pure confabulations.

What Is Confabulation?

This idea of confabulation comes to us from the field of psychology. Human beings confabulate frequently — perhaps just as frequently as LLMs. As I explained in Ghosts in the Machine, confabulation is a way of “filling in the blanks” with something that seems completely plausible. So plausible, in fact, that it is taken even by the confabulator to be true.

Most of the time, you’ll hear that confabulation has to do with memories. Often the explanation is that confabulation is a neuropsychiatric phenomenon where individuals create false memories without the intention of deception.

Confabulation occurs when individuals, including victims, witnesses, or suspects, recall events or details that did not actually happen, blending fiction with reality. Importantly, confabulation is not intentional deception or lying; rather, it arises from false or misplaced memory errors triggered by various factors such as trauma, stress, brain injuries, or cognitive impairments. In the legal context, confabulation can lead to false testimonies, wrongful accusations, and miscarriages of justice.

— Janina Cich, Navigating Confabulation: A Toolkit for Criminal Justice, Police Chief Online (Feb. 14, 2024)

But that’s not completely accurate.

Confabulation isn’t limited to faulty memory. It can show up any time we need to explain something.

For example, psychological studies make it clear that people often don’t know why they’ve done something. Yet, when asked, they produce a reason that sounds completely logical. They confabulate a justification. The brain doesn’t like to leave blanks unfilled. It prefers a tidy narrative, even if it’s made up after the fact.

Split-brain research makes this especially clear. You may have heard that the brain is divided into two “hemispheres,” and you’ll sometimes hear about the “right brain” versus “left brain” myth.

In patients whose hemispheres were surgically separated, one side of the brain was shown an image and acted on it. The other — the language-dominant side — had no way of “seeing” the image, or even knowing about it because, remember, the pathway between the hemispheres had been cut. It was no longer there.

When asked why they had performed the action, the verbal hemisphere would offer a perfectly plausible explanation — entirely invented. It didn’t lie. It simply had no access to the real cause, so it came up with a plausible story. Michael Gazzaniga coined the term “interpreter” for what the left brain does: it fills in gaps, invents explanations, and spins coherent stories even when reality is fragmented or incomplete. Sound familiar?

We do this all the time. We say we bought something because it was on sale, not because we were sad. We say we felt “uneasy” about someone, not because of unconscious bias. We say we objected because of the rule, not because of pride or spite or fear.

In all these cases, we believe the stories we tell — even when they aren’t the truth. That’s the power of confabulation. It doesn’t just shape our memory. It even shapes our identity.

This “hidden” yet pervasive nature of confabulation makes it all the more dangerous when AI does it. Because we’ve trained ourselves to accept a smooth explanation as evidence of understanding, and because the AI’s explanation makes sense, we’re sucked into the story. We believe it.

If you don’t understand this about AI, all kinds of things can go wrong.

Hallucination vs. Confabulation

AI doesn’t hallucinate — it confabulates.

Hallucination implies sensory error. But AI has no senses, no awareness.

Confabulation means filling in blanks with plausible fictions — confidently, fluently, and without knowing they’re false.

That’s what LLMs do. And we believe them — at our own peril — because plausibility makes the errors that much harder to spot.

The Delphic Oracle Returns, But Digitized

In an earlier post, I described ChatGPT — and other large language models — as a “Twenty-First Century Delphic Oracle.”

Like the Pythia of ancient Greece, today’s AI delivers answers in response to questions posed by mortals. But instead of inhaling volcanic fumes, it breathes in curated datasets and spits out fluently ambiguous answers that sound insightful — even when they aren’t.

That post made the case that:

  • AI only appears intelligent because we’ve trained it to mimic us, essentially copying our language, our reasoning patterns, and — when applied to the law — even our legal structures.
  • Its outputs are shaped entirely by its inputs and are just as biased as the curated and pre-chewed data it has been fed.
  • And yet, despite the uncertainty, ambiguity, and risks, we are increasingly treating this Oracle as a source of truth.

I also noted that AI’s usefulness can’t be ignored — but neither can its limitations, especially in law. When the people building the tools don’t understand the field they’re building for, we get what Scott Greenfield called “computer nerds[,]” rather than lawyers, writing algorithms that allow AI “to play judge, jury, and executioner.”

The Hallucination of Justice: AI Invades Our Courts

Now that we’ve explored the roots of confabulation and hallucination — both human and artificial — and distinguished between them, let’s look at how AI confabulations are unleashed in real courtrooms. The results are sometimes absurd, sometimes terrifying, and always bullshit.

The Case of the Missing Cases

By now, many people have heard stories of the lawyers who got into trouble because they trusted AI. Specifically, they used large language models to help them research and write briefs that they then filed in real courts.

They did this not knowing that some of the cases they cited and relied upon were not real cases.

In another “hard lesson learned” case, on Monday, February 24, 2025, a federal district court sanctioned three lawyers from the national law firm Morgan & Morgan for citing artificial intelligence (AI)-generated fake cases in motions in limine. Of the nine cases cited in the motions, eight were non-existent.

— Linn F. Freedman, “Lawyers Sanctioned for Citing AI-Generated Fake Cases,” Nat’l L. Rev. (2025)

Yikes! Just another reminder of why I don’t let AI help me do legal research or write briefs for me.

That said, all the major legal research tools are now infected with AI. So are police reports and even “discovery” provided by prosecutors in criminal cases.

The Case of the Avatar Defendant

A few weeks ago, Jerome Dewald represented himself in court.

Dewald supposedly has a background in engineering and computer science, but he’s not a lawyer. He owns a company — an AI startup called Pro Se Pro — that aims to help people represent themselves in court without lawyers.

Dewald request[ed] to play a video arguing his case, as according to him a medical condition had left the entrepreneur unable to easily address the court verbally in person at length. The panel was not expecting a computer-imagined person to show up, however.

— Thomas Claburn, AI entrepreneur sent avatar to argue in court – and the judge shut it down fast (Apr. 9, 2025)

In this case, had the judge allowed the video to go on, the result might have just been odd. Despite his “engineering and computer science” background, Dewald had been unable to properly create the avatar of himself. He used a generic avatar named “Jim” instead.

The court was understandably confused and asked who this “new” person was that was addressing the court.

Suffice it to say the court was not pleased with the explanation. The judges believed — wrongly, according to Dewald — that the video was a publicity stunt to advertise his AI startup by showing off an “AI lawyer.”

“Beyond” Forgiveness

The court hearing Dewald’s case disallowed his avatar. Another court, however, did something even more absurd: it allowed the introduction of a total fiction — an AI-generated representation of a murder victim who “appeared” in court from beyond the grave to forgive his murderer.

“To Gabriel Horcasitas, the man who shot me, it is a shame we encountered each other that day in those circumstances,” says a video recording of Pelkey. “In another life, we probably could have been friends.

“I believe in forgiveness, and a God who forgives. I always have, and I still do,” Pelkey continues, wearing a grey baseball cap and sporting the same thick red and brown beard he wore in life.

— Cy Neff, AI of dead Arizona road rage victim addresses killer in court (May 6, 2025)

Just one problem. Well, okay, a lot of problems. First, the “man” in the video wasn’t Pelkey: it was an AI-generated version of him. Second, it wasn’t Pelkey’s voice: it was an AI-generated version of it. And, third, those weren’t Pelkey’s words. The article doesn’t say who wrote the words. It might have been Pelkey’s sister; it might have been an AI.

In any event, every part of the video was a fictionalized representation of what someone else thought Pelkey might have said, if he really could have returned from the beyond to appear in court.

Talk about confabulation!

Difface: Seeing What Isn’t There

I was already going to write this post because of the Pelkey case I just told you about.

Then my friend, Mike Hamilton, posted an article about AI generating people’s faces using just their DNA.

Chinese scientists fed a string of genetic code into an artificial intelligence model. What came out looked like a face — a strikingly realistic, three-dimensional face. No photograph had ever been taken. No sketch artist had drawn it. The only source was DNA.

— Tibi Puiu, A New AI Tool Can Recreate Your Face Using Nothing But Your DNA (May 9, 2025)

But, as another (misleadingly titled) article points out, all is not hunky-dory.

Despite this, the team brought up some challenges with developing Difface, such as a limited knowledge of facial genetics and technological difficulties with high-dimensional data and a small sample size. They believe that Difface’s underlying structure is flexible enough to apply it to different ethnic groups, so they hope that expanding the database to include individuals from a wide range of ethnic backgrounds can help Difface generate accurate facial images. The team said future studies will have the opportunity to explore whether Difface needs more genetic loci to identify certain facial features, which could make the model more valuable to forensic investigations and personal medicine on a global scale. 

— Kendra Leon Barrionuevo, ‘Difface’ uses crime scene DNA to show what suspects look like (May 7, 2025)

In short, we don’t know enough about genetics, and this particular experiment has other issues, so it remains unclear how well this tech could actually “use[] crime scene DNA to show what suspects look like.”

But, remember, fictions bring convictions. I expect it won’t be long before law enforcement tries to turn this fake headline into a reality.

When Confabulations Cause Hallucinations

Let’s not kid ourselves: the real danger isn’t just that AI lies. It’s that people believe the lies.

And not just ordinary people. Judges. Prosecutors. Jurors. Even defense lawyers sometimes.

Because the Oracle has spoken. But who is the Oracle? And how does it work?

We don’t let anonymous experts testify. We don’t let prosecutors just say “trust us” when it comes to the evidence. (Or at least, we’re not supposed to. Reality doesn’t always match this aspiration.)

But now we’ve got black box algorithms being treated like witnesses whose evidence never lies. They’re showing up in discovery, pretrial risk assessments, sentencing reports, and more. Defense attorneys have almost no way to question how they work, what they’ve been trained on, or how many false positives they’ve kicked out — in other words, how often they confabulated — in other cases.

On the plus side, some courts get it.

Courts are beginning to address such issues, including discovery in criminal cases. In an important ruling in State v. Arteaga, a New Jersey Appellate Court affirmed a trial court order, ruling that if the prosecutor planned to use FRT [facial recognition technology], or the eyewitness who selected the defendant in a photo array, then they must provide the defense with information concerning “the identity, design, specifications, and operation of the program or programs used for analysis, and the database or databases used for comparison,” as all “are relevant to FRT’s reliability.” The court concluded, the “[d]efendant must have the tools to impeach the State’s case and sow reasonable doubt.”

— AI and the Criminal Justice System Working Group, Input Regarding AI and Criminal Justice, at 10 (Wilson Ctr. for Sci. & Just. June 2024) (footnotes omitted)

There’s a problem here, though. No one even knows how AI does what AI does!

It’s all a black box.

A black box AI is an AI system whose internal workings are a mystery to its users. Users can see the system’s inputs and outputs, but they can’t see what happens within the AI tool to produce those outputs.

Consider a black box model that evaluates job candidates’ resumes. Users can see the inputs—the resumes they feed into the AI model. And users can see the outputs—the assessments the model returns for those resumes. But users don’t know exactly how the model arrives at its conclusions—the factors it considers, how it weighs those factors and so on. 

Many of the most advanced machine learning models available today, including large language models such as OpenAI’s ChatGPT and Meta’s Llama, are black box AIs. These artificial intelligence models are trained on massive data sets through complex deep learning processes, and even their own creators do not fully understand how they work. 

These complex black boxes can deliver impressive results, but the lack of transparency can sometimes make it hard to trust their outputs. Users cannot easily validate a model’s outputs if they don’t know what’s happening under the hood. Furthermore, the opacity of a black box model can hide cybersecurity vulnerabilities, biases, privacy violations and other problems. 

— Matthew Kosinski, What is black box AI? (undated)
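
To make the “black box” point concrete, here is a minimal, purely illustrative sketch in Python. It is my own toy example, not anything from Kosinski’s article or from any real screening product; the function name and the made-up keyword weights are invented for illustration. The point is only that a user sees what goes in and what comes out, never the reasoning in between.

    # Purely illustrative toy "black box" resume screener (hypothetical; not a real product).
    # The caller can observe the input (resume text) and the output (a score),
    # but nothing about how the score was produced.

    def black_box_screener(resume_text: str) -> float:
        """Return a 0-to-1 'hire' score. The internals are hidden from the caller."""
        # In a real deep-learning model, these would be millions of learned parameters
        # that even the model's creators cannot easily interpret. This trivial stand-in
        # exists only to show the shape of the interface: input in, verdict out.
        hidden_weights = {"python": 0.4, "litigation": 0.3, "manager": 0.3}
        score = sum(w for keyword, w in hidden_weights.items() if keyword in resume_text.lower())
        return min(score, 1.0)

    # All a user (or a court) ever sees is this input/output pair:
    print(black_box_screener("Experienced litigation paralegal and office manager"))  # prints 0.6

Swap in a model with billions of parameters instead of three keywords and the interface looks exactly the same; that sameness is the whole problem.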

If that doesn’t scare you, it should. We’re not just talking about unreliable tools. We’re talking about the creeping illusion that the tools are objective truth-tellers — that they don’t lie, don’t guess, don’t confabulate.

But we know that’s not true: they do “lie”; they do “guess”; they do confabulate. And, much of the time, we can’t tell when, because we don’t know how any of it works.

My concern is that once you start to believe the Oracle, you stop questioning the fumes. You don’t even try to peek into the black boxes of their neural nets. In fact, even if a judge ordered the necessary discovery turned over so you could try, how could you or anyone else figure it out?

That’s the real danger: when we forget how these systems are built — from curated datasets, full of human biases and pattern-matching tricks — and we start treating their pronouncements like prophecy.

Confabulations cause hallucinations. Not just in machines, but in us.

Conclusion: Courtrooms Must Be Places of Reality


Brave New World?

When people think of dystopias, they often reach for Orwell’s 1984: a world of surveillance, force, and visible repression. But Aldous Huxley’s vision in Brave New World was more insidious — and more relevant to our AI age.

In Huxley’s world, people weren’t crushed by tyranny. They were seduced by comfort, distraction, and false promises. They chose to stop asking questions. They didn’t need to be forced into silence; they embraced illusion freely.

That’s why Huxley belongs here. When AI confabulates, and we treat its output as oracular truth, we’re not being oppressed. We’re volunteering for the hallucination. Just like in Huxley’s temple of soma and simulated serenity, we’re surrendering, not resisting.

The brave new world of artificial intelligence is already here — and it’s leaking into our courtrooms through evidence, arguments, and even the lips of the dead.

And just like in Huxley’s novel, the danger isn’t the tech itself — or I should say isn’t just the tech itself — it’s how willingly we surrender to it.

But a courtroom is not supposed to be a tech demo.

It’s not a stage for simulated empathy. Not a sandbox for black box code. It’s a place where reality matters. Where people’s lives, freedom, and futures hang in the balance. And if we don’t anchor courtroom decisions to something real — something verifiable and challengeable — then we’re not just letting machines confabulate.

We’re hallucinating temples — our courthouses — not of justice, but of AI’s confabulated “truth.”

Related Artificial Intelligence Posts

  • May 10, 2025
    Confabulations Cause Hallucinations
  • April 28, 2025
    Orphans in Poisoned Libraries
  • April 15, 2025
    Ghosts in the Machine
  • August 31, 2024
    From Fumes to Function
  • May 18, 2024
    Twenty-First Century Delphic Oracle
