A Modest Proposal Concerning AI Hallucinations

By Bexis on March 2, 2026

We added a new site to our blogroll recently – “AI Hallucination Cases,” which describes itself as:

This database tracks legal decisions in cases where generative AI produced hallucinated content – typically fake citations, but also other types of AI-generated arguments. . . .  While seeking to be exhaustive (972 cases identified so far), it is a work in progress and will expand as new examples emerge. . . .

Of the (as of this writing) 972 judicial decisions the site identifies, over two-thirds (676) are from the United States.  Not surprisingly, more than half of the cases involve pro se litigants, but in almost 400 (391) of the listed decisions the perpetrators of AI hallucinations were actual lawyers, who, by definition, are supposed to know better.

Unfortunately, the problem does not seem to be getting any better.  The number of judicial opinions involving actual lawyers (non-pro se) accused of passing off AI hallucinations in filed documents for the incomplete month of February 2026 (it’s 2/23/26 as we write this post) is thirty-three (31 of which were in the USA) – significantly more than a decision a day.  For January 2026, the number was thirty-six (25 of which were in the USA).  For December 2025, the number was fifty-one (36 of which were in the USA).  We didn’t do any detailed counts before then, but anyone who wants to can go to the site and verify our numbers.

A lawyer’s inclusion of AI-hallucinated anything in a filed document is unethical.  It’s a fraud on the court and opposing parties.  It’s a lack of candor with the court.  It’s unprofessional, since at minimum the presence of such hallucinations demonstrates incompetent use of technology.  See ABA Model Rules of Prof. Conduct, Rule 1.1.  In federal court, it’s an open-and-shut violation of Fed. R. Civ. P. 11 for not reading what was cited in signed, filed papers.  The Fifth Circuit, in a recent published opinion, held that “submitting a brief riddled with fabricated quotations” is conduct “unbecoming a member of the bar” that violates F.R.A.P. 46(c).  Fletcher v. Experian Information Solutions, Inc., ___ F.4th ___, 2026 WL 456842, at *2, 6 (5th Cir. Feb. 18, 2026) (citing, inter alia, the same website we mentioned above).

Moreover, it’s stupid as hell.  There’s nothing easier for the other side to identify than legal citations and quotations that don’t in fact exist.  Can any lawyer reasonably believe that the other side isn’t going to review the cases that s/he cites?  There are even AI programs available to detect AI hallucinations, including in legal filings.  Once hallucinations are detected, the party responsible is essentially caught red-handed with virtually no defense.  Then the perpetrator ends up like the attorney in Fletcher, or in this recent prescription medical product liability-adjacent decision:

Plaintiff has submitted dozens of inaccurate factual and legal citations across at least five different filings.  Regardless of any use of generative artificial intelligence, erroneous citations suggest Plaintiff’s counsel submitted these filings without “conduct[ing] a reasonable inquiry into the facts and the law” as required by Rule 11(b).  When confronted with similar situations, courts have ordered the filing attorneys to show cause why sanctions or discipline should not issue.  The Court finds such an order is appropriate here.

Ledoux v. Outliers, Inc., 2026 WL 291023, at *3 (W.D. Wash. Feb. 4, 2026) (citations and quotation marks omitted).

Something needs to be done about this.  We had hoped that the embarrassment stemming from broad public disclosure of the initial instances of attorneys found responsible for AI hallucinations would be deterrent enough.  But our faith in ridicule as a deterrent appears to have been misplaced.  Nor do the minor fines (a $2,500 sanction in Fletcher, for instance) and remedial legal education requirements (the website lists sanctions, where available) that have been imposed appear to have had much effect.  Stronger medicine is needed.  This is rapidly becoming a major problem for the legal profession.  The current wave of lawyers using AI to make up things that don’t exist exposes the profession to widespread derision and worse.

Reluctantly, we have concluded that the temptation to use AI to cut legal corners will remain irresistible unless and until counsel guilty of such behavior suffer significant professional consequences.  After due prior publicity, bar associations should punish attorneys caught with their hand in the AI cookie jar with suspensions from the practice of law.  We think that a first offense should warrant a suspension of one week for each fraudulent citation, quotation, or misrepresentation of a judicial opinion.  Second offenses should be punished at the rate of a month apiece.  If that’s not a sufficient deterrent, then the licensed miscreant is probably not fit to practice law.

That’s our modest proposal to keep AI hallucinations from cannibalizing the legal profession.


JAMES M. BECK is Counsel resident in the Philadelphia office of ReedSmith. He is the author of, among other things, Drug and Medical Device Product Liability Handbook (2004) (with Anthony Vale). He wrote the seminal law review article on off-label use cited by the Supreme Court in Buckman v. Plaintiffs Legal Committee. He has written more amicus briefs for the Product Liability Advisory Council than anyone else in the history of the organization, and in 2011 won PLAC’s highest honor, the John P. Raleigh award. He has been a member of the American Law Institute (ALI) since 2005. He is the long-time editor of the newsletter of the ABA’s Mass Torts Committee.  He is vice chair of the Class Actions and Multi-Plaintiff Litigation SLG of DRI’s Drug and Device Committee.  He can be reached at jmbeck@reedsmith.com.  His LinkedIn page is here.

  • Posted in:
    Food, Drug & Agriculture
  • Blog:
    Drug & Device Law
  • Organization:
    Drug & Device Law Blogging Team