How AI Misled Two US Courts and the Urgent Case for AI Rules in Judging

By Tristan Marot on August 25, 2025

Federal courts in New Jersey and Mississippi recently withdrew published rulings after lawyers discovered glaring factual and legal errors that appear to trace back to unvetted generative‑AI research. The two episodes unfolded within hours of each other at the end of July 2025 and have prompted parties to demand explanations and safeguards.

In In re CorMedix Inc. Securities Litigation, matters took an abrupt turn after the judge’s first opinion, dated 30 June 2025, was posted. That opinion denied the defendants’ motion to dismiss, but defence counsel quickly spotted fundamental errors: several authorities were quoted for propositions they did not contain, other case outcomes were reported the wrong way round, and two statements were attributed to the CorMedix defendants that had never been alleged. Counsel set out the problems in a 22 July letter and expressly stopped short of asking for reconsideration; instead, they invited the Court to “consider whether amendment or any other action should be taken.”

Less than twenty‑four hours later, on 23 July, the judge entered a short text order withdrawing the opinion and its accompanying order in their entirety, directing the clerk to remove both from the docket, and promising that a substitute opinion would follow. The order records that the original materials “were entered in error,” but offers no public explanation.

Although the docket itself is silent on the reason for the mistakes, contemporaneous reporting, citing a person familiar with chambers, said that research produced using artificial intelligence was included in a draft decision that was inadvertently placed on the public docket, contrary to a chambers policy against unauthorised AI use. No filing has yet confirmed or denied that AI was involved; the only official action remains the withdrawal order and the promise of a corrected opinion.

A near‑identical controversy erupted the same week in the Southern District of Mississippi. On 23 July, the judge replaced a temporary restraining order he had issued three days earlier in a constitutional challenge to the State’s ban on diversity, equity, and inclusion (DEI) programmes. Five days later, the Mississippi Attorney General’s office filed a motion asking the court to reinstate both versions on the docket and to explain how “significant substantive errors” crept into the first order. Those errors included naming parties who are not in the case, citing declarations that do not exist, and quoting statutory language that is nowhere in the challenged law.

The motion stresses that the mistakes “cannot be dismissed as typographical or scrivener’s errors” and that the parties and public “are due an explanation,” echoing the concerns raised in the New Jersey matter. While no admission has surfaced in Mississippi about AI involvement, the nature of the defects, namely fictional declarations, misidentified litigants, and unsourced quotations, tracks the kind of hallucinations that large‑language‑model tools can generate.

Viewed side by side, the New Jersey and Mississippi withdrawals are likely only the first signs of the problems that irresponsible use of AI will cause within judicial systems. South African judges, already grappling with daunting caseloads and tight turnaround expectations, will find the lure of generative‑AI tools impossible to ignore. The UK judiciary has confronted the issue, publishing guidance for judicial officers that covers responsible use of AI, confidentiality, accuracy, bias, security, and accountability. Consultation meetings are under way within the South African judiciary, and working groups elsewhere on the continent already have drafts circulating, yet progress remains frustratingly slow.

These American judicial missteps are unlikely to be the last, or even the most serious. They are simply the first to be caught in public view as courts worldwide experiment, often quietly, with AI‑assisted drafting. Unless judiciaries act now to adopt clear, transparent rules that define acceptable use, mandate rigorous fact‑checking, and create audit trails, each hurried judgment risks becoming the next cautionary tale. The very backlog judges hope AI will relieve could instead deepen if decisions have to be withdrawn and corrected, and public confidence in the reasoning behind judicial opinions is allowed to erode.

  • Posted in:
    Financial
  • Blog:
    Financial Institutions Legal Snapshot
  • Organization:
    Norton Rose Fulbright
