Court Sanctions Highlight Potential Risks of Using Unchecked AI in Litigation

By Kathryn C. Cole on March 10, 2026

In February, a federal judge in the Southern District of New York issued case-ending sanctions against an attorney who failed to “learn from his mistakes” and repeatedly submitted filings containing false, AI-generated citations to the court. The judge emphasized in her order that while she does not oppose using AI to help with research and writing, she “must take a stand where, as here, counsel repeatedly file[d] submissions with false citations” because counsel refused to comply with his ethical obligations and verify his submissions.

In Flycatcher Corp. Ltd. et al. v. Affable Avenue LLC et al., Judge Katherine Polk Failla issued a decision and order spanning more than 30 pages that may serve as a cautionary tale about the risks of using AI in litigation. In the decision, Failla discussed several instances in which the attorney filed documents containing references to nonexistent cases or misattributed quotes and observed that the attorney was clearly undeterred by the threat of sanctions from the court.

Failla indicated "the trouble began" for the attorney when he submitted a motion to dismiss containing numerous false citations. The court issued an order to show cause directing the attorney to demonstrate why the court should not strike his motion to dismiss. In response, the attorney claimed the errors "resulted from sophisticated AI hallucination mechanisms rather than intentional misconduct." The court characterized that response as a submission replete with florid prose, including references to Ray Bradbury's "Fahrenheit 451" and to scribes "in the ancient libraries of Ashurbanipal," which "raised the court's eyebrows." Indeed, the court concluded the submission itself appeared to have been created by generative AI and contained a false citation. The court therefore set a hearing for the attorney to explain himself.

A few days prior to the court’s hearing on the order to show cause, “more bad news quickly followed,” as the attorney submitted a proposed reply brief in further support of his motion to dismiss. Once again, his brief contained false citations. At the August 22, 2025, hearing Failla set for the attorney to explain his false citations, the attorney could not respond directly to, much less answer, the court’s questions about his submission. Rather, the court noted, “[the attorney] struggled to make eye contact with the court and described an approach to legal research that was redolent of Rube Goldberg,” leaving “the court … to draw its own conclusions.”

In her decision, Failla painstakingly detailed the procedural history of the case to make a point: the attorney "was not dissuaded by court orders or the threat of sanctions from filing unchecked, AI-generated submissions with false legal citations." Failla then sanctioned the attorney pursuant to Federal Rule of Civil Procedure 11 and the court's inherent powers: she struck his client's submissions, entered default judgment against his client, and awarded opposing counsel his fees, finding that the attorney had acted in bad faith and "multiplie[d] the proceedings in [this] case unreasonably and vexatiously" within the meaning of 28 U.S.C. § 1927.

This case serves as a reminder that the increasing use of AI to generate legal documents poses meaningful integrity and accuracy risks. With that in mind, attorneys remain accountable for their filings and should review all AI-generated legal documents to maintain accuracy and integrity in judicial proceedings. Attorneys and their firms should consider taking steps to safeguard against errors, bias, fabrication, and reliance on fictitious authority from generative AI systems, and may need to take additional steps to protect client confidentiality and preserve claims of privilege and protection, where applicable, in court dockets.

Kathryn C. Cole

Kathryn C. Cole represents large and small businesses, financial institutions, and individuals in virtually all aspects of federal and state court commercial litigation, arbitration and mediation, and before federal agencies and regulatory bodies. In addition to advising on electronic data and cyber-related issues, Katy has considerable experience in all areas of complex litigation including contract claims, product liability claims, tort claims, consumer class-action claims and securities class-action claims.

  • Posted in:
    E-Discovery
  • Blog:
    eDiscovery Watch
  • Organization:
    Greenberg Traurig, LLP

Copyright © 2026, LexBlog. All Rights Reserved.