Evil AI in Court Proceedings

By Doug Cornelius on February 3, 2025

A Minnesota court threw out an expert declaration because it was generated with AI and included fake references. The case involved Minnesota Stat. § 609.771, which bans the use of deepfake media to influence elections.

Jeff Hancock, a misinformation specialist and a Stanford University communication professor, used fake article citations generated by AI to support the state’s arguments. Hancock subsequently admitted that his declaration inadvertently included citations to two non-existent academic articles and incorrectly cited the authors of a third article. In his defense, Hancock told Judge Provinzino that he used ChatGPT-4 while drafting his declaration and could not explain precisely how these AI-hallucinated citations got into it.

As Judge Provinzino said:

“The irony. Professor Hancock, a credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less.”

Blaming it on the AI may be this decade’s excuse, replacing “the dog ate my homework.”

Sources:

  • Judge Blasts Stanford AI Expert’s Credibility Over Fake, AI-Created Sources
  • Judge rebukes Minnesota over AI errors in ‘deepfakes’ lawsuit
  • Order Excluding Expert Testimony
  • Maryke Sher-Lun LinkedIn Post

