Understanding AI Hallucinations: Making Sure You Don’t End Up At The Wrong Stop

By Stephen Embry on April 10, 2026

Despite what seems to be an accepted truism, AI hallucinations aren't necessarily random. That's the key insight from a new physics-based analysis by a group of scientists and engineers, and it may change how we should be using GenAI tools.

The key finding: GenAI systems have a deterministic mechanism that causes output to flip from reliable to fabricated at a calculable step. And that step arrives exactly when a lawyer's need is greatest: on novel, unsettled legal questions where training data is sparse.

That's both good news and bad news. The good: if failure is somewhat predictable, you know where to concentrate your verification effort. More scrutiny when you're in ambiguous, novel areas; more confidence on well-known, well-documented information.

The bad: the stretch of accurate output that precedes the failure builds false confidence in the uninformed user, making the fabrication harder to catch, not easier.

My post for Above the Law.

  • Posted in:
    Technology
  • Blog:
    TechLaw Crossroads
  • Organization:
    Stephen Embry
