Using ChatGPT to Interpret Insurance Policies? Eleventh Circuit Opens the Door to AI’s Role in Policy Interpretation

By Justin P. Gunter & Harneet Kaur on July 17, 2024

While recently resolving an insurance coverage dispute in Snell v. United Specialty Insurance Company, 102 F.4th 1208 (11th Cir. 2024), an Eleventh Circuit concurring opinion discussed the potential use of artificial intelligence large language models to interpret policy terms.

The case concerned whether a landscaper’s insurance policy — which generally provided coverage for claims arising out of the insured’s “landscaping” — covered installation of a “ground-level trampoline.” The district court granted summary judgment to the insurer, and the Eleventh Circuit affirmed. Judge Kevin Newsom’s concurrence joined the majority’s opinion in full, but questioned whether an AI “large language model” (or “LLM”) could aid courts in interpreting policy provisions:

Here’s the proposal, which I suspect many will reflexively condemn as heresy, but which I promise to unpack if given the chance: Those, like me, who believe that “ordinary meaning” is the foundational rule for the evaluation of legal texts should consider—consider—whether and how AI-powered large language models like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude might—might—inform the interpretive analysis. There, having thought the unthinkable, I’ve said the unsayable.

Judge Newsom explains that he found dictionary definitions of “landscaping” unhelpful, and on a whim he asked a law clerk to ask ChatGPT, “What is the ordinary meaning of ‘landscaping’?” ChatGPT responded that:

“Landscaping” refers to the process of altering the visible features of an area of land, typically a yard, garden or outdoor space, for aesthetic or practical purposes. This can include activities such as planting trees, shrubs, flowers, or grass, as well as installing paths, fences, water features, and other elements to enhance the appearance and functionality of the outdoor space.

The court ultimately resolved the case without determining whether installation of the ground-level trampoline constituted “landscaping.” Nevertheless, Judge Newsom’s concurrence favorably compared AI to more traditional interpretive sources in defining the “ordinary meaning” of a word. Among other benefits, he explained that large language model AIs could improve upon other interpretive sources because:

  • LLMs train on ordinary language – LLMs “learn” from a vast amount of data — approximately 400-500 billion words — that reflect how words are used and understood in everyday life.
  • LLMs “understand” context – LLMs can recognize and contextualize words – for example, understanding and appropriately using the difference between a “bat” referring to a flying mammal and a “bat” used by a baseball player.
  • LLMs are accessible – Most LLMs are easy to use and are either freely available or available at a nominal cost.

On the other hand, potential downsides include the risk of AI “hallucination,” LLMs’ inability to account for offline speech or underrepresented populations, the risk of source data manipulation, and the dystopian fear of AI “robo judges.” At the end of the day, Judge Newsom offered an equivocal answer to his own question about whether LLMs could be a tool for legal interpretation: “Having initially thought the idea positively ludicrous, I think I’m now a pretty firm ‘maybe.’”

Judge Newsom’s concurrence, while not fully endorsing LLMs for policy interpretation, suggests significant implications. First, it opens the door to using AI in policy interpretation. Counsel should no longer be surprised if an opponent’s brief cites to an LLM, and should look for opportunities to use LLMs, in addition to more traditional sources, to support their own analysis. Despite expressing reservations, Judge Newsom recommends that “a cautious first use of an LLM would be in helping to discern how normal people use and understand language, not in applying a particular meaning to a particular set of facts to suggest an answer to a particular question.” For example, LLMs could help illustrate different reasonable interpretations of policy terms and demonstrate ambiguities by exposing multiple plausible interpretations. This is particularly important because, in many states, ambiguities in an insurance policy must be “strictly construed against the insurer.” See, e.g., Old Republic Nat’l Title Ins. Co. v. RM Kids, LLC, 352 Ga. App. 314, 318 (2019).

Second, LLMs highlight limitations of traditional sources, like dictionaries, in interpreting policy terms. Dictionaries provide precise, narrow definitions that often diverge from everyday, colloquial use, and they struggle with new words and evolving meanings. LLMs face no such difficulties. They draw from diverse source material — “from Hemingway novels and Ph.D. dissertations to gossip rags and comment threads” — and inherently incorporate modern and informal usage. Judge Newsom cites the term “landscaping” as an example. Traditional dictionary definitions focused on natural features and aesthetics, and excluded — or at least minimized — other activities that a reasonable person might consider “landscaping,” like regrading a yard, adding drainage, or installing outdoor walkways or lighting fixtures. The ChatGPT definition, in contrast, encompassed both “aesthetic or practical” activities that modified natural or artificial aspects to “enhance the appearance and functionality of the outdoor space.”

Third, the concurrence highlights the importance of counsel understanding the potential uses, and more importantly abuses, of LLMs in litigation. Litigants might design specific queries (often called “prompts”) to extract a desired outcome from an LLM, or may “shop around” among different LLMs to find the most favorable response. Given the novelty of the technology, counsel and jurists must be vigilant. The concurrence recommends as a “best practice… full disclosure of both the queries put to the LLMs… and the models’ answers.”

In conclusion, Judge Newsom’s cautious approach to the use of AI large language models for policy interpretation may well be overshadowed by his concluding observation “that AI is here to stay.” Judicial attitudes toward AI are shifting from hostility to curiosity, and counsel should take note of Judge Newsom’s recommendation that now “is the time to figure out how to use it profitably and responsibly.”

Justin P. Gunter
Justin Gunter is a partner in the firm’s Litigation Practice Group. Justin represents clients in a wide range of complex and technical litigation matters, including securities litigation, antitrust litigation, class actions, insurance coverage disputes, and cases arising under the Uniform Commercial Code. His clients include banks and broker-dealers, as well as clients in the technology, healthcare, and logistics industries. His strategic approach to litigation ensures that his clients’ interests are proactively protected and advanced in litigation.

Harneet Kaur

Harneet Kaur is an associate in Bradley’s Litigation Practice Group. Her practice is focused on complex commercial litigation, dispute resolution, international arbitration, and labor and employment matters.

  • Posted in:
    Insurance
  • Blog:
    It Pays to Be Covered™
  • Organization:
    Bradley Arant Boult Cummings LLP