As usual, I had a terrific experience at DRI’s annual Insurance Coverage and Practice Symposium in midtown Manhattan, held last week. In many ways, I went primarily for two particular presentations, one on generative AI and the other on the impact of nuclear verdicts on insurance coverage and bad faith issues, although other presentations were informative and valuable as well.

The AI presentation, ably presented by Lewis Wagner’s Meghan Ruesch and Melissa Fernandez of Travelers, was oriented towards the question of how AI would impact insurance, with discussion of the new claims against insureds it may give rise to, how and whether such claims will be covered by standard industry policy language, and how generative AI may be employed in, or otherwise affect, claims handling by insurers. On the first issue, the presenters emphasized the range of claims that can be expected to arise, running from the obvious – copyright infringement claims against insureds using generative AI apps or other tools to create media – to the less obvious, such as invasion of privacy through the creation of so-called deepfakes or through other AI-driven activity. Your mileage may vary, but I think that any imaginative lawyer with a basic knowledge of how AI works can likely envision a nearly limitless range of potential claims against users of generative AI tools and distributors of AI-crafted output.

What jumped out at me about this particular topic, however, wasn’t the range of potential claims but the extent to which the law itself – both statutory and common law – has a long way to go before doctrines, evidentiary approaches, or the recognized elements of claims are in sync with the coming of claims based upon generative AI. We simply don’t know at this point what the elements of various causes of action should look like with regard to harms caused by generative AI, nor what new causes of action will have to be recognized to account for them. In effect, we are about to try to handle generative AI based claims with legal doctrines designed for, and resulting from, a much different age. It’s akin to the courts trying to handle claims arising from locomotives, cross-country railroads and automobiles with a body of law still based on an agrarian society and the horse and buggy. The law eventually caught up, but it took a while. So too the law will eventually catch up here – but again, it will take time, and many judicial decisions, before the parameters of claims arising from the use of generative AI are clear and widely agreed upon.

With regard to the presenters’ second issue, namely the extent to which such claims will be covered by standard policy language, of course the final answer to that question won’t be known until after the exact nature and elements of such claims have been at least substantially developed in future judicial decisions. However, the nature of insurance, obviously, is protection against the unknown, and thus both insureds and insurers will have to muddle through on the question of the scope of coverage even while the nature and elements of the claims themselves are still being developed.

For now, though, what caught my attention in this regard was the focus on the extent to which claims arising from generative AI may end up being shoehorned into the coverage granted by advertising injury insuring grants in policies. I believe it was all the way back in 1992 when I wrote an article for an insurance industry publication titled “The Expanding Scope of Advertising Injury Coverage,” addressing the extent to which, even then, this coverage grant was being broadly read by courts to provide coverage beyond what the industry believed was its intended scope. The more things change, the more they stay the same, I guess – back then, we didn’t even have iPhones but were discussing the propriety of a broad reading of the scope of this coverage, and today we are doing the same, only with artificial intelligence, something that when I wrote the article in 1992 was the stuff of science fiction (shout out to HAL goes here).

At this point, one has to ask why the industry hasn’t revised the language of advertising injury coverage, or removed it from policies entirely, rather than deal with the uncertainty of constant questions as to whether it provides coverage for the latest trend, this time claims arising from generative AI. The only answer I can really come up with at this point is twofold. First, insureds expect such coverage to exist in their policies (or maybe their brokers do, as I have to think that only insureds with particularly sophisticated risk management departments give this particular coverage much thought at all when acquiring coverage). Second, underwriters have long since figured out how to price this uncertainty into providing this coverage (of course, the underwriters who signed off on policies with sudden and accidental pollution exclusions in them a generation ago probably thought the same thing, and look how well that worked out for the industry).

As for the presenters’ third point – the extent to which the use of generative AI will affect claims handling – there can be little doubt that, at some point, a generative AI tool will be marketed that effectively does much of the basic factual work of claims handling and removes it from the province of adjusters themselves. I have to say, though, that in light of so-called hallucinations and the like, that day is not yet here. One of the presenters gave an example of having asked ChatGPT to summarize the My Pillow litigation, and it returned a summary with causes of action that were clearly and wildly inaccurate. Now imagine a claims adjuster handling a claim based on a similarly erroneous characterization of the underlying case against the insured, generated by a generative AI product used by an insurer, and you see the problem. Take, for instance, the adjuster’s job of deciding whether the company should provide a defense or not – the law in most jurisdictions requires that a determination of the duty to defend be based on the claims pled in the complaint, not on what ChatGPT might hallucinate those claims to have been.

But what about when there is eventually an effective – in the sense of being accurate – generative AI tool available to the industry, one that assumes much of the routine fact gathering and even potentially the evaluation of a claim? Bad faith lawyers will have a field day testing the analyses of such tools against the legal obligation of fair and reasonable claims handling imposed on insurers, and arguing over whether implicit biases, overweighting of one factor or another in the software or underlying algorithm, or prejudice in the triggering prompt renders conduct by an insurer based on that tool unreasonable and in bad faith. The only saving grace I can think of right now in this regard is that this may be too subtle a task for many of the bad faith lawyers we see, who currently tend to apply a much more formulaic approach to building out bad faith cases against insurers. But give them time – they’re smart people, and they will figure it out.

Bad faith claims, of course, are a common thread running through almost every issue confronting the insurance industry, and the discussion of bad faith in the context of generative AI flowed naturally into the discussion of nuclear verdicts by Wendy Stein Fulton of Kiernan Trebach and Sonia Valdes of Medmarc. I have written extensively about this issue in the past and handled the bad faith and intra-insurer disputes within a coverage tower that arose from a substantial nuclear verdict in Massachusetts, so I have my own particular interest in these issues, and my views don’t necessarily align with those of other authors and speakers on the subject. That said, however, both speakers did a terrific job with the topic, demonstrating – statistically and convincingly – the rise in nuclear verdicts over the past decade. More interestingly, they recognized that most lawyers have never seen a nuclear verdict, or watched how one comes into existence. This is an important point, because nuclear verdicts do not happen in a vacuum, nor are they the result of traditional approaches to calculating damages as most lawyers understand them (in the nuclear verdict in Massachusetts in which I handled the subsequent coverage and bad faith disputes, for instance, the approach to damages taken by the jury certainly did not correspond to the way I was taught as a young lawyer to calculate the range of reasonably likely damages under the elements of Massachusetts’ wrongful death statute, nor with how most Massachusetts lawyers have done it since time immemorial). To account for that, the presenters gave a very detailed demonstration of exactly how the jury in a recent nuclear verdict arrived at such a large number.

Their presentation was excellent, but I wanted to comment on one particular point where my thinking on nuclear verdicts departs from that of the presenters. No matter how many statistical data points are presented showing that nuclear verdicts are increasingly not outlier events, and no matter how many other factors are also present (I put a lot more stock in certain social inflation factors and the cultural impact of ostentatious displays of wealth in this country as substantial contributing factors in this phenomenon than do most commentators), what I find in the cases in my own bad faith docket where either nuclear verdicts have occurred or the risk of one occurring has led to settlement is the existence of unique, sui generis fact patterns that placed the insured (and thus the insurer) at great risk. This was true in particular of the example used in the speakers’ presentation to demonstrate the making, so to speak, of the sausage of a nuclear verdict – you wouldn’t find a repeat of that fact pattern in a court in this country in a thousand years.

My point in beating this particular drum, which is one I have been beating for years, is that understanding the unique confluence of factors that give rise to nuclear verdicts is crucial to accurate, thoughtful and proper claims handling during the time that a potentially explosive claim is pending, when the eventuality of a nuclear verdict can still be avoided. Trying to put Humpty Dumpty back together again after a jury has returned a nuclear verdict is a lot harder than evaluating and, if need be, settling a claim before that can happen. Avoiding a nuclear verdict, though, requires paying close attention to the details of the claim while it is progressing and being aware of whether it presents the type of scenario that might give rise, at trial, to a nuclear verdict. If you miss the warning signs, you also miss the opportunity to avoid the myriad problems for insurers that are triggered by a nuclear verdict, problems that only begin with paying off the judgment and spiral from there into bad faith and coverage problems.

Stephen Rosenberg

Stephen has chaired the ERISA and insurance coverage/bad faith litigation practices at two Boston firms, and has practiced extensively in commercial litigation for nearly 30 years. As head of the Wagner Law Group’s ERISA litigation practice, he represents plan sponsors, plan fiduciaries, financial advisors, plan participants, company executives, third-party administrators, employers and others in a broad range of ERISA disputes, including breach of fiduciary duty, denial of benefit, Employee Stock Ownership Plan and deferred compensation matters.