Today’s guest post is by Reed Smith’s Jamie Lanphear. Like Bexis, she follows tech issues as they apply to product liability litigation. In this post, she discusses a pro-plaintiff piece of legislation recently introduced in Congress that would overturn the current majority rule that electronic data is not considered a “product” for purposes of strict liability, and impose product status on such data nationwide. As always, our guest posters deserve 100% of the credit (and any blame) for their writings.
**********
Taking a page from the EU playbook, Senators Dick Durbin and Josh Hawley recently introduced the AI LEAD Act, a bill that would define AI systems as “products” and establish a federal product liability framework for such systems. The structure is strikingly reminiscent of the EU’s new Product Liability Directive (“PLD”), which we previously unpacked at length here, here, and here. Unlike the EU (aside from Ireland), however, the United States has a common-law system and a history of considering product liability to be a creature of state law. Federal legislation to create uniform, substantive product liability principles for the entire country appears unprecedented, although an attempt forty-some years ago was filibustered to death.
Like the 1980s attempt, this bill is unlikely to pass in its current form (read it and you’ll understand why), but its introduction still matters. It signals a policy tilt toward treating AI as a product, an issue U.S. courts have been wrestling with.
Before diving in, a word on tone. The bill’s findings assert that AI systems, while promising, have already caused harm, and cite tragic incidents involving teenagers who allegedly died after “being exploited” by AI chatbots. S. 2937, 2025, p. 2. That is likely a nod to ongoing AI chatbot litigation—where no court has yet adjudicated liability. Building a federal framework on such “findings,” without established liability, illustrates how current sentiment could shape future law—even if this bill never becomes one.
Key Provisions of the AI LEAD Act
The bill lifts familiar product liability doctrines into an AI-specific statute, then tweaks them in ways that matter. The devil, as always, is in those tweaks.
- First, causes of action. The bill would create four routes to liability: negligent design, negligent failure to warn, breach of express warranty, and strict liability for a “defective condition [that is] unreasonably dangerous.” Conspicuously absent is a standalone manufacturing defect claim. And unlike most state court regimes that parse strict liability by defect type, the bill would package strict liability into a single “defective condition” bucket, raising the obvious question of what counts as “defect” in a software context.
- Second, noncompliance as defect. Under the bill, noncompliance with applicable safety statutes or regulations would deem a product defective with respect to the risks those rules target. Compliance, by contrast, would be only evidence—it would not preclude a finding of defect. That asymmetry resembles how the new EU PLD treats noncompliance with safety regulations, though the LEAD Act is more aggressive: noncompliance establishes defect outright rather than merely creating a presumption of it.
- Third, nontraditional harms. “Harm” would include not only personal injury and property damage but also “financial or reputational injury” and “distortion of a person’s behavior that would be highly offensive to a reasonable person.” S. 2937, Sec. 3(8). That expansion raises more questions than answers. What exactly is “behavioral distortion”? How is it shown? And how would reputational injury be defined within the context of AI products?
- Fourth, circumstantial proof of defect. Courts could infer defect from circumstantial evidence where the harm is of a kind that “ordinarily” results from a product defect and is not solely due to non-defect causes. That is a familiar evidentiary concept in product cases involving shattered glass and exploding widgets, but translating “ordinarily” to AI, where baseline failure modes, expected outputs, and user modifications are still being defined, will be harder. What “ordinarily” happens with a large model depends on the training corpus, the guardrails, the deployment environment, and the prompt. In other words, results can vary widely, so it’s hard to say that any particular outcome is ordinary. However “ordinarily” ends up being construed, the provision would expand liability beyond the usual state-law “malfunction theory” formulation, which requires plaintiffs to exclude reasonable secondary causes rather than merely show that non-defect causes were not the sole cause of the harm.
- Fifth, liability for deployers. The law would extend liability to deployers, not just developers. “Deployers”—those who use or operate AI systems for themselves or others—could be liable as developers if they substantially modify the system (i.e., make unauthorized or unanticipated changes that alter the system’s purpose, function, or intended use) or if they intentionally misuse it contrary to intended use and proximately cause harm. Separately, if the developer is insolvent, beyond the reach of the court, or otherwise unavailable, a deployer could be held liable to the same extent as the developer, with indemnity back against the developer if feasible. That tracks the EU trend of extending exposure across the supply chain.
- Sixth, a federal cause of action and limited preemption. The bill would create a federal cause of action that could be brought by the U.S. Attorney General, state AGs, individuals, or classes. It would allow injunctive relief, damages, restitution, and the recovery of reasonable attorneys’ fees and costs. On preemption, the bill would supersede state law only where there is a conflict, while expressly allowing states to go further. That is not a clean sweep of state law; it is a floor, not a ceiling.
- Seventh, a foreign-developer registration hook. Foreign developers would need a designated U.S. agent for service of process before making AI systems available in the U.S., with a public registry maintained by DOJ and injunctive enforcement for noncompliance.
The Bigger Picture: Software Is Marching Toward “Product”
The AI LEAD Act fits a global trend of treating software and AI as products subject to strict liability. The EU’s rebooted PLD makes this explicit. This bill points in the same direction and, in places, pushes harder. That matters because, as the Blog discussed in a previous post, U.S. courts have traditionally treated software as a service, which often kept strict liability theories off the table. Recent decisions, however, have nudged the law in the other direction, allowing product liability claims to proceed against software and AI systems. Bexis just finished a law review article on this subject. A federal statute that codifies AI as a “product” would accelerate that shift, harmonize some rules, and upend others.
Conclusion: What’s Next
While unlikely to pass as written, the AI LEAD Act is further evidence that AI and software are entering a new phase in the world of product liability law. The bill reflects a growing interest in regulating AI through a product liability lens. For companies developing or deploying AI, the practical takeaway at this stage is simple: keep watching. Whether or not the AI LEAD Act advances, the center of gravity is moving toward treating at least some AI functionality like a product.