Emerging Legal Challenges: Artificial Intelligence and Product Liability

By Kimberly Chew, Hilda Akopyan & Nick Morgan on October 20, 2025

The bar is rising for developers of generative artificial intelligence (AI) platforms and other companies that use generative AI in public-facing applications. As AI becomes more integrated into everyday products and services—and as litigation involving these uses evolves—avoiding legal liability and maintaining regulatory compliance will be a moving target, one the industry will need to track closely.

Recent litigation and legislation have highlighted how traditional product liability theories, such as design defect and failure to warn, are being tested and redefined in the AI context. Because AI platforms’ system operations and decision-making can be opaque even to their creators—AI’s so-called “black box”—it is inherently difficult to assess liability, assign responsibility, and anticipate the full range of potential harms. Nonetheless, federal and state policymakers are introducing legislation at a dizzying pace. According to some observers, over 1,000 bills have been introduced by federal and state legislators during the 2025 legislative session.

Notable among these efforts, the Senate Judiciary Committee held hearings on September 17, 2025, on the harms of AI chatbots. Based on the testimony from that hearing, Senators Josh Hawley (R-MO) and Dick Durbin (D-IL) introduced the Aligning Incentives for Leadership, Excellence, and Advancement in Development (AI LEAD) Act. The proposed legislation classifies AI systems as products and creates a federal cause of action for product liability claims when an AI system causes harm. The bill aims to ensure that AI companies are incentivized to design their systems with safety as a priority, not as a secondary concern behind deploying the product to market as quickly as possible.

One of the proximate events leading to congressional action was a recent high-profile lawsuit filed in California, in which the parents of a teenager allege that an AI-powered chatbot engaged in a series of conversations with their 16-year-old son while he was experiencing a mental health crisis. According to the complaint, the chatbot validated the teenager’s feelings of despair and, over a period of months, provided increasingly specific guidance on methods of self-harm.1 The parents allege that the chatbot failed to intervene or de-escalate the situation—even after being shown evidence of physical injury—and ultimately assisted in drafting a suicide note. The teenager died by suicide in April 2025.

The parents of the deceased teenager sued the company responsible for the AI-powered chatbot, alleging that the AI system identified their son as being in crisis yet encouraged self-harm and isolation from family and peers over the course of their conversations. They further allege that the product prioritized engagement over safety. Plaintiffs’ causes of action include (1) strict liability (design defect), (2) strict liability (failure to warn), (3) negligence (design defect), (4) negligence (failure to warn), (5) violation of California’s Unfair Competition Law (UCL), (6) wrongful death, and (7) survival action.

The parents assert strict products liability based on the allegation that the company knew the software was defective at the time it was deployed as the company’s AI system. They further allege that the defendant failed to warn users that the product prioritized engagement over user safety, and that the product contained a design defect causing it to validate users experiencing dangerous mental health crises, particularly suicidal ideation.

This high-profile litigation has been characterized as the “first known wrongful death suit” against an AI platform, and we anticipate allegations premised on product liability theories will grow in number and sophistication.

Looking Ahead to Future Developments

With this attention on AI products, what does this mean for companies developing AI software or commercializing AI-enabled products? First, it is crucial to ensure that product safety measures are in place to protect consumers. Lawmakers are moving quickly to enact new frameworks. The proposed federal AI LEAD Act would, for the first time, explicitly classify AI systems as products and create a federal cause of action for AI-related product liability. At the state level, California is poised to adopt SB 243, which would impose stringent requirements on companies operating AI-powered chatbots, including mandatory risk assessments, transparency obligations, and proactive risk mitigation measures. Similarly, Colorado’s AI Act and the EU AI Act both reflect a global trend toward comprehensive, risk-based regulation of AI systems, with significant penalties for non-compliance and a strong emphasis on consumer protection. For a deeper look at California’s emerging regulatory regime and the EU AI Act, see California Legislature Advances Sweeping AI Bill: Implications for Businesses and Developers of “Companion Chatbots”.

While this software has been expertly designed to capture human attention, engage users, and keep them coming back by validating their feelings, proposed legislation clearly aims to balance engagement with safety. For companies, this means the standard is being raised. Beyond implementing robust safety protocols and monitoring user interactions for signs of harm, organizations must be prepared to conduct regular bias audits, document risk assessments, and provide clear disclosures about how their AI systems work and what risks they may pose.

Companies should also review and update their internal policies and governance structures to ensure compliance with emerging laws in all relevant jurisdictions. Staying informed about legislative developments—such as the status of California’s SB 243 or the EU AI Act—is also critical. Lastly, working with experienced attorneys can help navigate the complexities of the evolving AI and product liability space.

Kimberly Chew

As a former life sciences researcher, Kimberly utilizes her ability to understand and explain complex topics to advise clients as to regulatory and legal liability issues. With a focus on litigation and discovery, she manages the discovery program for complex product liability and toxic tort cases as national counsel.

Kimberly works with the firm’s innovative and award-winning Asbestos Litigation team in helping clients manage their risk profiles and in defending large national dockets. Businesses, academic and medical research centers, premises owners, contractors, and manufacturers are among those who rely on Kimberly’s broad range of litigation and regulatory experience. As national coordinating counsel, she coordinates the discovery for thousands of cases—drafting, responding to and preparing discovery and trial support documents for complex cases throughout the nation.

Kimberly has advised clients in matters of product liability, environmental matters, and legal matters regarding Schedule I controlled substances that involve laws such as the Controlled Substances Act, healthcare and research study regulations under the California Business & Professions Code, requirements relating to DEA licensing and registration of controlled substances, CERCLA/Superfund, RCRA, California Proposition 65, and other alleged solid and liquid hazardous waste violations in response to federal and state agency inspections.

Kimberly is the co-founder and co-lead of the firm’s Psychedelic and Emerging Therapies practice group. Kimberly’s background in biotech startups as a research scientist gives her a unique perspective in emerging areas of law such as psychedelic therapeutics. Psychedelics have garnered headline news as breakthrough medicines for the treatment of psychiatric conditions, including PTSD and major depression. Kimberly keeps current on the academic and commercial research and drug discovery efforts in this rapidly developing field so as to assist clients in navigating and complying with complex regulatory issues and protecting and commercializing intellectual property.

Knowledgeable and energetic, Kimberly is known for identifying key strategies in challenging cases, then easily communicating those ideas to clients and legal teams in order to move forward with the best solutions.

Prior to her admission into the state bar, Kimberly externed with the California Attorney General’s Office.

Hilda Akopyan

Hilda is a Senior Associate in Husch Blackwell’s Mass Tort & Product Liability group.

Nick Morgan

Nick is an Attorney in the Mass Tort & Product Liability group.

  • Posted in:
    Personal Injury
  • Blog:
    Product Perspective: Complex Tort & Product Law
  • Organization:
    Husch Blackwell LLP
