
European Parliament Study Recommends Strict Liability Regime for High-Risk AI Systems

By Anna Oberschelp de Meneses, Louise Freeman, Dan Cooper & Matsumoto Ryoko on August 22, 2025

On July 24, 2025, the European Parliament (EP) published a study entitled Artificial Intelligence and Civil Liability – A European Perspective. The study considers some of the EU’s existing and proposed liability frameworks, notably the revised Product Liability Directive (PLDr) and the AI Liability Directive (AILD), which was proposed by the European Commission only to be later withdrawn. The study concludes that neither instrument sufficiently addresses the full scope of product liability risks and defects uniquely posed by high-risk AI systems, as that concept is defined by the EU AI Act. Therefore, it calls for the creation of a dedicated strict liability framework, specifically designed to tackle the particular liability risks that these systems are said to give rise to. While it is too early to predict whether other key European stakeholders will support such a framework and bring it to fruition, this development is an important one to monitor closely for those creating or working with high-risk AI systems.

What does the study propose?

The EP’s study proposes a strict liability framework, potentially in the form of an EU regulation, to address “physical or virtual harm” caused by a high-risk AI system, including damage to the AI system itself. Liability would fall on providers and/or deployers of such systems, depending on their degree of involvement. These parties could not escape liability by arguing that they acted with due diligence, or that the relevant physical or virtual harm was caused by an autonomous activity, device or process driven by their AI system, except in cases of force majeure or, potentially, where the harm resulted from the “reckless behaviour” of the plaintiff.

The study argues that providers and/or deployers of high-risk AI systems, as professional parties serving many users, would be well positioned to investigate incidents, address recurring issues through contractual or market mechanisms, and consolidate claims—such as suing manufacturers under existing product liability rules—thereby reducing litigation and transaction costs. While specific disclosure requirements are said to be unnecessary under a strict liability regime, the study suggests that limited cooperation obligations between litigants could help to streamline the legal process. These cooperation obligations would focus on the exchange of relevant information and evidence between the primary parties involved in the dispute, namely the defendant (provider or deployer) and the claimant (injured party).

How would this strict liability framework differ from the PLDr and the withdrawn AILD?

The EP’s proposed strict liability framework would differ from the PLDr primarily in its broader scope of covered damages and in being tailored to high-risk AI systems only. Under the PLDr, producers are strictly liable for certain harm caused by a defective product, which may include an AI system, regardless of fault, as discussed in this earlier blog post. However, the PLDr excludes damage to the AI system itself and generally retains the development-risk defence, which allows producers to avoid liability if the defect was undiscoverable based on the state of scientific knowledge at the time the product was marketed. By contrast, the EP proposal explicitly covers “any harm or damage that was caused by a physical or virtual activity, device, or process driven by the AI system”, including damage to the AI system itself. Compensable damages under the proposal would be limited neither by predefined categories nor by monetary caps, and the proposal does not contemplate retaining the development-risk defence. Additionally, while the PLDr relies on procedural tools such as rebuttable presumptions of defect and court-ordered disclosure of evidence, the EP proposal envisages more limited information exchange obligations.

The EP’s proposed strict liability framework for high-risk AI systems also differs from the withdrawn AILD, as the latter sought to harmonize procedural aspects of national tort law to support fault-based AI liability claims. The AILD focused, in large part, on easing the burden of proof for claimants by introducing rebuttable presumptions of fault and causality and promoting inter-party disclosure mechanisms, but retained the core fault-based liability principle within existing national legal systems. In contrast, the EP’s strict liability proposal eliminates the need for a plaintiff to prove fault, instead predicating a defendant’s liability solely on the occurrence of harm caused by their high-risk AI system.

*      *      *

Covington will continue to closely monitor legal and policy developments relating to AI liability in the EU, including the implementation of the revised Product Liability Directive and the evolving debate on a dedicated strict liability regime for high-risk AI systems. If you have any questions about the issues discussed in this article, please do not hesitate to contact members of our Commercial Litigation, Public Policy, or Privacy and Cybersecurity teams.

Louise Freeman

Louise Freeman focuses on complex commercial disputes, and co-chairs the firm’s Commercial Litigation Practice Group. Described by Legal 500 as “one of London’s most effective partners,” Ms. Freeman helps clients to navigate challenging situations in a range of industries, including financial markets, technology and life sciences. Most of her cases involve multiple parties and jurisdictions, where her strategic, dynamic advice is invaluable.
Dan Cooper

Daniel Cooper heads up the firm’s growing Data Privacy and Cybersecurity practice in London, and counsels clients in the information technology, pharmaceutical research, sports and financial services industries, among others, on European and UK data protection, data retention and freedom of information laws, as well as associated information technology and e-commerce laws and regulations. Mr. Cooper also regularly counsels clients with respect to Internet-related liabilities under European and US laws. Mr. Cooper sits on the advisory boards of a number of privacy NGOs, privacy think tanks, and related bodies.
  • Posted in:
    Privacy & Data Security
  • Blog:
    Inside Privacy
  • Organization:
    Covington & Burling LLP
