On July 24, 2025, the European Parliament (EP) published a study entitled Artificial Intelligence and Civil Liability – A European Perspective. The study considers some of the EU’s existing and proposed liability frameworks, notably the revised Product Liability Directive (PLDr) and the AI Liability Directive (AILD), which was proposed by the European Commission only to be later withdrawn. The study concludes that neither instrument sufficiently addresses the full scope of product liability risks and defects uniquely posed by high-risk AI systems, as that concept is defined by the EU AI Act. It therefore calls for the creation of a dedicated strict liability framework, specifically designed to tackle the particular liability risks that these systems are said to give rise to. While it is too early to predict whether other key European stakeholders will support such a framework and bring it to fruition, this is an important development to monitor closely for those creating or working with high-risk AI systems.

What does the study propose?

The EP’s study proposes a strict liability framework—potentially in the form of an EU regulation—to address “physical or virtual harm” caused by a high-risk AI system, including damage to the AI system itself. Liability would fall on providers and/or deployers of such systems, depending on their degree of involvement. These parties would be unable to evade liability by arguing that they acted with due diligence or that the relevant physical or virtual harm was caused by an autonomous activity, device or process driven by their AI system, except in cases of force majeure or, potentially, where the harm was due to the “reckless behaviour” of the plaintiff.

The study argues that providers and/or deployers of high-risk AI systems, as professional parties serving many users, would be well positioned to investigate incidents, address recurring issues through contractual or market mechanisms, and consolidate claims—such as suing manufacturers under existing product liability rules—thereby reducing litigation and transaction costs. While specific disclosure requirements are said to be unnecessary under a strict liability regime, the study suggests that limited cooperation obligations between litigants could help to streamline the legal process. These cooperation obligations would focus on the exchange of relevant information and evidence between the primary parties involved in the dispute, namely the defendant (provider or deployer) and the claimant (injured party).

How would this strict liability framework differ from the PLDr and the withdrawn AILD?

The EP’s proposed strict liability framework would differ from the PLDr primarily in its broader scope of covered damages and in being tailored to high-risk AI systems only. Under the PLDr, producers are strictly liable for certain harm caused by a defective product, which may include an AI system, regardless of fault, as further discussed in this earlier blog post. However, the PLDr excludes damage to the AI system itself and generally retains the development-risk defence, which allows producers to avoid liability if the defect was undiscoverable based on scientific knowledge at the time the product was marketed. By contrast, the EP proposal explicitly includes liability for “any harm or damage that was caused by a physical or virtual activity, device, or process driven by the AI system”, including damage to the AI system itself. Compensable damages under the proposal would be limited neither by predefined categories nor by monetary amounts, and the proposal does not contemplate retaining the development-risk defence. Additionally, while the PLDr relies on procedural tools such as rebuttable presumptions of defect and court-ordered disclosure of evidence, the EP proposal envisages more limited information exchange obligations.

The EP’s proposed strict liability framework for high-risk AI systems also differs from the withdrawn AILD, as the latter sought to harmonize procedural aspects of national tort law to support fault-based AI liability claims. The AILD focused, in large part, on easing the burden of proof for claimants by introducing rebuttable presumptions of fault and causality and promoting inter-party disclosure mechanisms, but retained the core fault-based liability principle within existing national legal systems. In contrast, the EP’s strict liability proposal eliminates the need for a plaintiff to prove fault, instead predicating a defendant’s liability solely on the occurrence of harm caused by their high-risk AI system.

*      *      *

Covington will continue to closely monitor legal and policy developments relating to AI liability in the EU, including the implementation of the revised Product Liability Directive and the evolving debate on a dedicated strict liability regime for high-risk AI systems. If you have any questions about the issues discussed in this article, please do not hesitate to contact members of our Commercial Litigation, Public Policy, or Privacy and Cybersecurity teams.

Anna Sophia Oberschelp de Meneses

Anna Sophia Oberschelp de Meneses is special counsel in the Data Privacy and Cybersecurity Practice Group.

Anna is a qualified Portuguese lawyer, but is both a native Portuguese and German speaker.

Anna advises companies on European data protection law and helps clients coordinate international data protection law projects.

She has obtained a certificate for “corporate data protection officer” by the German Association for Data Protection and Data Security (“Gesellschaft für Datenschutz und Datensicherheit e.V.”). She is also Certified Information Privacy Professional Europe (CIPPE/EU) by the International Association of Privacy Professionals (IAPP).

Anna also advises companies in the field of EU consumer law and has been closely tracking the developments in this area.

Her extensive language skills allow her to monitor developments and help clients tackle EU Data Privacy, Cybersecurity and Consumer Law issues in various EU and ROW jurisdictions.

Louise Freeman

Louise Freeman represents parties in complex commercial disputes and class actions, and co-chairs the firm’s Commercial Litigation and EMEA Dispute Resolution Practice Groups.

Described by Legal 500 as “one of London’s most effective partners,” Louise helps clients to navigate challenging situations in a range of industries, including technology, life sciences and financial markets. Most of her cases involve multiple parties and jurisdictions, where her strategic, dynamic advice is invaluable. Chambers notes that “Louise is tactically and strategically brilliant and has phenomenal management skills on complex litigation,” and that she is “a class act.”

Louise also represents parties in significant competition law claims, including a number of the leading cases in England.

Louise is a “recognised name for complex class actions” (Legal 500), defending clients targeted in proposed opt-out and opt-in claims, as well as advising clients on multi-jurisdictional class action risks.

Dan Cooper

Daniel Cooper is co-chair of Covington’s Data Privacy and Cyber Security Practice, and advises clients on information technology regulatory and policy issues, particularly data protection, consumer protection, AI, and data security matters. He has over 20 years of experience in the field, representing clients in regulatory proceedings before privacy authorities in Europe and counseling them on their global compliance and government affairs strategies. Dan regularly lectures on the topic, and was instrumental in drafting the privacy standards applied in professional sport.

According to Chambers UK, his “level of expertise is second to none, but it’s also equally paired with a keen understanding of our business and direction.” It was noted that “he is very good at calibrating and helping to gauge risk.”

Dan is qualified to practice law in the United States, the United Kingdom, Ireland and Belgium. He has also been appointed to the advisory and expert boards of privacy NGOs and agencies, such as the IAPP’s European Advisory Board, Privacy International and the European security agency, ENISA.

Ryoko Matsumoto

Ryoko Matsumoto is a global visiting lawyer who attended Kyoto University, Kyoto University Law School, and Stanford Law School.