Now that the EU Artificial Intelligence Act (“AI Act”) has entered into force, the EU institutions are turning their attention to the proposal for a directive on adapting non-contractual civil liability rules to artificial intelligence (the so-called “AI Liability Directive”).  Although the EU Parliament and the Council informally agreed on the text of the proposal in December 2023 (see our previous blog posts here and here), the text of the proposal is expected to change based on a complementary impact assessment published by the European Parliamentary Research Service on September 19.

Brief Overview of the AI Liability Directive

The AI Liability Directive was proposed to establish harmonised rules in fault-based claims (e.g., negligence).  These were to cover the disclosure of evidence on high-risk artificial intelligence (“AI”) systems and the burden of proof including, in certain circumstances, a rebuttable presumption of causation between the fault of the defendant (i.e., the provider or deployer of an AI system) and the output produced by the AI system or the failure of the AI system to produce an output.

Potential Changes to the AI Liability Directive

In July, a slightly amended version of the European Commission’s AI Liability Directive proposal, aligning its wording with the adopted AI Act, was leaked to the press (Council document ST 12523 2024 INIT).  The amendments reflect the difference in numbering between the proposed AI Act and the enacted version.

Over the summer, the European Parliamentary Research Service carried out a complementary impact assessment to evaluate whether the AI Liability Directive should remain on the EU’s list of priorities.  In particular, the new assessment was to determine whether the AI Liability Directive is still needed in light of the proposal for a new Product Liability Directive (see our blog post here).

The European institutions are expected to adopt a new Product Liability Directive in the autumn of 2024, to apply from autumn 2026 (see our blog post here).  While the AI Liability Directive would apply to fault-based claims, the soon-to-be-adopted Product Liability Directive deals with strict liability, including in relation to AI systems.

The Parliamentary Research Service has now published the complementary impact assessment.  While it concludes that the AI Liability Directive is still needed, it recommends substantial changes to its scope.  Among others, it recommends the following:

  • The AI Liability Directive should become a regulation that is directly applicable in all Member States, instead of a directive that Member States have to transpose into their national laws.  The reason for this would be to avoid discrepancies between Member States’ AI liability frameworks, which would negatively affect AI developers and consumers.  This would also be in line with what is happening in the areas most closely related to product liability (i.e., product safety and market regulation), which have recently moved to the use of regulations instead of directives.  The complementary impact assessment also suggested that the Product Liability Directive should be revised to become a regulation.
  • The material scope of the proposed AI Liability Directive should be extended to non-AI software.  This would align with the proposed Product Liability Directive, which also applies to all types of software.  The type of harm to be compensated would also be broader under the AI Liability Directive.  While the Product Liability Directive would apply strict liability to damage to consumers’ property, health, and life, the AI Liability Directive would also apply to damage arising from discrimination, personality rights, other fundamental rights, professional property (e.g., intellectual property rights), pure economic loss, and sustainability (e.g., increasing energy and water consumption).
  • The AI Liability Directive’s provisions on high-risk AI systems should be extended to “newly identified areas of concern” and AI systems prohibited under the AI Act.  The complementary impact assessment identifies the following new “areas of concern”: (i) general-purpose AI systems; (ii) “OLF systems” (such as autonomous vehicles, transportation-related AI applications more generally, and other AI systems falling under Annex I, Section B, of the AI Act); and (iii) insurance applications beyond health and life insurance.
  • The AI Liability Directive should explicitly establish a causal link between the output of an AI system and any resulting damages in cases of non-compliance with the human oversight provisions of the AI Act (i.e., Articles 14 and 26(2) and (5)).  The failure of the provider of an AI system to provide for adequate human supervision, and the failure of the deployer of that system to exercise such supervision, should be presumed to have been the cause of the output of the AI system resulting in harm.
  • The AI Liability Directive should allow claimants to seek a court order requiring the defendant to disclose evidence and information necessary for the claimant to bring the claim, simply by demonstrating harm and the involvement of an AI system, and possibly by demonstrating that it is not implausible that the AI caused the harm.  This would not apply to claimants who are competitors of the defendant, in order to avoid vexatious litigation and to protect trade secrets.  The current version of the AI Liability Directive requires claimants to provide sufficient evidence to support the plausibility of the claim.
  • The AI Liability Directive should provide for joint liability along the AI value chain.  The complementary impact assessment proposes three options for the “fair sharing of the liability burden” along the AI value chain.  Briefly, these are: (i) presuming an equal share of liability for all actors involved in the AI value chain; (ii) including in the AI Liability Directive exemptions from liability in favour of SMEs; and (iii) prohibiting contractual clauses that waive or restrict the right of recourse for downstream actors.

In addition to the above, the complementary impact assessment recommends assessing in more detail whether to include strict liability in future versions of the AI Liability Directive, potentially in the context of an impact assessment for a regulation on AI liability.  The complementary impact assessment identifies various pros and cons of doing so.

Next Steps

The European Parliament’s Legal Affairs Committee (JURI), which is responsible for the adoption of AI liability legislation, is expected to decide in October whether to follow the complementary impact assessment’s suggestion to abandon the current proposal for a directive and recommend that the European Commission propose an AI liability regulation.  While JURI is not obliged to take into account the findings of the complementary impact assessment, those findings will help to inform the political decision.

Meanwhile, the Council has sent questions to Member State governments on: (i) the measures that should be available to claimants to identify the person potentially liable for the damages; and (ii) the rebuttable presumption of a causal link between the AI system and the damage in certain circumstances.  Member States have until October 11 to respond.

*           *           *

Covington’s Data Privacy and Cybersecurity team and Litigation team regularly advise companies on their most challenging compliance issues in the EU, UK, and other key markets, including on AI, data protection, and consumer law.  Our team is happy to assist companies with these and any other related inquiries.

Kristof Van Quathem

Kristof Van Quathem advises clients on information technology matters and policy, with a focus on data protection, cybercrime and various EU data-related initiatives, such as the Data Act, the AI Act and EHDS.

Kristof has been specializing in this area for over twenty years and has developed particular experience in the life science and information technology sectors. He counsels clients on government affairs strategies concerning EU lawmaking and their compliance with applicable regulatory frameworks, and has represented clients in non-contentious and contentious matters before data protection authorities, national courts and the Court of Justice of the EU.

Kristof is admitted to practice in Belgium.

Louise Freeman

Louise Freeman represents parties in complex commercial disputes and class actions, and co-chairs the firm’s Commercial Litigation and EMEA Dispute Resolution Practice Groups.

Described by Legal 500 as “one of London’s most effective partners,” Louise helps clients to navigate challenging situations in a range of industries, including technology, life sciences and financial markets. Most of her cases involve multiple parties and jurisdictions, where her strategic, dynamic advice is invaluable. Chambers notes that “Louise is tactically and strategically brilliant and has phenomenal management skills on complex litigation,” and that she is “a class act.”

Louise also represents parties in significant competition law claims, including a number of the leading cases in England.

Louise is a “recognised name for complex class actions” (Legal 500), defending clients targeted in proposed opt-out and opt-in claims, as well as advising clients on multi-jurisdictional class action risks.

Dan Cooper

Daniel Cooper is co-chair of Covington’s Data Privacy and Cyber Security Practice, and advises clients on information technology regulatory and policy issues, particularly data protection, consumer protection, AI, and data security matters. He has over 20 years of experience in the field, representing clients in regulatory proceedings before privacy authorities in Europe and counseling them on their global compliance and government affairs strategies. Dan regularly lectures on the topic, and was instrumental in drafting the privacy standards applied in professional sport.

According to Chambers UK, his “level of expertise is second to none, but it’s also equally paired with a keen understanding of our business and direction.” It was noted that “he is very good at calibrating and helping to gauge risk.”

Dan is qualified to practice law in the United States, the United Kingdom, Ireland and Belgium. He has also been appointed to the advisory and expert boards of privacy NGOs and agencies, such as the IAPP’s European Advisory Board, Privacy International and the European security agency, ENISA.

Anna Oberschelp de Meneses

Anna Sophia Oberschelp de Meneses is an associate in the Data Privacy and Cybersecurity Practice Group.

Anna is a qualified Portuguese lawyer and a native speaker of both Portuguese and German.

Anna advises companies on European data protection law and helps clients coordinate international data protection law projects.

She has obtained a certificate as a “corporate data protection officer” from the German Association for Data Protection and Data Security (“Gesellschaft für Datenschutz und Datensicherheit e.V.”). She is also a Certified Information Privacy Professional Europe (CIPP/E) with the International Association of Privacy Professionals (IAPP).

Anna also advises companies in the field of EU consumer law and has been closely tracking the developments in this area.

Her extensive language skills allow her to monitor developments and help clients tackle EU data privacy, cybersecurity and consumer law issues in various EU and rest-of-world jurisdictions.