Now that the EU Artificial Intelligence Act (“AI Act”) has entered into force, the EU institutions are turning their attention to the proposal for a directive on adapting non-contractual civil liability rules to artificial intelligence (the so-called “AI Liability Directive”). Although the EU Parliament and the Council informally agreed on the text of the AI Act in December 2023 (see our previous blog posts here and here), the text of the AI Liability Directive proposal is expected to change based on a complementary impact assessment published by the European Parliamentary Research Service on September 19.
Brief Overview of the AI Liability Directive
The AI Liability Directive was proposed to establish harmonised rules for fault-based claims (e.g., negligence). These rules were to cover the disclosure of evidence on high-risk artificial intelligence (“AI”) systems and the burden of proof, including, in certain circumstances, a rebuttable presumption of a causal link between the fault of the defendant (i.e., the provider or deployer of an AI system) and the output produced by the AI system, or the failure of the AI system to produce an output.
Potential Changes to the AI Liability Directive
In July, a slightly amended version of the European Commission’s AI Liability Directive proposal was leaked to the press (Council document ST 12523 2024 INIT). The amendments align the proposal’s wording with the adopted AI Act, reflecting the differences in numbering between the proposed and enacted versions of the Act.
Over the summer, the European Parliamentary Research Service carried out a complementary impact assessment to evaluate whether the AI Liability Directive should remain on the EU’s list of priorities. In particular, the new assessment was to determine whether the AI Liability Directive is still needed in light of the proposal for a new Product Liability Directive (see our blog post here).
The European institutions are expected to adopt the new Product Liability Directive in the autumn of 2024, and it would apply from autumn 2026 (see our blog post here). While the AI Liability Directive would apply to fault-based claims, the soon-to-be-adopted new Product Liability Directive deals with strict liability, including in relation to AI systems.
The European Parliamentary Research Service has now published the complementary impact assessment. While it concludes that the AI Liability Directive is still needed, it recommends substantial changes to its scope, including the following:
- The AI Liability Directive should become a regulation that is directly applicable in all Member States, rather than a directive that Member States must transpose into their national laws. This would avoid discrepancies between Member States’ AI liability frameworks, which would negatively affect AI developers and consumers. It would also be in line with developments in the areas most closely related to product liability (i.e., product safety and market regulation), which have recently moved from directives to regulations. The complementary impact assessment also suggests that the Product Liability Directive should be revised to become a regulation.
- The material scope of the proposed AI Liability Directive should be extended to non-AI software. This would align with the proposed Product Liability Directive, which also applies to all types of software. The type of harm to be compensated would also be broader under the AI Liability Directive: while the Product Liability Directive would apply strict liability to damage to consumers’ property, health, and life, the AI Liability Directive would also cover damage arising from discrimination, violations of personality rights and other fundamental rights, damage to professional property (e.g., intellectual property rights), pure economic loss, and harm to sustainability (e.g., increased energy and water consumption).
- The AI Liability Directive’s provisions on high-risk AI systems should be extended to “newly identified areas of concern” and to AI systems prohibited under the AI Act. The complementary impact assessment identifies the following new “areas of concern”: (i) general-purpose AI systems; (ii) “OLF systems” (such as autonomous vehicles, transportation-related AI applications more generally, and other AI systems falling under Annex I, Section B, of the AI Act); and (iii) insurance applications beyond health and life insurance.
- The AI Liability Directive should explicitly establish a presumption of a causal link between the output of an AI system and any resulting damage in cases of non-compliance with the human oversight provisions of the AI Act (i.e., Articles 14 and 26(2) and (5)). In other words, the failure of the provider of an AI system to provide for adequate human supervision, and the failure of the deployer of that system to exercise such supervision, should be presumed to have been the cause of the output of the AI system resulting in harm.
- The AI Liability Directive should allow claimants to seek a court order requiring the defendant to disclose evidence and information necessary to bring the claim, simply by demonstrating harm and the involvement of an AI system, and possibly that it is not implausible that the AI system caused the harm. To avoid vexatious litigation and protect trade secrets, this would not apply to claimants who are competitors of the defendant. By contrast, the current version of the AI Liability Directive requires claimants to provide sufficient evidence to support the plausibility of the claim.
- The AI Liability Directive should provide for joint liability along the AI value chain. The complementary impact assessment proposes three options for the “fair sharing of the liability burden” along the AI value chain. Briefly, these are: (i) presuming an equal share of liability for all actors involved in the AI value chain; (ii) including in the AI Liability Directive exemptions from liability in favour of SMEs; and (iii) prohibiting contractual clauses that waive or restrict the right of recourse for downstream actors.
In addition to the above, the complementary impact assessment recommends assessing in more detail whether to include strict liability in future versions of the AI Liability Directive, potentially in the context of an impact assessment for a regulation on AI liability. The complementary impact assessment identifies various pros and cons of doing so.
Next Steps
The European Parliament’s Legal Affairs Committee (JURI), which is responsible for AI liability legislation, is expected to decide in October whether to follow the complementary impact assessment’s suggestion to abandon the current proposal for a directive and instead recommend that the European Commission propose an AI liability regulation. While JURI is not obliged to take the findings of the complementary impact assessment into account, they will help inform its political decision.
Meanwhile, the Council has sent questions to Member State governments on: (i) the measures that should be available to claimants to identify the person potentially liable for the damage; and (ii) the rebuttable presumption of a causal link between the AI system and the damage in certain circumstances. Member States have until October 11 to respond.
* * *
Covington’s Data Privacy and Cybersecurity team and Litigation team regularly advise companies on their most challenging compliance issues in the EU, UK, and other key markets, including on AI, data protection, and consumer law. Our team is happy to assist companies with these and any other related inquiries.