On November 21, 2019, the European Commission’s Expert Group on Liability and New Technologies – New Technologies Formation (“NTF”) published its Report on Liability for Artificial Intelligence and other emerging technologies. The Commission tasked the NTF with establishing the extent to which liability frameworks in the EU will continue to operate effectively in relation to emerging digital technologies (including artificial intelligence, the internet of things, and distributed ledger technologies). This report presents the NTF’s findings and recommendations.
Generally, liability regimes across EU Member States are not harmonized, although certain sectors (e.g., product liability) have achieved a degree of harmonization. Nonetheless, the NTF found that existing regimes do provide basic protection for people who suffer harm caused by emerging digital technologies, and in many cases these frameworks (e.g., product liability for defective IoT devices) function effectively, or at least provide a starting point for assessing liability.
The NTF also found, however, that certain characteristics of new technologies may make it more difficult for victims to bring successful claims in particular circumstances, and that in others the allocation of liability may be unfair or inefficient. For example, where a victim is required to prove that the damage suffered was caused by conduct attributable to an AI tool, doing so may require analysis of complex underlying code, which is beyond the capability of many victims.
Consequently, the report provides high-level recommendations on how existing liability concepts might be applied, with adjustments, to emerging digital technologies, and does not recommend a complete overhaul of liability rules in the EU.
More specifically, key findings from the NTF include:
- It is not necessary to give autonomous systems a separate legal personality for liability purposes. In particular, this would require attributing funds to such a system, effectively capping liability at a level that may be inadequate to meet all compensation claims.
- In certain circumstances, it may be appropriate to impose strict liability (i.e., liability without any finding of fault) for damage caused by emerging technologies. However, this should be limited to situations where the technology creates risks comparable to those already subject to strict liability under existing frameworks. This is likely to arise primarily where digital technologies operate in public spaces (e.g., drones, autonomous cars).
- Vicarious liability should be imposed on operators of technologies in line with existing vicarious liability regimes. For example, a company might be vicariously liable where an autonomous tool causes harm that, if caused by a human employee or auxiliary, would normally give rise to vicarious liability. The NTF also states that once autonomous technologies outperform humans, a higher standard should be expected of those technologies, and falling below that higher standard would give rise to vicarious liability for the operator.
- The burden of proof for both causation and damage should, as a general rule, be on the victim (as is the case under current liability frameworks). In certain circumstances, however, it may be appropriate to reverse or lower this burden, particularly where such a burden would be disproportionately high. In particular, the burden of proof should be reversed if damage was caused by a breach of some other set of rules (e.g., rules on information security).
- Where multiple people or companies cooperate to provide different elements of some technology (e.g., if they coordinate marketing or have a degree of technical interdependence), they should be jointly and severally liable for harm caused by the resulting system.
- New duties of care may need to be developed: in particular, duties on operators of technologies to choose appropriate systems to achieve their goals and to monitor the use of those systems, and duties on producers of those technologies to design products that enable operators to comply with their own duties, and to monitor products after they enter circulation.
- Insurance may, in the future, need to be mandatory for certain technologies if the potential harm is particularly frequent or severe, and if the operators in question are unlikely to be able to provide redress to individuals (e.g., startups with limited capital).
At this stage, the report does not specify how these recommendations should be implemented in EU or national laws. However, the Expert Group also includes a separate Product Liability Directive Formation (“PLDF”), tasked with providing expertise on the applicability of Directive 85/374/EEC (the Product Liability Directive) to these new technologies. Going forward, therefore, further reports from the NTF, or publications from the PLDF, are likely to provide more concrete recommendations as to how, if at all, laws at an EU level should be changed to ensure that liability mechanisms are fit for purpose in the context of AI and other developing technologies.