The European Commission, as part of the launch of its digital strategy for the next five years, published on 19 February 2020 a White Paper on Artificial Intelligence – A European approach to excellence and trust (the “White Paper”).  (See our previous blog here for a summary of all four of the main papers published by the Commission.)  The White Paper recognizes the opportunities AI presents to Europe’s digital economy, and sets out the Commission’s vision for a coordinated approach to promoting the uptake of AI in the EU and addressing the risks associated with certain uses of AI.  The White Paper is open for public consultation until 19 May 2020.

Promoting the uptake of AI in the EU: An Ecosystem of Excellence

The Commission notes that, in order to seize the opportunities presented by AI, Europe will need to foster “an ecosystem of excellence” that can support the development and uptake of AI across the EU economy and public administration.  To this end, the Commission plans to take the following actions (amongst others):

  • review and update its 2018 Coordinated Plan on AI;
  • facilitate the creation of AI excellence and testing centres;
  • set up a new public-private partnership in AI, data and robotics;
  • invest in educating and upskilling the workforce to develop AI skills; and
  • promote the adoption of AI by the public sector.

Addressing the risks associated with AI: An Ecosystem of Trust

Despite the transformative potential of AI, the Commission recognizes that certain uses of AI present challenges and risks that the existing EU legislative framework may not be well-suited to address.  Although the White Paper does not propose a concrete framework for new AI legislation, it does set out the Commission’s key priorities in that regard.

A core element of the Commission’s proposals for a potential regulatory framework for AI is the introduction of a mandatory pre-marketing conformity assessment requirement that would apply to “high-risk” AI applications.  The White Paper states that an AI application would be considered high risk only if it meets the following two cumulative criteria:

  1. whether the AI application is deployed in a high-risk sector; the White Paper states that any future legislation should “specifically and exhaustively” list such sectors, and mentions healthcare, transport, energy and parts of the public sector as examples of sectors that are likely to be “high-risk”; and
  2. whether the intended use of the AI application — or the manner in which it is deployed — is likely to raise significant risks for any individual or company, in particular from the viewpoint of safety, consumer rights and fundamental rights.

The Commission adds that certain types or applications of AI could be deemed high risk regardless of the sector in which they are deployed, and specifically calls out the use of AI in the recruitment and broader employment context, and remote biometric identification systems (e.g., surveillance using facial recognition technology), as two examples.  Note that, on remote biometric identification systems, the Commission intends to “launch a broad European debate on the specific circumstances, if any, which might justify such use, and on common safeguards” (p. 22).

The Commission envisages that, under the mandatory pre-marketing conformity assessment, high-risk AI systems could be assessed against certain requirements, including the following:

  • Training data. Requirements to ensure that AI systems are trained on data sets that are sufficiently broad and representative to cover all relevant scenarios and do not lead to dangerous situations or outcomes entailing prohibited discrimination.
  • Keeping records and data. Requirements to keep accurate records regarding the dataset used to train and test AI systems; documentation on the programming and training techniques used to build, test and validate the AI systems; and, in some cases, the training datasets themselves.
  • Information provision. Requirements to provide notice to users when they are interacting with AI systems; requirements to provide clear information about the AI system’s capabilities, limitations, the purposes for which it is intended, the conditions under which it should function, and the expected level of accuracy in achieving the specified purpose.
  • Robustness and accuracy. Requirements to ensure that AI systems are robust and accurate, and can deal with errors or inconsistencies during all phases of their life cycle.
  • Human oversight. Requirements to ensure that there is an appropriate level of human oversight over the AI system, including the ability for affected individuals or organizations to seek human review.  The Commission notes that the type and degree of human oversight is likely to depend on the context in which the AI system is deployed.

The Commission intends that these requirements apply to the developers, deployers, and other economic operators who are “best placed to address risks” (p. 22).  Conformity assessments—which may include testing, inspection, and certification requirements—would be undertaken by approved Member State bodies, or by bodies in third countries subject to applicable mutual recognition agreements.  Such assessments could involve testing (and potential disclosure) of both the AI algorithms and the data used to train them.  The Commission also states that it plans to make any such requirements applicable to “all relevant economic operators providing AI-enabled products or services in the EU, regardless of whether they are established in the EU or not” (p. 22).

In addition to the mandatory pre-marketing conformity assessment for high-risk AI, the Commission proposes a voluntary labelling scheme for non-high-risk AI applications.  This scheme would allow interested suppliers of non-high-risk AI to be awarded “a quality label” for their AI applications that users can easily recognize.  Although the scheme would be entirely voluntary, once a supplier opts to use the label, the associated requirements would become binding.

Finally, in addition to these new proposals, the Commission intends to review the current EU product safety and liability regimes to address risks associated with products and services involving AI.  A more in-depth discussion of these issues can be found in the Commission’s Report on the safety and liability implications of AI, IoT and robotics (the “Report”), accompanying the White Paper.

The Commission notes in both the White Paper and the Report that there is an extensive body of existing EU product safety rules, including sector-specific rules, that already goes some way toward protecting users of certain AI applications.  The Commission nonetheless intends to explore options for reforming the existing product safety rules, including the following (amongst others):

  • requiring new risk assessments if the product is subject to important changes during its lifetime (particularly relevant for self-learning AI);
  • requiring developers to address the safety risks posed by faulty data at the design stage, and to ensure that data quality is maintained throughout the use of AI applications; and
  • requiring transparency of algorithms to ensure product safety (particularly relevant for opaque or “black-box” AI systems).

With regard to product liability rules, the Commission also notes that, while the EU’s Product Liability Directive provides a layer of protection at the EU level with regard to certain products, the fault-based liability regimes in many Member States that apply in other cases (e.g., to stand-alone software and services) may not be sufficient to protect those harmed by AI applications in every scenario — particularly given that many AI applications are built through complex supply chains.  The Commission seeks to ensure that compensation is always available for damage caused by products that are defective because of software or other digital features.  To this end, the Commission is considering pursuing an EU-level initiative to adapt the burden of proof required by national liability rules for damage caused by the operation of AI applications, and is seeking views on whether and to what extent strict liability may need to be introduced.

As stated above, the White Paper and the accompanying Report are open for public consultation until 19 May 2020.  Please contact the Covington team for a more detailed analysis of these proposals or for assistance in providing input into the consultation.  Stay tuned for further updates.

Lisa Peets

Lisa Peets leads the Technology Regulatory and Policy practice in the London office and is a member of the firm’s Management Committee. Lisa divides her time between London and Brussels, and her practice embraces regulatory counsel and legislative advocacy. In this context, she has worked closely with leading multinationals in a number of sectors, including many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU law issues, including data protection and related regimes, copyright, e-commerce and consumer protection, and the rapidly expanding universe of EU rules applicable to existing and emerging technologies. Lisa also routinely advises clients in and outside of the technology sector on trade related matters, including EU trade controls rules.

According to the latest edition of Chambers UK (2022), “Lisa is able to make an incredibly quick legal assessment whereby she perfectly distils the essential matters from the less relevant elements.” “Lisa has subject matter expertise but is also able to think like a generalist and prioritise. She brings a strategic lens to matters.”

Marty Hansen

Martin Hansen has represented some of the world’s leading information technology, telecommunications, and pharmaceutical companies on a broad range of cutting edge international trade, intellectual property, and competition issues. Martin has extensive experience in advising clients on matters arising under the World Trade Organization agreements, treaties administered by the World Intellectual Property Organization, bilateral and regional free trade agreements, and other trade agreements.

Drawing on ten years of experience in Covington’s London and DC offices, Martin focuses on helping innovative companies solve challenges on intellectual property and trade matters before U.S. courts, the U.S. government, and foreign governments and tribunals.  He also represents software companies and a leading IT trade association on electronic commerce, Internet security, and online liability issues.

Sam Jungyun Choi

Sam Jungyun Choi is an associate in the technology regulatory group in the London office. Her practice focuses on European data protection law and new policies and legislation relating to innovative technologies such as artificial intelligence, online platforms, digital health products and autonomous vehicles. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Sam advises leading technology, software and life sciences companies on a wide range of matters relating to data protection and cybersecurity issues. Her work in this area has involved advising global companies on compliance with European data protection legislation, such as the General Data Protection Regulation (GDPR), the UK Data Protection Act, the ePrivacy Directive, and related EU and global legislation. She also advises on a variety of policy developments in Europe, including providing strategic advice on EU and national initiatives relating to artificial intelligence, data sharing, digital health, and online platforms.

Anna Oberschelp de Meneses

Anna Sophia Oberschelp de Meneses is an associate in the Data Privacy and Cybersecurity Practice Group.  Anna is a qualified Portuguese lawyer, but is both a native Portuguese and German speaker.  Anna advises companies on European data protection law and helps clients coordinate international data protection law projects.  She has obtained a certificate as a “corporate data protection officer” from the German Association for Data Protection and Data Security (“Gesellschaft für Datenschutz und Datensicherheit e.V.”).  She is also a Certified Information Privacy Professional/Europe (CIPP/E), certified by the International Association of Privacy Professionals (IAPP).  Anna also advises companies in the field of EU consumer law and has been closely tracking developments in this area.  Her extensive language skills allow her to monitor developments and help clients tackle EU data privacy, cybersecurity and consumer law issues in various EU and ROW jurisdictions.