Artificial Intelligence (AI)

On December 23, 2020, the European Commission (the “Commission”) published its inception impact assessment (“Inception Impact Assessment”) of policy options for establishing a European Health Data Space (“EHDS”).  The Inception Impact Assessment is open for consultation until February 3, 2021, encouraging “citizens and stakeholders” to “provide views on the Commission’s understanding of the current situation, problem and possible solutions”.

Continue Reading European Commission Conducts Open Consultation on the European Health Data Space Initiative

On 17 December 2020, the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI) published a Feasibility Study (the “Study”) on Artificial Intelligence (AI) legal standards. The Study examines the feasibility and potential elements of a legal framework for the development and deployment of AI, based on the Council of Europe’s human rights standards. Its main conclusion is that current regulations do not provide the legal certainty, trust, and level playing field needed to guide the development of AI. Accordingly, it proposes a new legal framework for AI consisting of both binding and non-binding Council of Europe instruments.

The Study recognizes the major opportunities of AI systems to promote societal development and human rights. Alongside these opportunities, it also identifies the risk that AI could endanger rights protected by the European Convention on Human Rights (ECHR), as well as democracy and the rule of law. Examples of the risks to human rights cited in the Study include AI systems that undermine the right to equality and non-discrimination by perpetuating biases and stereotypes (e.g., in employment), and AI-driven surveillance and tracking applications that jeopardize individuals’ rights to freedom of assembly and expression.

Continue Reading The Council of Europe Publishes Feasibility Study on Developing a Legal Instrument for Ethical AI

On October 9, 2020, the French Supervisory Authority (“CNIL”) issued guidance on the use of facial recognition technology for identity checks at airports (available here, in French).  The CNIL indicates that it has issued this guidance in response to a request from several operators and service providers of airports in France who are planning to deploy this technology on an experimental basis.  In this blog post, we summarize the main principles that the CNIL says airports should observe when deploying biometric technology.

Continue Reading French Supervisory Authority Releases Strict Guidance on the Use of Facial Recognition Technology at Airports

In this edition of our regular roundup on legislative initiatives related to artificial intelligence (AI), cybersecurity, the Internet of Things (IoT), and connected and autonomous vehicles (CAVs), we focus on key developments in the European Union (EU).

Continue Reading AI, IoT, and CAV Legislative Update: EU Spotlight (Third Quarter 2020)

In a new post on the Covington Inside Tech Media Blog, our colleagues discuss the National Institute of Standards and Technology’s draft of the Four Principles of Explainable Artificial Intelligence (NISTIR 8312), which seeks to define the principles that capture the fundamental properties of explainable AI systems.  NIST is accepting comments on the draft.

On July 30, 2020, the UK Information Commissioner’s Office (“ICO”) published its final guidance on Artificial Intelligence (the “Guidance”).  The Guidance sets out a framework for auditing AI systems for compliance with data protection obligations under the GDPR and the UK Data Protection Act 2018.  The Guidance builds on the ICO’s earlier commitment to enable good data protection practice in AI, and on previous guidance and blogs issued on specific issues relating to AI (for example, on explaining decisions made with AI, trade-offs, and bias and discrimination, all covered in Covington blogs).

Continue Reading UK ICO publishes guidance on Artificial Intelligence

On July 17, 2020, the High-Level Expert Group on Artificial Intelligence set up by the European Commission (“AI HLEG”) published The Assessment List for Trustworthy Artificial Intelligence (“Assessment List”). The purpose of the Assessment List is to help companies identify the risks of AI systems they develop, deploy or procure, and implement appropriate measures to mitigate those risks.

The Assessment List is not mandatory, and there isn’t yet a self-certification scheme or other formal framework built around it that would enable companies to signal their adherence to it.  The AI HLEG notes that the Assessment List should be used flexibly; organizations can add or ignore elements as they see fit, taking into consideration the sector in which they operate. As we’ve discussed in our previous blog post here, the European Commission is currently developing policies and legislative proposals relating to trustworthy AI, and it is possible that the Assessment List may influence the Commission’s thinking on how organizations should operationalize requirements relating to this topic.
Continue Reading AI Update: EU High-Level Working Group Publishes Self Assessment for Trustworthy AI

On June 2, 2020, the French Supervisory Authority (“CNIL”) published a paper on algorithmic discrimination prepared by the French independent administrative authority known as “Défenseur des droits”.  The paper is divided into two parts: the first part discusses how algorithms can lead to discriminatory outcomes, and the second part includes recommendations on how to identify and minimize algorithmic biases.  This paper follows from a 2017 paper published by the CNIL on “Ethical Issues of Algorithms and Artificial Intelligence”.
Continue Reading French CNIL Publishes Paper on Algorithmic Discrimination

Trustworthy AI has garnered attention from policymakers and other stakeholders around the globe.  How can organizations operationalize trustworthy AI for Covid-19 and other AI applications as the legal landscape continues to evolve?  Lee Tiedrich and Lala R. Qadir share ten steps in this article with Law360.