Artificial Intelligence (AI)

On 6 October 2021, the European Parliament (“EP”) voted in favor of a resolution banning the use of facial recognition technology (“FRT”) by law enforcement in public spaces. The resolution forms part of a non-legislative report on the use of artificial intelligence (“AI”) by the police and judicial authorities in criminal matters (“AI Report”) published by the EP’s Committee on Civil Liberties, Justice and Home Affairs (“LIBE”) in July 2021. The AI Report will now be sent to the European Commission, which has three months to either (i) submit, or indicate it will submit, a legislative proposal on the use of AI by the police and judicial authorities as set out in the AI Report; or (ii) if it chooses not to submit a proposal, explain why.

Continue Reading European Parliament Votes in Favor of Banning the Use of Facial Recognition in Law Enforcement

On 22 September 2021, the UK Government published its 10-year strategy on artificial intelligence (“AI”; the “UK AI Strategy”).

The UK AI Strategy has three main pillars: (1) investing and planning for the long-term requirements of the UK’s AI ecosystem; (2) supporting the transition to an AI-enabled economy across all sectors and regions of the UK; and (3) ensuring that the UK gets the national and international governance of AI technologies “right”.

The approach to AI regulation as set out in the UK AI Strategy is largely pro-innovation, in line with the UK Government’s Plan for Digital Regulation published in July 2021.

Continue Reading The UK Government Publishes its AI Strategy

Introduction

In this update, we detail the key legislative developments in the second quarter of 2021 related to artificial intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), and federal privacy legislation.  As we recently covered on May 12, President Biden signed an Executive Order to strengthen the federal government’s ability to respond to and prevent cybersecurity threats, including by removing obstacles to sharing threat information between private sector entities and federal agencies and by modernizing federal systems.  On the Hill, lawmakers have introduced a number of proposals to regulate AI, IoT, CAVs, and privacy.

Continue Reading U.S. AI, IoT, CAV, and Privacy Legislative Update – Second Quarter 2021

In April 2021, the European Commission released its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “Regulation”), which would establish rules on the development, placing on the market, and use of artificial intelligence systems (“AI systems”) across the EU. The proposal, comprising 85 articles and nine annexes, is part of a wider package of Commission initiatives aimed at positioning the EU as a world leader in trustworthy and ethical AI and technological innovation.

The Commission’s objectives with the Regulation are twofold: to promote the development of AI technologies and harness their potential benefits, while also protecting individuals against potential threats to their health, safety, and fundamental rights posed by AI systems. To that end, the Commission proposal focuses primarily on AI systems identified as “high-risk,” but also prohibits three AI practices outright and imposes transparency obligations on providers of certain non-high-risk AI systems. Notably, the Regulation would impose significant administrative costs on high-risk AI systems, estimated at around 10 percent of their underlying value and attributable to compliance, oversight, and verification obligations. This blog highlights several key aspects of the proposal.

Continue Reading European Commission Proposes New Artificial Intelligence Regulation

On January 6, 2021, the UK’s AI Council (an independent government advisory body) published its AI Roadmap (“Roadmap”). In addition to calling for a Public Interest Data Bill to “protect against automation and collective harms”, the Roadmap acknowledges the need to counteract public suspicion of AI and makes 16 recommendations, based on three main pillars, to guide the UK Government’s AI strategy.

Continue Reading AI Update: The Future of AI Policy in the UK

On January 12, 2021, the Spanish Supervisory Authority (“AEPD”) issued guidance on how to audit personal data processing activities that involve Artificial Intelligence (“AI”) (available here, in Spanish).  The AEPD’s guidance is directed at data controllers and processors, as well as AI developers, data protection officers (“DPO”), and auditors.  The guidance aims to help ensure that products and services which incorporate AI comply with the requirements of the European Union’s (“EU”) General Data Protection Regulation (“GDPR”).

Continue Reading Spanish Supervisory Authority Issues Guidance on Auditing Data Processing Activities Involving Artificial Intelligence

On December 23, 2020, the European Commission (the “Commission”) published its inception impact assessment (“Inception Impact Assessment”) of policy options for establishing a European Health Data Space (“EHDS”).  The Inception Impact Assessment is open for consultation until February 3, 2021, encouraging “citizens and stakeholders” to “provide views on the Commission’s understanding of the current situation, problem and possible solutions”.

Continue Reading European Commission Conducts Open Consultation on the European Health Data Space Initiative

On 17 December 2020, the Council of Europe’s Ad hoc Committee on Artificial Intelligence (“CAHAI”) published a Feasibility Study (the “Study”) on Artificial Intelligence (“AI”) legal standards. The Study examines the feasibility and potential elements of a legal framework for the development and deployment of AI, based on the Council of Europe’s human rights standards. Its main conclusion is that current regulations do not suffice to create the legal certainty, trust, and level playing field needed to guide the development of AI. Accordingly, it proposes a new legal framework for AI consisting of both binding and non-binding Council of Europe instruments.

The Study recognizes the major opportunities that AI systems offer to promote societal development and human rights. Alongside these opportunities, it also identifies the risk that AI could endanger rights protected by the European Convention on Human Rights (“ECHR”), as well as democracy and the rule of law. Examples of risks to human rights cited in the Study include AI systems that undermine the right to equality and non-discrimination by perpetuating biases and stereotypes (e.g., in employment), and AI-driven surveillance and tracking applications that jeopardize individuals’ rights to freedom of assembly and expression.

Continue Reading The Council of Europe Publishes Feasibility Study on Developing a Legal Instrument for Ethical AI

On October 9, 2020, the French Supervisory Authority (“CNIL”) issued guidance on the use of facial recognition technology for identity checks at airports (available here, in French).  The CNIL indicates that it has issued this guidance in response to a request from several operators and service providers of airports in France who are planning to deploy this technology on an experimental basis.  In this blog post, we summarize the main principles that the CNIL says airports should observe when deploying biometric technology.

Continue Reading French Supervisory Authority Releases Strict Guidance on the Use of Facial Recognition Technology at Airports

In this edition of our regular roundup on legislative initiatives related to artificial intelligence (AI), cybersecurity, the Internet of Things (IoT), and connected and autonomous vehicles (CAVs), we focus on key developments in the European Union (EU).

Continue Reading AI, IoT, and CAV Legislative Update: EU Spotlight (Third Quarter 2020)