Artificial Intelligence (AI)

On 6 October 2021, the European Parliament (“EP”) voted in favor of a resolution calling for a ban on the use of facial recognition technology (“FRT”) by law enforcement in public spaces. The resolution forms part of a non-legislative report on the use of artificial intelligence (“AI”) by the police and judicial authorities in criminal matters (“AI Report”) published by the EP’s Committee on Civil Liberties, Justice and Home Affairs (“LIBE”) in July 2021. The AI Report will now be sent to the European Commission, which has three months to either (i) submit, or indicate that it will submit, a legislative proposal on the use of AI by the police and judicial authorities, as set out in the AI Report; or (ii) if it chooses not to submit a proposal, explain why.

Continue Reading European Parliament Votes in Favor of Banning the Use of Facial Recognition in Law Enforcement

There have been many headlines today about the UK Government’s plans to reform UK data protection law. We are still reviewing the (nearly 150-page) consultation document, but we set out below a dozen proposals that we thought might pique the interest of readers of our blog.
Continue Reading 12 Eye-Catching Proposals In The UK Government’s Plan To Reform UK Data Protection Law

Introduction

In this update, we detail the key legislative developments in the second quarter of 2021 related to artificial intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), and federal privacy legislation.  As we recently covered, on May 12 President Biden signed an Executive Order to strengthen the federal government’s ability to respond to and prevent cybersecurity threats, including by removing obstacles to sharing threat information between private sector entities and federal agencies and by modernizing federal systems.  On the Hill, lawmakers have introduced a number of proposals to regulate AI, IoT, CAVs, and privacy.
Continue Reading U.S. AI, IoT, CAV, and Privacy Legislative Update – Second Quarter 2021

In April 2021, the European Commission released its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “Regulation”), which would establish rules on the development, placing on the market, and use of artificial intelligence systems (“AI systems”) across the EU. The proposal, comprising 85 articles and nine annexes, is part of a wider package of Commission initiatives aimed at positioning the EU as a world leader in trustworthy and ethical AI and technological innovation.

The Commission’s objectives with the Regulation are twofold: to promote the development of AI technologies and harness their potential benefits, while also protecting individuals against potential threats to their health, safety, and fundamental rights posed by AI systems. To that end, the Commission proposal focuses primarily on AI systems identified as “high-risk,” but it also prohibits three AI practices outright and imposes transparency obligations on providers of certain non-high-risk AI systems. Notably, the proposal would entail significant administrative costs for high-risk AI systems, estimated at around 10 percent of their underlying value, to cover compliance, oversight, and verification. This blog highlights several key aspects of the proposal.

Continue Reading European Commission Proposes New Artificial Intelligence Regulation

On January 6, 2021, the UK’s AI Council (an independent government advisory body) published its AI Roadmap (“Roadmap”). In addition to calling for a Public Interest Data Bill to ‘protect against automation and collective harms’, the Roadmap acknowledges the need to counteract public suspicion of AI and makes 16 recommendations, based on three main pillars, to guide the UK Government’s AI strategy.

Continue Reading AI Update: The Future of AI Policy in the UK

On January 12, 2021, the Spanish Supervisory Authority (“AEPD”) issued guidance on how to audit personal data processing activities that involve Artificial Intelligence (“AI”) (available here, in Spanish).  The AEPD’s guidance is directed at data controllers and processors, as well as AI developers, data protection officers (“DPOs”), and auditors.  The guidance aims to help ensure that products and services that incorporate AI comply with the requirements of the European Union’s (“EU”) General Data Protection Regulation (“GDPR”).

Continue Reading Spanish Supervisory Authority Issues Guidance on Auditing Data Processing Activities Involving Artificial Intelligence

On December 23, 2020, the European Commission (the “Commission”) published its inception impact assessment (“Inception Impact Assessment”) of policy options for establishing a European Health Data Space (“EHDS”).  The Inception Impact Assessment is open for consultation until February 3, 2021, encouraging “citizens and stakeholders” to “provide views on the Commission’s understanding of the current situation, problem and possible solutions”.

Continue Reading European Commission Conducts Open Consultation on the European Health Data Space Initiative

On 17 December 2020, the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI) published a Feasibility Study (the “Study”) on Artificial Intelligence (AI) legal standards. The Study examines the feasibility and potential elements of a legal framework for the development and deployment of AI, based on the Council of Europe’s human rights standards. Its main conclusion is that current regulations do not suffice to create the legal certainty, trust, and level playing field needed to guide the development of AI. Accordingly, it proposes a new legal framework for AI consisting of both binding and non-binding Council of Europe instruments.

The Study recognizes the major opportunities of AI systems to promote societal development and human rights. Alongside these opportunities, it also identifies the risks that AI could endanger rights protected by the European Convention on Human Rights (ECHR), as well as democracy and the rule of law. Examples of the risks to human rights cited in the Study include AI systems that undermine the right to equality and non-discrimination by perpetuating biases and stereotypes (e.g., in employment), and AI-driven surveillance and tracking applications that jeopardise individuals’ right to freedom of assembly and expression.

Continue Reading The Council of Europe Publishes Feasibility Study on Developing a Legal Instrument for Ethical AI

In a new post on the Covington Inside Tech Media Blog, our colleagues discuss the National Institute of Standards and Technology’s draft of the Four Principles of Explainable Artificial Intelligence (NISTIR 8312), which seeks to define the principles that capture the fundamental properties of explainable AI systems.  Comments on the draft will be accepted