The UK Government recently published its AI Governance and Regulation: Policy Statement (the “AI Statement”), setting out its proposed approach to regulating Artificial Intelligence (“AI”) in the UK. The AI Statement was published alongside the draft Data Protection and Digital Information Bill (see our blog post here for further details on the Bill).
In the Queen’s Speech on 10 May 2022, the UK Government set out its legislative programme for the months ahead. This includes: reforms to UK data protection laws (no details yet); confirmation that the government will strengthen cybersecurity obligations for connected products and make it easier for telecoms providers to improve the UK’s digital infrastructure; and new rules to enable the use of self-driving cars on public roads. In addition, the government confirmed its plans to move forward with the Online Safety Bill. As part of the government’s broader agenda to “level up” the UK and provide a post-Brexit economic dividend, many of the legislative initiatives referenced in the Queen’s Speech are presented as seeking to encourage greater use of data and technology to support innovation and enable growth.
We summarize below the key digital policy announcements in the Queen’s Speech and how they fit into wider developments in the UK’s regulatory landscape.…
On 6 October 2021, the European Parliament (“EP”) voted in favor of a resolution banning the use of facial recognition technology (“FRT”) by law enforcement in public spaces. The resolution forms part of a non-legislative report on the use of artificial intelligence (“AI”) by the police and judicial authorities in criminal matters (“AI Report”) published by the EP’s Committee on Civil Liberties, Justice and Home Affairs (“LIBE”) in July 2021. The AI Report will now be sent to the European Commission, which has three months to either (i) submit, or indicate it will submit, a legislative proposal on the use of AI by the police and judicial authorities as set out in the AI Report; or (ii) if it chooses not to submit a proposal, explain why.
On 22 September 2021, the UK Government published its 10-year strategy on artificial intelligence (“AI”; the “UK AI Strategy”).
The UK AI Strategy has three main pillars: (1) investing and planning for the long-term requirements of the UK’s AI ecosystem; (2) supporting the transition to an AI-enabled economy across all sectors and regions of the UK; and (3) ensuring that the UK gets the national and international governance of AI technologies “right”.
The approach to AI regulation as set out in the UK AI Strategy is largely pro-innovation, in line with the UK Government’s Plan for Digital Regulation published in July 2021.…
On January 13, 2021, Advocate General (“AG”) Michal Bobek of the Court of Justice of the European Union (“CJEU”) issued his Opinion in Case C-645/19 Facebook Ireland Limited, Facebook Inc., Facebook Belgium BVBA v. the Belgian Data Protection Authority (“Belgian DPA”). The AG determined that the one-stop-shop mechanism under the EU’s General Data Protection Regulation (“GDPR”) prevents supervisory authorities that are not a controller’s or processor’s lead supervisory authority (“LSA”) from bringing proceedings before their national courts, except in the limited and exceptional cases specifically provided for by the GDPR. The case will now move to the CJEU for a final judgment.
In addition to releasing the new EU Cybersecurity Strategy before the holidays (see our post here), the Commission published a revised Directive on measures for a high common level of cybersecurity across the Union (“NIS2”) and a Directive on the resilience of critical entities (“Critical Entities Resilience Directive”). In this blog post, we summarize key points relating to NIS2, including more onerous security and incident reporting requirements; the extension of requirements to companies in the food, pharmaceutical, medical device, and chemical sectors, among others; and increased powers for regulators, including the ability to impose multi-million euro fines.
The Commission is seeking feedback on NIS2 and the Critical Entities Resilience Directive, and recently extended its original deadline of early February to March 11, 2021 (responses can be submitted here and here).…
On 17 December 2020, the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI) published a Feasibility Study (the “Study”) on Artificial Intelligence (AI) legal standards. The Study examines the feasibility and potential elements of a legal framework for the development and deployment of AI, based on the Council of Europe’s human rights standards. Its main conclusion is that current regulations do not suffice to create the legal certainty, trust, and level playing field needed to guide the development of AI. Accordingly, it proposes the development of a new legal framework for AI consisting of both binding and non-binding Council of Europe instruments.
The Study recognizes the major opportunities AI systems offer to promote societal development and human rights. Alongside these opportunities, it also identifies the risk that AI could endanger rights protected by the European Convention on Human Rights (ECHR), as well as democracy and the rule of law. Examples of the risks to human rights cited in the Study include AI systems that undermine the right to equality and non-discrimination by perpetuating biases and stereotypes (e.g., in employment), and AI-driven surveillance and tracking applications that jeopardize individuals’ right to freedom of assembly and expression.…
On 25 November 2020, the European Commission published a proposal for a Regulation on European Data Governance (“Data Governance Act”). The proposed Act aims to facilitate data sharing across the EU and between sectors, and is one of the deliverables included in the European Strategy for Data, adopted in February 2020. (See our previous blog here for a summary of the Commission’s European Strategy for Data.) The press release accompanying the proposed Act states that more specific proposals on European data spaces are expected to follow in 2021, and will be complemented by a Data Act to foster business-to-business and business-to-government data sharing.
The proposed Data Governance Act sets out rules relating to the following:
- Conditions for reuse of public sector data that is subject to existing protections, such as commercial confidentiality, intellectual property, or data protection;
- Obligations on “providers of data sharing services,” defined as entities that provide various types of data intermediary services;
- Introduction of the concept of “data altruism” and the possibility for organizations to register as a “Data Altruism Organisation recognised in the Union”; and
- Establishment of a “European Data Innovation Board,” a new formal expert group chaired by the Commission.
In this third quarter 2020 edition of our regular roundup on legislative initiatives related to artificial intelligence (AI), cybersecurity, the Internet of Things (IoT), and connected and autonomous vehicles (CAVs), we focus on key developments in the European Union (EU).