On 31 May 2023, at the close of the fourth meeting of the US-EU Trade & Tech Council (“TTC”), Margrethe Vestager – the European Commission’s Executive Vice-President responsible for competition and digital strategy – announced that the EU and US are working together to develop a voluntary AI Code of Conduct in advance of formal regulation taking effect. The goal, according to Vestager, is to develop non-binding international standards on risk audits, transparency and other requirements for companies developing AI systems. The AI Code of Conduct, once finalized, would be put before G7 leaders as a joint transatlantic proposal, and companies would be encouraged to sign up voluntarily.
Lisa Peets leads the Technology Regulatory and Policy practice in the London office and is a member of the firm's Management Committee. Lisa divides her time between London and Brussels, and her practice embraces regulatory counsel and legislative advocacy. In this context, she has worked closely with leading multinationals in a number of sectors, including many of the world’s best-known technology companies.
Lisa counsels clients on a range of EU law issues, including data protection and related regimes, copyright, e-commerce and consumer protection, and the rapidly expanding universe of EU rules applicable to existing and emerging technologies. Lisa also routinely advises clients in and outside of the technology sector on trade related matters, including EU trade controls rules.
According to the latest edition of Chambers UK (2022), "Lisa is able to make an incredibly quick legal assessment whereby she perfectly distils the essential matters from the less relevant elements." "Lisa has subject matter expertise but is also able to think like a generalist and prioritise. She brings a strategic lens to matters."
On 11 May 2023, members of the European Parliament’s internal market (IMCO) and civil liberties (LIBE) committees agreed their final text on the EU’s proposed AI Act. After MEPs formalize their position through a plenary vote (expected this summer), the AI Act will enter the last stage of the legislative process: “trilogue” negotiations among the European Commission, the Parliament, and the Council, which adopted its own amendments in late 2022 (see our blog post here for further details). European lawmakers hope to adopt the final AI Act before the end of 2023, ahead of the European Parliament elections in 2024.
In perhaps the most significant change from the Commission and Council drafts, under MEPs’ proposals, providers of foundation models – a term defined as an AI model that is “trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks” (Article 3(1c)) – would be subject to a series of obligations. For example, providers would be under a duty to “demonstrate through appropriate design, testing and analysis that the identification, the reduction and mitigation of reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law prior and throughout development” (Article 28b(2)(a)), as well as to draw up “extensive technical documentation and intelligible instructions for use” to help those that build AI systems using the foundation model (Article 28b(2)(e)).
On 4 May 2023, the UK Competition and Markets Authority (“CMA”) announced it is launching a review into AI foundation models and their potential implications for the UK competition and consumer protection regime. The CMA’s review is part of the UK’s wider approach to AI regulation, which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s strategy, including its recent AI White Paper, see our blog post here). The UK Information Commissioner’s Office (“ICO”) has also recently published guidance for businesses on best practices for data protection-compliant AI (see our post here for more details).
On 29 March 2023, the UK Information Commissioner’s Office (“ICO”) published updated Guidance on AI and data protection (the “Guidance”) following “requests from UK industry to clarify requirements for fairness in AI”. AI has been a strategic priority for the ICO for several years. In 2020, the ICO published its first set of guidance on AI (as discussed in our blog post here) which it complemented with supplementary recommendations on Explaining Decisions Made with AI and an AI and Data Protection risk toolkit in 2022. The updated Guidance forms part of the UK’s wider efforts to adopt a “pro-innovation” approach to AI regulation which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s approach to AI regulation, see our blog post here).
The updated Guidance covers the ICO’s view of best practice for data protection-compliant AI, as well as how the ICO interprets data protection law in the context of AI systems that process personal data. The Guidance has been restructured in line with the UK GDPR’s data protection principles, and features new content, including guidance on fairness, transparency, lawfulness and accountability when using AI systems.
On 29 March 2023, the UK Government published a White Paper entitled “A pro-innovation approach to AI regulation” (“White Paper”). The White Paper elaborates on the approach to AI set out by the Government in its 2022 AI Governance and Regulation Policy Statement (“Policy Statement” – covered in our blog post here). This announcement comes following the Government’s commitments, in the Spring Budget 2023, to build an expert taskforce to develop the UK’s capabilities in AI foundation models and produce guidance on the relationship between intellectual property law and generative AI (for more details of these initiatives, see here).
In its White Paper, the UK Government confirms that, unlike the EU, it does not plan to adopt new legislation to regulate AI, nor will it create a new regulator for AI (for further details on the EU’s proposed AI regulation see our blog posts here and here). Instead, the UK would require existing regulators, including the UK Information Commissioner’s Office (“ICO”), to take responsibility for the establishment, promotion, and oversight of responsible AI in their respective sectors. Regulators’ activities would be reinforced by the establishment of new support and oversight functions within central Government. This approach is already beginning to play out in certain regulated areas in the UK. For example, in October 2022, the Bank of England and Financial Conduct Authority (“FCA”) jointly released a Discussion Paper on Artificial Intelligence and Machine Learning considering how AI in financial services should be regulated and, in March 2023, the ICO updated its Guidance on AI and Data Protection.
The EU’s AI Act Proposal is continuing to make its way through the ordinary legislative procedure. In December 2022, the Council published its sixth and final compromise text (see our previous blog post), and over the last few months, the European Parliament has been negotiating its own amendments to the AI Act Proposal. The European Parliament is expected to finalize its position in the coming weeks, before entering into trilogue negotiations with the Commission and the Council, which could begin as early as April 2023. The AI Act is expected to be adopted before the end of 2023, during the Spanish presidency of the Council, and ahead of the European elections in 2024.
During negotiations between the Council and the European Parliament, we can expect further changes to the Commission’s AI Act proposal, in an attempt to iron out any differences and agree on a final version of the Act. Below, we outline the key amendments proposed by the European Parliament in the course of its negotiations with the Council.
On February 28, 2023, the European Data Protection Board (“EDPB”) released its non-binding opinion on the European Commission’s draft adequacy decision on the EU-U.S. Data Privacy Framework (“DPF”). The adequacy decision, once formally adopted, will establish a new legal basis by which organizations in the EU (as well as the three EEA states of Iceland, Liechtenstein, and Norway) may lawfully transfer personal data to the U.S., provided that the recipient in the U.S. certifies to and abides by the terms of the DPF (see our previous blog post here).
The Commission sought the EDPB’s opinion pursuant to Article 70(1)(s) of the GDPR. The EDPB welcomes the fact that elements of the DPF represent a substantial improvement over the Privacy Shield, which was annulled by the EU Court of Justice (“CJEU”) in Schrems II (see our previous blog post here). Nonetheless, the EDPB notes some concerns and seeks clarification on certain aspects of the DPF from the Commission. For example, the EDPB welcomes the establishment of a specific mechanism by which non-U.S. persons may seek redress for certain U.S. government surveillance of their personal data, but calls on the Commission to closely monitor the implementation of this mechanism in practice.
2023 is set to be an important year for developments in AI regulation and policy in the EU. At the end of last year, on December 6, 2022, the Council of the EU (the “Council”) adopted its general approach and compromise text on the proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “AI Act”), bringing the AI Act one step closer to being adopted. The European Parliament is currently developing its own position on the AI Act which is expected to be finalized by March 2023. Following this, the Council, Parliament and European Commission (“Commission”) will enter into trilogue discussions to finalize the Act. Once adopted, it will be directly applicable across all EU Member States and its obligations are likely to apply three years after the AI Act’s entry into force (according to the Council’s compromise text).
In 2022, the Commission also put forward new liability rules for AI systems via the proposed AI Liability Directive (“AILD”) and updates to the Product Liability Directive (“PLD”). The AILD establishes rules for non-contractual, fault-based civil claims involving AI systems. Specifically, the proposal establishes rules that would govern the preservation and disclosure of evidence in cases involving high-risk AI, as well as rules on the burden of proof and corresponding rebuttable presumptions. Meanwhile, the revised PLD harmonizes rules that apply to no-fault liability claims brought by persons who suffer physical injury or damage to property caused by defective products. Software, including AI systems, is explicitly named as a “product” under the proposal, meaning that an injured person can claim compensation for damage caused by AI (see our previous blog post for further details on the proposed AILD and PLD). Both pieces of legislation will be reviewed, and potentially amended, by the Council and the European Parliament in 2023.
On December 13, 2022, the European Commission released its draft adequacy decision on the EU-U.S. Data Privacy Framework (“EU-U.S. DPF”), which, once formally adopted, would recognize that the United States ensures an adequate level of protection for personal data transferred from the EU to organizations certified under the EU-U.S. DPF. The draft decision follows the issuance of Executive Order 14086 on Enhancing Safeguards for U.S. Signals Intelligence Activities (“EO 14086”) by President Biden on October 7, 2022 (see our previous blog post here), and the political agreement reached between the EU and the U.S. in March 2022 (see our previous blog post here).
As many had expected, the draft adequacy decision assesses the limitations and safeguards relating to the collection and subsequent use of personal data transferred to controllers and processors in the United States by U.S. public authorities. In particular, the draft decision assesses whether the conditions under which the U.S. government may access data transferred to the United States fulfill the “essential equivalence” test pursuant to Article 45(1) of the GDPR, as interpreted by the Court of Justice of the European Union (“CJEU”) in Schrems II (see our previous blog post here).
On October 13, 2022, the European Data Protection Supervisor (“EDPS”) released its Opinion 20/2022 on a Recommendation issued by the European Commission in August 2022 calling for a Council Decision authorising the opening of negotiations on behalf of the European Union for a Council of Europe convention on artificial intelligence, human rights, democracy and the…