Marianna Drake

Marianna Drake counsels leading multinational companies on some of their most complex regulatory, policy and compliance-related issues, including data privacy and AI regulation. She focuses her practice on compliance with UK, EU and global privacy frameworks, and new policy proposals and regulations relating to AI and data. She also advises clients on matters relating to children’s privacy, online safety and consumer protection and product safety laws.

Her practice includes defending organizations in cross-border, contentious investigations and regulatory enforcement in the UK and EU Member States. Marianna also routinely partners with clients on the design of new products and services, drafting and negotiating privacy terms, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of AI technologies.

Marianna’s pro bono work includes providing data protection advice to UK-based human rights charities, and supporting a non-profit organization in conducting legal research for strategic litigation.

On 4 May 2023, the UK Competition and Markets Authority (“CMA”) announced that it is launching a review into AI foundation models and their potential implications for the UK competition and consumer protection regime. The CMA’s review is part of the UK’s wider approach to AI regulation, which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s strategy, including its recent AI White Paper, see our blog post here). The UK Information Commissioner’s Office (“ICO”) has also recently published guidance for businesses on best practices for data protection-compliant AI (see our post here for more details).

On 29 March 2023, the UK Information Commissioner’s Office (“ICO”) published updated Guidance on AI and data protection (the “Guidance”) following “requests from UK industry to clarify requirements for fairness in AI”. AI has been a strategic priority for the ICO for several years. In 2020, the ICO published its first set of guidance on AI (as discussed in our blog post here), which it complemented in 2022 with supplementary recommendations on Explaining Decisions Made with AI and an AI and Data Protection risk toolkit. The updated Guidance forms part of the UK’s wider efforts to adopt a “pro-innovation” approach to AI regulation, which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s approach to AI regulation, see our blog post here).

The updated Guidance covers the ICO’s view of best practice for data protection-compliant AI, as well as how the ICO interprets data protection law in the context of AI systems that process personal data. The Guidance has been restructured in line with the UK GDPR’s data protection principles, and features new content, including guidance on fairness, transparency, lawfulness and accountability when using AI systems.

On 29 March 2023, the UK Government published a White Paper entitled “A pro-innovation approach to AI regulation” (“White Paper”). The White Paper elaborates on the approach to AI set out by the Government in its 2022 AI Governance and Regulation Policy Statement (“Policy Statement” – covered in our blog post here). This announcement comes following the Government’s commitments, in the Spring Budget 2023, to build an expert taskforce to develop the UK’s capabilities in AI foundation models and produce guidance on the relationship between intellectual property law and generative AI (for more details of these initiatives, see here).

In its White Paper, the UK Government confirms that, unlike the EU, it does not plan to adopt new legislation to regulate AI, nor will it create a new regulator for AI (for further details on the EU’s proposed AI regulation see our blog posts here and here). Instead, the UK would require existing regulators, including the UK Information Commissioner’s Office (“ICO”), to take responsibility for the establishment, promotion, and oversight of responsible AI in their respective sectors. Regulators’ activities would be reinforced by the establishment of new support and oversight functions within central Government. This approach is already beginning to play out in certain regulated areas in the UK. For example, in October 2022, the Bank of England and Financial Conduct Authority (“FCA”) jointly released a Discussion Paper on Artificial Intelligence and Machine Learning considering how AI in financial services should be regulated and, in March 2023, the ICO updated its Guidance on AI and Data Protection.

The EU’s AI Act Proposal is continuing to make its way through the ordinary legislative procedure.  In December 2022, the Council published its sixth and final compromise text (see our previous blog post), and over the last few months, the European Parliament has been negotiating its own amendments to the AI Act Proposal.  The European Parliament is expected to finalize its position in the upcoming weeks, before entering into trilogue negotiations with the Commission and the Council, which could begin as early as April 2023.  The AI Act is expected to be adopted before the end of 2023, during the Spanish presidency of the Council, and ahead of the European elections in 2024. 

During negotiations between the Council and the European Parliament, we can expect further changes to the Commission’s AI Act proposal, in an attempt to iron out any differences and agree on a final version of the Act.  Below, we outline the key amendments proposed by the European Parliament in the course of its negotiations with the Council.

2023 is set to be an important year for developments in AI regulation and policy in the EU. At the end of last year, on December 6, 2022, the Council of the EU (the “Council”) adopted its general approach and compromise text on the proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the “AI Act”), bringing the AI Act one step closer to being adopted. The European Parliament is currently developing its own position on the AI Act, which is expected to be finalized by March 2023. Following this, the Council, Parliament and European Commission (“Commission”) will enter into trilogue discussions to finalize the Act. Once adopted, it will be directly applicable across all EU Member States, and its obligations are likely to apply three years after the AI Act’s entry into force (according to the Council’s compromise text).

In 2022, the Commission also put forward new liability rules for AI systems via the proposed AI Liability Directive (“AILD”) and updates to the Product Liability Directive (“PLD”). The AILD establishes rules for non-contractual, fault-based civil claims involving AI systems. Specifically, the proposal establishes rules that would govern the preservation and disclosure of evidence in cases involving high-risk AI, as well as rules on the burden of proof and corresponding rebuttable presumptions. Meanwhile, the revised PLD harmonizes rules that apply to no-fault liability claims brought by persons who suffer physical injury or damage to property caused by defective products. Software, including AI systems, is explicitly named as a “product” under the proposal, meaning that an injured person can claim compensation for damage caused by AI (see our previous blog post for further details on the proposed AILD and PLD). Both pieces of legislation will be reviewed, and potentially amended, by the Council and the European Parliament in 2023.

The UK Government recently published its AI Governance and Regulation: Policy Statement (the “AI Statement”) setting out its proposed approach to regulating Artificial Intelligence (“AI”) in the UK. The AI Statement was published alongside the draft Data Protection and Digital Information Bill (see our blog post here for further details on the Bill) and is

In the Queen’s Speech on 10 May 2022, the UK Government set out its legislative programme for the months ahead. This includes: reforms to UK data protection laws (no details yet); confirmation that the government will strengthen cybersecurity obligations for connected products and make it easier for telecoms providers to improve the UK’s digital infrastructure; and new rules to enable the use of self-driving cars on public roads. In addition, the government confirmed its plans to move forward with the Online Safety Bill. As part of the government’s broader agenda to “level up” the UK and provide a post-Brexit economic dividend, many of the legislative initiatives referenced in the Queen’s Speech are presented as seeking to encourage greater use of data and technology to support innovation and enable growth.

We summarize below the key digital policy announcements in the Queen’s Speech and how they fit into wider developments in the UK’s regulatory landscape.

On 6 October 2021, the European Parliament (“EP”) voted in favor of a resolution banning the use of facial recognition technology (“FRT”) by law enforcement in public spaces. The resolution forms part of a non-legislative report on the use of artificial intelligence (“AI”) by the police and judicial authorities in criminal matters (“AI Report”) published by the EP’s Committee on Civil Liberties, Justice and Home Affairs (“LIBE”) in July 2021. The AI Report will now be sent to the European Commission, which has three months to either (i) submit, or indicate it will submit, a legislative proposal on the use of AI by the police and judicial authorities as set out in the AI Report; or (ii) if it chooses not to submit a proposal, explain why.

On 22 September 2021, the UK Government published its 10-year strategy on artificial intelligence (“AI”; the “UK AI Strategy”).

The UK AI Strategy has three main pillars: (1) investing and planning for the long-term requirements of the UK’s AI ecosystem; (2) supporting the transition to an AI-enabled economy across all sectors and regions of the UK; and (3) ensuring that the UK gets the national and international governance of AI technologies “right”.

The approach to AI regulation set out in the UK AI Strategy is largely pro-innovation, in line with the UK Government’s Plan for Digital Regulation published in July 2021.

On January 13, 2021, the Advocate General (“AG”), Michal Bobek, of the Court of Justice of the European Union (“CJEU”) issued his Opinion in Case C-645/19 Facebook Ireland Limited, Facebook Inc., Facebook Belgium BVBA v. the Belgian Data Protection Authority (“Belgian DPA”).  The AG determined that the one-stop shop mechanism under the EU’s General Data Protection Regulation (“GDPR”) prevents supervisory authorities, who are not the lead supervisory authority (“LSA”) of a controller or processor, from bringing proceedings before their national court, except in limited and exceptional cases specifically provided for by the GDPR.  The case will now move to the CJEU for a final judgment.