
Marty Hansen

Martin Hansen has over two decades of experience representing some of the world’s leading innovative companies in the internet, IT, e-commerce, and life sciences sectors on a broad range of regulatory, intellectual property, and competition issues, including issues related to artificial intelligence. Martin has extensive experience advising clients on matters arising under EU, U.S., and UK law, the World Trade Organization agreements, and other trade agreements.

On December 9, 2023, the European Parliament, the Council of the European Union, and the European Commission reached a political agreement on the EU Artificial Intelligence Act (“AI Act”) (see here for the Parliament’s press statement, here for the Council’s statement, and here for the Commission’s statement). Following three days of intense negotiations during the fifth “trilogue” discussion among the EU institutions, negotiators reached agreement on key topics, including: (i) the scope of the AI Act; (ii) AI systems classified as “high-risk” under the Act; and (iii) law enforcement exemptions.

As described in our previous blog posts on the AI Act (see here, here, and here), the Act will establish a comprehensive and horizontal law governing the development, import, deployment and use of AI systems in the EU. In this blog post, we provide a high-level summary of the main points EU legislators appear to have agreed upon, based on the press releases linked above and a further Q&A published by the Commission. However, the text of the political agreement is not yet publicly available. Further, although a political agreement has been reached, a number of details remain to be finalized in follow-up technical working meetings over the coming weeks.

Continue Reading EU Artificial Intelligence Act: Nearing the Finish Line

On October 30, 2023, days ahead of government leaders convening in the UK for an international AI Safety Summit, the White House issued an Executive Order (“EO”) outlining an expansive strategy to support the development and deployment of safe and secure AI technologies (for further details on the EO, see our blog here). As readers will be aware, the European Commission released its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the EU “AI Act”) in 2021 (see our blog here). EU lawmakers are currently negotiating changes to the Commission text, with hopes of finalizing the text by the end of this year, although many of its obligations would only begin to apply to regulated entities in 2026 or later.

The EO and the AI Act stand as two important developments shaping the future of global AI governance and regulation. This blog post discusses key similarities and differences between the two.

Continue Reading From Washington to Brussels: A Comparative Look at the Biden Administration’s Executive Order and the EU’s AI Act

A seemingly technical development could have significant consequences for cloud service providers established outside the EU. The proposed EU Cybersecurity Certification Scheme for Cloud Services (EUCS)—which has been developed by the EU cybersecurity agency ENISA over the past two years and is expected to be adopted by the European Commission as an implementing act in Q1 2024—would, if adopted in its current form, establish certain requirements that could:

  1. exclude non-EU cloud providers from providing certain (“high” level) services to European companies, and
  2. preclude EU cloud customers from accessing the services of these non-EU providers.

Continue Reading Implications of the EU Cybersecurity Scheme for Cloud Services

On June 27, 2023, the European Parliament and the Council of the EU reached a political agreement on the Data Act (see our previous blog post here), after 18 months of negotiations since the tabling of the Commission’s proposal in February 2022 (see our previous blog post here). EU lawmakers bridged their differences on a number of topics, including governance matters, territorial scope, protection of trade secrets, and certain defined terms.

The Data Act is a key component of the European strategy for data. Its objective is to remove barriers to the use and re-use of non-personal data, particularly as it relates to data generated by connected products and related services, including virtual assistants. It also seeks to facilitate the ability of customers to switch between providers of data processing services.

We’ve outlined below some key aspects of the new legislation.

Continue Reading European Parliament and Council Release Agreed Text on Data Act

On 21 June 2023, at the close of a roundtable meeting of the G7 Data Protection and Privacy Authorities, regulators from the United States, France, Germany, Italy, the United Kingdom, Canada and Japan published a joint “Statement on Generative AI” (“Statement”) (available here). In the Statement, the regulators identify a range of data protection-related concerns they believe are raised by generative AI tools, including legal authority for processing personal information, transparency, explainability, and security. The group of regulators also calls on companies to “embed privacy in the design conception, operation, and management” of generative AI tools.

In advance of the G7 meeting, on 15 June 2023, the UK Information Commissioner’s Office (“ICO”) separately announced that it will be “checking” whether businesses have addressed privacy risks before deploying generative AI, and “taking action where there is risk of harm to people through poor use of their data”.

Continue Reading UK and G7 Privacy Authorities Warn of Privacy Risks Raised by Generative AI

On 31 May 2023, at the close of the fourth meeting of the US-EU Trade & Tech Council (“TTC”), Margrethe Vestager – the European Union’s Executive Vice President, responsible for competition and digital strategy – announced that the EU and US are working together to develop a voluntary AI Code of Conduct in advance of formal regulation taking effect. The goal, according to Vestager, is to develop non-binding international standards on risk audits, transparency and other requirements for companies developing AI systems. The AI Code of Conduct, once finalized, would be put before G7 leaders as a joint transatlantic proposal, and companies would be encouraged to voluntarily sign up.

Continue Reading EU and US Lawmakers Agree to Draft AI Code of Conduct

On 4 May 2023, the UK Competition and Markets Authority (“CMA”) announced it is launching a review into AI foundation models and their potential implications for the UK competition and consumer protection regime. The CMA’s review is part of the UK’s wider approach to AI regulation, which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s strategy, including its recent AI White Paper, see our blog post here). The UK Information Commissioner’s Office (“ICO”) has also recently published guidance for businesses on best practices for data protection-compliant AI (see our post here for more details).

Continue Reading UK’s Competition and Markets Authority Launches Review into AI Foundation Models

On 29 March 2023, the UK Information Commissioner’s Office (“ICO”) published updated Guidance on AI and data protection (the “Guidance”) following “requests from UK industry to clarify requirements for fairness in AI”. AI has been a strategic priority for the ICO for several years. In 2020, the ICO published its first set of guidance on AI (as discussed in our blog post here) which it complemented with supplementary recommendations on Explaining Decisions Made with AI and an AI and Data Protection risk toolkit in 2022. The updated Guidance forms part of the UK’s wider efforts to adopt a “pro-innovation” approach to AI regulation which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s approach to AI regulation, see our blog post here).

The updated Guidance covers the ICO’s view of best practice for data protection-compliant AI, as well as how the ICO interprets data protection law in the context of AI systems that process personal data. The Guidance has been restructured in line with the UK GDPR’s data protection principles, and features new content, including guidance on fairness, transparency, lawfulness and accountability when using AI systems.

Continue Reading UK ICO Updates Guidance on Artificial Intelligence and Data Protection

On 29 March 2023, the UK Government published a White Paper entitled “A pro-innovation approach to AI regulation” (“White Paper”). The White Paper elaborates on the approach to AI set out by the Government in its 2022 AI Governance and Regulation Policy Statement (“Policy Statement” – covered in our blog post here). This announcement comes following the Government’s commitments, in the Spring Budget 2023, to build an expert taskforce to develop the UK’s capabilities in AI foundation models and produce guidance on the relationship between intellectual property law and generative AI (for more details of these initiatives, see here).

In its White Paper, the UK Government confirms that, unlike the EU, it does not plan to adopt new legislation to regulate AI, nor will it create a new regulator for AI (for further details on the EU’s proposed AI regulation see our blog posts here and here). Instead, the UK would require existing regulators, including the UK Information Commissioner’s Office (“ICO”), to take responsibility for the establishment, promotion, and oversight of responsible AI in their respective sectors. Regulators’ activities would be reinforced by the establishment of new support and oversight functions within central Government. This approach is already beginning to play out in certain regulated areas in the UK. For example, in October 2022, the Bank of England and Financial Conduct Authority (“FCA”) jointly released a Discussion Paper on Artificial Intelligence and Machine Learning considering how AI in financial services should be regulated and, in March 2023, the ICO updated its Guidance on AI and Data Protection.

Continue Reading UK Government Adopts a “Pro-Innovation” Approach to AI Regulation

The EU’s AI Act Proposal is continuing to make its way through the ordinary legislative procedure.  In December 2022, the Council published its sixth and final compromise text (see our previous blog post), and over the last few months, the European Parliament has been negotiating its own amendments to the AI Act Proposal.  The European Parliament is expected to finalize its position in the upcoming weeks, before entering into trilogue negotiations with the Commission and the Council, which could begin as early as April 2023.  The AI Act is expected to be adopted before the end of 2023, during the Spanish presidency of the Council, and ahead of the European elections in 2024. 

During negotiations between the Council and the European Parliament, we can expect further changes to the Commission’s AI Act proposal, in an attempt to iron out any differences and agree on a final version of the Act. Below, we outline the key amendments proposed by the European Parliament in the course of its negotiations with the Council.

Continue Reading A Preview into the European Parliament’s Position on the EU’s AI Act Proposal