Artificial Intelligence (AI)

On 8 October 2025, the European Commission published its Apply AI Strategy (the “Strategy”), a comprehensive policy framework aimed at accelerating the adoption and integration of artificial intelligence (“AI”) across strategic industrial sectors and the public sector in the EU.

The Strategy is structured around three pillars: (1) introducing sectoral flagships to boost AI use in key industrial sectors; (2) addressing cross-cutting challenges; and (3) establishing a single governance mechanism that gives sectoral stakeholders a way to participate in AI policymaking.

The Apply AI Strategy is accompanied by the AI in Science Strategy, and it will be complemented by the Data Union Strategy (which is anticipated later this year).
Continue Reading: European Commission Publishes Apply AI Strategy to Accelerate Sectoral AI Adoption Across the EU

On June 6, 2025, President Trump issued an Executive Order (“Sustaining Select Efforts to Strengthen the Nation’s Cybersecurity and Amending Executive Order 13694 and Executive Order 14144”) (the “Order”) that modifies certain initiatives in prior Executive Orders issued by Presidents Obama and Biden and highlights key cybersecurity priorities for the current Administration.  Specifically, the Order (i) directs that existing federal government regulations and policy be revised to focus on securing third-party software supply chains, quantum cryptography, artificial intelligence, and Internet of Things (“IoT”) devices and (ii) more expressly focuses cybersecurity-related sanctions authorities on “foreign” persons.  Although the Order makes certain changes to prior cybersecurity-related Executive Orders issued under previous administrations, it generally leaves the framework of those Executive Orders in place.  Further, it does not appear to modify other cybersecurity Executive Orders.[1]  To that end, although the Order highlights some areas where the Trump administration has taken a different approach than prior administrations, it also signals a more general alignment between administrations on core cybersecurity principles.
Continue Reading: White House Issues New Cybersecurity Executive Order

On May 7, 2025, the European Commission published a Q&A on the AI literacy obligation under Article 4 of the AI Act (the “Q&A”).  The Q&A builds upon the Commission’s guidance on AI literacy provided in its webinar in February 2025, covered in our earlier blog here.  Among other things, the Commission clarifies that the AI literacy obligation started to apply from February 2, 2025, but that the national market surveillance authorities tasked with supervising and enforcing the obligation will start doing so from August 3, 2026 onwards.
Continue Reading: European Commission Publishes Q&A on AI Literacy

The “market” for AI contracting terms continues to evolve, and whilst there is no standardised approach (as much will depend on the use cases, technical features and commercial terms), a number of attempts have been made to put forward contracting models.  One of the latest comes from the EU’s Community of Practice on Public Procurement of AI, which published an updated version of its non-binding EU AI Model Contractual Clauses (“MCC-AI”) on March 5, 2025.  The MCC-AI are template contractual clauses intended to be used by public organizations that procure AI systems developed by external suppliers.  An initial draft had been published in September 2023.  This latest version has been updated to align with the EU AI Act, which entered into force on August 1, 2024 but whose terms apply gradually in a staggered manner.  Two templates are available: one for public procurement of “high-risk” AI systems, and another for non-high-risk AI systems.  A commentary, which provides guidance on how to use the MCC-AI, is also available.
Continue Reading: EU’s Community of Practice Publishes Updated AI Model Contractual Clauses

On March 14, 2025, the Cyberspace Administration of China (“CAC”) released the final Measures for Labeling Artificial Intelligence-Generated Content and the mandatory national standard GB 45438-2025 Cybersecurity Technology – Labeling Method for Content Generated by Artificial Intelligence (collectively “Labeling Rules”).  The rules will take effect on

Continue Reading: China Releases New Labeling Requirements for AI-Generated Content

On November 8, 2024, the UK’s communications regulator, the Office of Communications (“Ofcom”), published an open letter to online service providers operating in the UK regarding the Online Safety Act (“OSA”) and generative AI (the “Open Letter”).  In the Open Letter, Ofcom reminds online service providers that generative AI tools, such as chatbots and search assistants, may fall within the scope of regulated services under the OSA.  More recently, Ofcom also published several pieces of guidance (some of which are under consultation) that include further commentary on how the OSA applies to generative AI services.
Continue Reading: Ofcom Explains How the UK Online Safety Act Will Apply to Generative AI

On February 20, 2025, the European Commission’s AI Office held a webinar explaining the AI literacy obligation under Article 4 of the EU’s AI Act.  This obligation started to apply on February 2, 2025.  At this webinar, the Commission highlighted the recently published repository of AI literacy practices.  This repository compiles the practices that some AI Pact companies have adopted to ensure a sufficient level of AI literacy in their workforce.
Continue Reading: European Commission Provides Guidance on AI Literacy Requirement under the EU AI Act

On February 7, 2025, the OECD launched a voluntary framework for companies to report on their efforts to promote safe, secure and trustworthy AI.  This global reporting framework is intended to monitor and support the application of the International Code of Conduct for Organisations Developing Advanced AI Systems delivered by the 2023 G7 Hiroshima AI Process (“HAIP Code of Conduct”).  Organizations can choose to comply with the HAIP Code of Conduct and participate in the HAIP reporting framework on a voluntary basis.  This reporting framework will allow participating organizations that comply with the HAIP Code of Conduct to showcase the efforts they have made towards ensuring responsible AI practices – in a way that is standardized and comparable with other companies.
Continue Reading: OECD Launches Voluntary Reporting Framework on AI Risk Management Practices

On October 16, 2024, the New York Department of Financial Services (“NYDFS”) issued an industry letter (the “Guidance”) highlighting the cybersecurity risks arising from the use of artificial intelligence (“AI”) and providing strategies to address these risks.  While the Guidance “does not impose any new requirements,” it clarifies how Covered Entities should address AI-related risks as part of NYDFS’s landmark cybersecurity regulation, codified at 23 NYCRR Part 500 (“Cybersecurity Regulation”).  The Cybersecurity Regulation, as revised in November 2023, requires Covered Entities to implement certain detailed cybersecurity controls, including governance and board oversight requirements.  Covered Entities subject to the Cybersecurity Regulation should pay close attention to the new Guidance not only if they are using or planning to use AI, but also if they could be subject to any of the AI-related risks or attacks described below.
Continue Reading: NYDFS Issues Industry Guidance on Risks Arising from Artificial Intelligence

On 12 July 2024, EU lawmakers published the EU Artificial Intelligence Act (“AI Act”), a first-of-its-kind regulation aiming to harmonise rules on AI models and systems across the EU.  The AI Act prohibits certain AI practices, and sets out regulations on “high-risk” AI systems, certain AI systems that pose transparency risks, and general-purpose AI (“GPAI”) models.
Continue Reading: EU Artificial Intelligence Act Published