Artificial Intelligence (AI)

On March 2, 2026, the UK Department for Science, Innovation and Technology (“DSIT”) launched its consultation, titled “Growing up in the online world: a national conversation”. The consultation is open until May 26, 2026, after which the government will publish a summary of responses and its proposed approach. DSIT has indicated that it intends to move quickly on the consultation’s findings, drawing on newly granted powers that allow for accelerated implementation of online safety measures.

The consultation seeks views on a wide range of potential measures to strengthen children’s safety and wellbeing online, including more robust age‑assurance mechanisms, a statutory minimum age for social media, an increase in the UK’s age of digital consent, restrictions on certain features (such as livestreaming and disappearing messages), and new obligations for AI chatbots and generative‑AI services.

DSIT’s proposals could significantly expand regulatory expectations beyond the Online Safety Act 2023 (“OSA”), including potential age‑based access limits (with differing safeguards for teens and younger children), feature‑level restrictions, and enhanced duties for AI‑enabled services. Early engagement will be important to ensure that the government takes account of the views of affected service providers and understands the operational and technical implications of the measures proposed.

Continue Reading UK Government Launches Consultation on Children’s Online Experiences, Including New Obligations for AI

On February 3, 2026, the second International AI Safety Report (the “Report”) was published, providing a comprehensive, science-based assessment of the capabilities and risks of general-purpose AI (“GPAI”). The Report is billed as the largest global collaboration on AI safety to date, led by Turing Award winner Yoshua Bengio, backed by an Expert Advisory Panel with nominees from more than 30 countries and international organizations, and authored by over 100 AI experts.

Continue Reading International AI Safety Report 2026 Examines AI Capabilities, Risks, and Safeguards

On October 8, 2025, the European Commission published its Apply AI Strategy (the “Strategy”), a comprehensive policy framework aimed at accelerating the adoption and integration of artificial intelligence (“AI”) across strategic industrial sectors and the public sector in the EU.

The Strategy is structured around three pillars: (1) introducing sectoral flagships to boost AI use in key industrial sectors; (2) addressing cross-cutting challenges; and (3) establishing a single governance mechanism that gives sectoral stakeholders a way to participate in AI policymaking.

The Apply AI Strategy is accompanied by the AI in Science Strategy, and it will be complemented by the Data Union Strategy (which is anticipated later this year).

Continue Reading European Commission Publishes Apply AI Strategy to Accelerate Sectoral AI Adoption Across the EU

On June 6, 2025, President Trump issued an Executive Order (“Sustaining Select Efforts to Strengthen the Nation’s Cybersecurity and Amending Executive Order 13694 and Executive Order 14144”) (the “Order”) that modifies certain initiatives in prior Executive Orders issued by Presidents Obama and Biden and highlights key cybersecurity priorities for the current Administration.  Specifically, the Order (i) directs that existing federal government regulations and policy be revised to focus on securing third-party software supply chains, quantum cryptography, artificial intelligence, and Internet of Things (“IoT”) devices and (ii) more expressly focuses cybersecurity-related sanctions authorities on “foreign” persons.  Although the Order makes certain changes to prior cybersecurity-related Executive Orders issued under previous administrations, it generally leaves the framework of those Executive Orders in place.  Further, it does not appear to modify other cybersecurity Executive Orders.[1]  Accordingly, although the Order highlights some areas where the Trump Administration has taken a different approach from prior administrations, it also signals a more general alignment between administrations on core cybersecurity principles.

Continue Reading White House Issues New Cybersecurity Executive Order

On May 7, 2025, the European Commission published a Q&A on the AI literacy obligation under Article 4 of the AI Act (the “Q&A”).  The Q&A builds upon the Commission’s guidance on AI literacy provided in its webinar in February 2025, covered in our earlier blog here.  Among other things, the Commission clarifies that the AI literacy obligation started to apply on February 2, 2025, but that the national market surveillance authorities tasked with supervising and enforcing the obligation will not begin doing so until August 3, 2026.

Continue Reading European Commission Publishes Q&A on AI Literacy

The “market” for AI contracting terms continues to evolve, and whilst there is no standardised approach (as much will depend on the use cases, technical features and commercial terms), a number of attempts have been made to put forward contracting models.  One of the latest comes from the EU’s Community of Practice on Public Procurement of AI, which published an updated version of its non-binding EU AI Model Contractual Clauses (“MCC-AI”) on March 5, 2025.  The MCC-AI are template contractual clauses intended to be used by public organizations that procure AI systems developed by external suppliers.  An initial draft had been published in September 2023.  This latest version has been updated to align with the EU AI Act, which entered into force on August 1, 2024 but whose obligations apply on a staggered timeline.  Two templates are available: one for public procurement of “high-risk” AI systems, and another for non-high-risk AI systems.  A commentary, which provides guidance on how to use the MCC-AI, is also available.

Continue Reading EU’s Community of Practice Publishes Updated AI Model Contractual Clauses

On March 14, 2025, the Cyberspace Administration of China (“CAC”) released the final Measures for Labeling Artificial Intelligence-Generated Content and the mandatory national standard GB 45438-2025 Cybersecurity Technology – Labeling Method for Content Generated by Artificial Intelligence (collectively, the “Labeling Rules”).  The rules will take effect on September 1, 2025.

Continue Reading China Releases New Labeling Requirements for AI-Generated Content

On November 8, 2024, the UK’s communications regulator, the Office of Communications (“Ofcom”), published an open letter to online service providers operating in the UK regarding the Online Safety Act (“OSA”) and generative AI (the “Open Letter”).  In the Open Letter, Ofcom reminds online service providers that generative AI tools, such as chatbots and search assistants, may fall within the scope of regulated services under the OSA.  More recently, Ofcom has also published several pieces of guidance (some of which are under consultation) that include further commentary on how the OSA applies to generative AI services.

Continue Reading Ofcom Explains How the UK Online Safety Act Will Apply to Generative AI

On February 20, 2025, the European Commission’s AI Office held a webinar explaining the AI literacy obligation under Article 4 of the EU’s AI Act.  This obligation started to apply on February 2, 2025.  At this webinar, the Commission highlighted the recently published repository of AI literacy practices.  This repository compiles the practices that some AI Pact companies have adopted to ensure a sufficient level of AI literacy in their workforce.  

Continue Reading European Commission Provides Guidance on AI Literacy Requirement under the EU AI Act

On February 7, 2025, the OECD launched a voluntary framework for companies to report on their efforts to promote safe, secure and trustworthy AI.  This global reporting framework is intended to monitor and support the application of the International Code of Conduct for Organisations Developing Advanced AI Systems delivered by the 2023 G7 Hiroshima AI Process (“HAIP Code of Conduct”).*  Both compliance with the HAIP Code of Conduct and participation in the HAIP reporting framework are voluntary.  The reporting framework will allow participating organizations that comply with the HAIP Code of Conduct to showcase their efforts towards responsible AI practices in a way that is standardized and comparable across companies.

Continue Reading OECD Launches Voluntary Reporting Framework on AI Risk Management Practices