Artificial Intelligence (AI)

On May 22, 2025, the Cybersecurity and Infrastructure Security Agency (“CISA”), which sits within the Department of Homeland Security (“DHS”), released guidance for AI system operators on managing data security risks.  The associated press release explains that the guidance provides “best practices for system operators to mitigate cyber risks through the artificial intelligence lifecycle, including consideration on securing the data supply chain and protecting data against unauthorized modification by threat actors.”  CISA published the guidance in conjunction with the National Security Agency, the Federal Bureau of Investigation, and cyber agencies from Australia, the United Kingdom, and New Zealand.  The guidance is intended for organizations using AI systems in their operations, including the Defense Industrial Base, National Security System owners, federal agencies, and Critical Infrastructure owners and operators.  It builds on the joint guidance on Deploying AI Systems Securely released by CISA and several other U.S. and foreign agencies in April 2024.

In February 2025, the European Commission published two sets of guidelines to clarify key aspects of the EU Artificial Intelligence Act (“AI Act”): Guidelines on the definition of an AI system and Guidelines on prohibited AI practices. These guidelines address the AI Act obligations that started to apply on February 2, 2025 – which include the definitions section of the AI Act, obligations relating to AI literacy, and prohibitions on certain AI practices.

This article summarizes the key takeaways from the Commission’s guidelines on the definition of AI systems (the “Guidelines”). Please see our blog on the guidelines on prohibited AI practices here, and our blog on AI literacy requirements under the AI Act here.

As noted above, in February 2025 the European Commission published two sets of guidelines to clarify key aspects of the EU Artificial Intelligence Act (“AI Act”): Guidelines on the definition of an AI system and Guidelines on prohibited AI practices. These guidelines address the AI Act obligations that started to apply on February 2, 2025.

This article summarizes the key takeaways from the Commission’s guidelines on prohibited AI practices (the “Guidelines”). Please see our blog on the guidelines on the definition of AI systems here, and our blog on AI literacy requirements under the AI Act here.

On April 3, 2025, the Budapest District Court made a request for a preliminary ruling to the Court of Justice of the European Union (“CJEU”) on the application of EU copyright rules to outputs generated by large language model (“LLM”)-based chatbots, specifically Google’s Gemini (formerly Bard), in response to a user prompt. The case (C-250/25) involves a dispute between Like Company, a Hungarian news publisher, and Google Ireland Ltd.

AI chatbots are transforming how businesses handle consumer inquiries and complaints, offering speed and availability that traditional channels often cannot match.  However, the European Commission’s recent Digital Fairness Act Fitness Check has spotlighted a gap: EU consumers currently lack a cross-sectoral right to demand human contact when interacting with AI chatbots in business-to-consumer settings.  It is still unclear whether and how the European Commission proposes to address this.  The Digital Fairness Act could do so, but the Commission’s proposal is not planned for publication until the third quarter of 2026.  This post highlights key consumer protection considerations for companies deploying AI chatbots in the EU market.

On May 7, 2025, the European Commission published a Q&A on the AI literacy obligation under Article 4 of the AI Act (the “Q&A”).  The Q&A builds upon the Commission’s guidance on AI literacy provided in its webinar in February 2025, covered in our earlier blog here.  Among other things, the Commission clarifies that the AI literacy obligation started to apply from February 2, 2025, but that the national market surveillance authorities tasked with supervising and enforcing the obligation will start doing so from August 3, 2026 onwards.

The “market” for AI contracting terms continues to evolve, and whilst there is no standardised approach (much will depend on the use cases, technical features, and commercial terms), a number of attempts have been made to put forward contracting models. One of the latest comes from the EU’s Community of Practice on Public Procurement of AI, which published an updated version of its non-binding EU AI Model Contractual Clauses (“MCC-AI”) on March 5, 2025. The MCC-AI are template contractual clauses intended for use by public organizations that procure AI systems developed by external suppliers.  An initial draft was published in September 2023.  This latest version has been updated to align with the EU AI Act, which entered into force on August 1, 2024 but whose obligations apply in stages.  Two templates are available: one for public procurement of “high-risk” AI systems, and another for non-high-risk AI systems. A commentary providing guidance on how to use the MCC-AI is also available.

Kenya has released its first National Artificial Intelligence Strategy (2025–2030), a landmark document on the continent that sets out a government-led vision for ethical, inclusive, and innovation-driven AI adoption. Framed as a foundational step in the country’s digital transformation agenda, the strategy articulates policy ambitions that will be of interest to global companies operating in Africa.

On February 4, 2025, the Japanese government announced its intention to position Japan as “the most AI-friendly country in the world”, with a lighter regulatory approach than that of the EU and some other nations.  This statement follows: (i) the Japanese government’s recent submission of an AI bill to Japan’s Parliament, and (ii) the Japanese Personal Information Protection Commission’s (“PPC”) proposals to amend the Japanese Act on the Protection of Personal Information (“APPI”) to facilitate the use of personal data for the development of AI.

On November 8, 2024, the UK’s communications regulator, the Office of Communications (“Ofcom”), published an open letter to online service providers operating in the UK regarding the Online Safety Act (“OSA”) and generative AI (the “Open Letter”).  In the Open Letter, Ofcom reminds online service providers that generative AI tools, such as chatbots and search assistants, may fall within the scope of regulated services under the OSA.  More recently, Ofcom also published several pieces of guidance (some of which are under consultation) that include further commentary on how the OSA applies to generative AI services.