Artificial Intelligence (AI)

The “market” for AI contracting terms continues to evolve, and whilst there is no standardised approach (as much will depend on the use cases, technical features and commercial terms), a number of attempts have been made to put forward contracting models. One of the latest comes from the EU’s Community of Practice on Public Procurement of AI, which published an updated version of its non-binding EU AI Model Contractual Clauses (“MCC-AI”) on March 5, 2025. The MCC-AI are template contractual clauses intended to be used by public organizations that procure AI systems developed by external suppliers.  An initial draft had been published in September 2023.  This latest version has been updated to align with the EU AI Act, which entered into force on August 1, 2024 but whose provisions apply on a staggered timeline.  Two templates are available: one for public procurement of “high-risk” AI systems, and another for non-high-risk AI systems. A commentary, which provides guidance on how to use the MCC-AI, is also available.

On March 14, 2025, the Cyberspace Administration of China (“CAC”) released the final Measures for Labeling Artificial Intelligence-Generated Content and the mandatory national standard GB 45438-2025 Cybersecurity Technology – Labeling Method for Content Generated by Artificial Intelligence (collectively, the “Labeling Rules”).

On November 8, 2024, the UK’s communications regulator, the Office of Communications (“Ofcom”), published an open letter to online service providers operating in the UK regarding the Online Safety Act (“OSA”) and generative AI (the “Open Letter”).  In the Open Letter, Ofcom reminds online service providers that generative AI tools, such as chatbots and search assistants, may fall within the scope of regulated services under the OSA.  More recently, Ofcom also published several pieces of guidance (some of which are under consultation) that include further commentary on how the OSA applies to generative AI services.

On February 20, 2025, the European Commission’s AI Office held a webinar explaining the AI literacy obligation under Article 4 of the EU’s AI Act.  This obligation started to apply on February 2, 2025.  At this webinar, the Commission highlighted the recently published repository of AI literacy practices.  This repository compiles the practices that some AI Pact companies have adopted to ensure a sufficient level of AI literacy in their workforce.

On February 7, 2025, the OECD launched a voluntary framework for companies to report on their efforts to promote safe, secure and trustworthy AI.  This global reporting framework is intended to monitor and support the application of the International Code of Conduct for Organisations Developing Advanced AI Systems delivered by the 2023 G7 Hiroshima AI Process (“HAIP Code of Conduct”).  Organizations can choose to comply with the HAIP Code of Conduct and participate in the HAIP reporting framework on a voluntary basis.  This reporting framework will allow participating organizations that comply with the HAIP Code of Conduct to showcase the efforts they have made towards ensuring responsible AI practices – in a way that is standardized and comparable across companies.

On October 16, 2024, the New York Department of Financial Services (“NYDFS”) issued an industry letter (the “Guidance”) highlighting the cybersecurity risks arising from the use of artificial intelligence (“AI”) and providing strategies to address these risks.  While the Guidance “does not impose any new requirements,” it clarifies how Covered Entities should address AI-related risks as part of NYDFS’s landmark cybersecurity regulation, codified at 23 NYCRR Part 500 (“Cybersecurity Regulation”).  The Cybersecurity Regulation, as revised in November 2023, requires Covered Entities to implement certain detailed cybersecurity controls, including governance and board oversight requirements.  Covered Entities subject to the Cybersecurity Regulation should pay close attention to the new Guidance not only if they are using or planning to use AI, but also if they could be subject to any of the AI-related risks or attacks described below.

On 12 July 2024, EU lawmakers published the EU Artificial Intelligence Act (“AI Act”), a first-of-its-kind regulation aiming to harmonise rules on AI models and systems across the EU. The AI Act prohibits certain AI practices, and sets out regulations on “high-risk” AI systems, certain AI systems that pose transparency risks, and general-purpose AI (GPAI) models.

On May 17, 2024, the Council of Europe adopted the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (the “Convention”).  The Convention represents the first international treaty on AI that will be legally binding on the signatories.  The Convention will be open for signature on September 5, 2024. 

The Convention was drafted by representatives from the 46 Council of Europe member states, the European Union and 11 non-member states (Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America, and Uruguay).  The Convention is not directly applicable to businesses – it requires the signatories (the “CoE signatories”) to implement laws or other legal measures to give it effect.  The Convention represents an international consensus on the key aspects of AI legislation that are likely to emerge among the CoE signatories.

On May 20, 2024, a proposal for a law on artificial intelligence (“AI”) was laid before the Italian Senate.

The proposed law sets out (1) general principles for the development and use of AI systems and models; (2) sectorial provisions, particularly in the healthcare sector and for scientific research for healthcare; (3) rules on the national strategy on AI and governance, including designating the national competent authorities in accordance with the EU AI Act; and (4) amendments to copyright law. 

We provide below an overview of the proposal’s key provisions.

On March 28, 2024, the White House Office of Management and Budget (OMB) released guidance on governance and risk management for federal agency use of artificial intelligence (AI).  The guidance was issued in furtherance of last fall’s White House AI Executive Order, which established goals to promote the safe, secure, and trustworthy use and development of AI systems.