Artificial Intelligence (AI)

As agentic AI systems move from research labs to enterprise workflows, regulators worldwide are grappling with how to address the potential risks these systems may pose (as discussed in prior blog posts here and here).  In January 2026, Singapore’s Infocomm Media Development Authority (“IMDA”) launched a non-binding Model AI Governance Framework for Agentic AI (“Framework”), just a few months after the Cyber Security Agency released a discussion paper titled “Securing Agentic AI” (“Discussion Paper”).

Together, these documents provide organizations with a structured, operational roadmap to consider when navigating some of the potential security and governance challenges posed by agentic AI.  This blog post highlights some of their key points.

Continue Reading Singapore Issues Governance and Security Guidance for Agentic AI

On April 20, 2026, the Spanish Data Protection Agency (AEPD) published new guidance on how to comply with the GDPR when using AI‑powered voice transcription tools. The guidance builds on earlier AEPD guidance on this topic from January 2026. This blog post sets out the key takeaways of both guidance documents, which are available only in Spanish.

The AEPD’s guidance confirms a risk‑based approach to AI‑powered voice transcription. Organizations using these tools should not treat transcription as a purely technical feature, but as a processing activity that requires continuous governance, clear transparency, and proactive safeguards. Given the widespread and growing use of transcription tools across business functions, this guidance is likely to be relevant well beyond Spain.

Continue Reading Spain’s Supervisory Authority Issues New Guidance on AI‑Based Voice Transcription

U.S. state lawmakers have introduced more than 40 bills across at least 24 states to regulate personalized algorithmic pricing thus far in 2026, already outpacing the number of personalized algorithmic pricing bills introduced in all of 2025.  While their definitions and scope vary, the 2026 bills broadly refer to “personalized

Continue Reading State Lawmakers Introduce New Wave of Personalized Algorithmic Pricing Bills

In February 2026, the Spanish data protection authority (Agencia Española de Protección de Datos, “AEPD”) published guidance on data protection issues related to the use of AI agents. The guidance follows an earlier, similar analysis by the UK Information Commissioner’s Office, which we discussed in a prior blog

Continue Reading Spanish Supervisory Authority Issues Detailed Guidance on Agentic AI and GDPR Compliance

On February 10, 2026, federal district court Judge Jed S. Rakoff ruled from the bench in the Southern District of New York that the attorney-client privilege and the work product doctrine did not protect legal strategy materials that a criminal defendant generated using a generative AI tool, when he used

Continue Reading AI and Legal Privilege: Key Takeaways from US v. Heppner

On 3 February 2026, the second International AI Safety Report (the “Report”) was published—providing a comprehensive, science-based assessment of the capabilities and risks of general-purpose AI (“GPAI”). The Report describes itself as the largest global collaboration on AI safety to date—led by Turing Award winner Yoshua Bengio, backed by an Expert Advisory Panel with nominees from more than 30 countries and international organizations, and authored by over 100 AI experts.

Continue Reading International AI Safety Report 2026 Examines AI Capabilities, Risks, and Safeguards

AI agents have arrived. Although the technology is not new, agents are rapidly becoming more sophisticated—capable of operating with greater autonomy, executing multi-step tasks, and interacting with other agents in ways that were largely theoretical just a few years ago. Organizations are already deploying agentic AI across software development, workflow automation, customer service, and e-commerce, with more ambitious applications on the horizon. As these systems grow in capability and prevalence, a pressing question has emerged: can existing legal frameworks—generally designed with human decision-makers in mind—be applied coherently to machines that operate with significant independence?

In January 2026, as part of its Tech Futures series, the UK Information Commissioner’s Office (“ICO”) published a report setting out its early thinking on the data protection implications of agentic AI. The report explicitly states that it is not intended to constitute “guidance” or “formal regulatory expectations.” Nevertheless, it provides meaningful insight into the ICO’s emerging view of agentic AI and its approach to applying data protection obligations to this context—insight that may foreshadow the regulator’s direction of travel.

The full report is lengthy and worth the read. This blog focuses on the data protection and privacy risks identified by the ICO, with the aim of helping product and legal teams anticipate potential regulatory issues early in the development process.

Continue Reading ICO Shares Early Views on Agentic AI & Data Protection

On December 16, 2025, the U.S. National Institute of Standards and Technology (“NIST”) published a preliminary draft of the Cybersecurity Framework Profile for Artificial Intelligence (“Cyber AI Profile” or “Profile”).  According to the draft, the Cyber AI Profile is intended to “provide guidelines for managing cybersecurity risk related to AI

Continue Reading NIST Publishes Preliminary Draft of Cybersecurity Framework Profile for Artificial Intelligence for Public Comment

The European Commission (“Commission”) recently launched two stakeholder consultations under the EU AI Act. The first (see here), closing on 9 January 2026, relates to the copyright-related obligations for General Purpose AI (“GPAI”) providers under the AI Act and GPAI Code of Practice. The second (see here)

Continue Reading European Commission Launches Consultations on the EU AI Act’s Copyright Provisions and AI Regulatory Sandboxes