International AI Safety Report 2026 Examines AI Capabilities, Risks, and Safeguards
On 3 February 2026, the second International AI Safety Report (the “Report”) was published, providing a comprehensive, science-based assessment of the capabilities and risks of general-purpose AI (“GPAI”). The Report describes itself as the largest global collaboration on AI safety to date: it was led by Turing Award winner Yoshua Bengio, supported by an Expert Advisory Panel with nominees from more than 30 countries and international organizations, and authored by over 100 AI experts.
Jadzia Pierce
Jadzia Pierce advises clients developing and deploying technology on a range of regulatory matters, including the intersection of AI governance and data protection. Jadzia draws on her experience in senior in-house leadership roles and extensive, hands-on engagement with regulators worldwide. Prior to rejoining Covington in 2026, Jadzia served as Global Data Protection Officer at Microsoft, where she oversaw and advised on the company’s GDPR/UK GDPR program and acted as a primary point of contact for supervisory authorities on matters including AI, children’s data, advertising, and data subject rights.
Jadzia previously was Director of Microsoft’s Global Privacy Policy function and served as Associate General Counsel for Cybersecurity at McKinsey & Company. She began her career at Covington, advising Fortune 100 companies on privacy, cybersecurity, incident preparedness and response, investigations, and data-driven transactions.
At Covington, Jadzia helps clients operationalize defensible, scalable approaches to AI-enabled products and services, aligning privacy and security obligations with rapidly evolving regulatory frameworks across jurisdictions, with a particular focus on anticipating enforcement trends and navigating inter-regulator dynamics.
ICO Shares Early Views on Agentic AI & Data Protection
AI agents have arrived. Although the technology is not new, agents are rapidly becoming more sophisticated—capable of operating with greater autonomy, executing multi-step tasks, and interacting with other agents in ways that were largely theoretical just a few years ago. Organizations are already deploying agentic AI across software development, workflow automation, customer service, and e-commerce, with more ambitious applications on the horizon. As these systems grow in capability and prevalence, a pressing question has emerged: can existing legal frameworks—generally designed with human decision-makers in mind—be applied coherently to machines that operate with significant independence?
In January 2026, as part of its Tech Futures series, the UK Information Commissioner’s Office (“ICO”) published a report setting out its early thinking on the data protection implications of agentic AI. The report explicitly states that it is not intended to constitute “guidance” or “formal regulatory expectations.” Nevertheless, it provides meaningful insight into the ICO’s emerging view of agentic AI and its approach to applying data protection obligations to this context—insight that may foreshadow the regulator’s direction of travel.
The full report is lengthy and worth the read. This blog focuses on the data protection and privacy risks identified by the ICO, with the aim of helping product and legal teams anticipate potential regulatory issues early in the development process.