Jadzia Pierce

Jadzia Pierce advises clients developing and deploying technology on a range of regulatory matters, including the intersection of AI governance and data protection. Jadzia draws on her experience in senior in-house leadership roles and extensive, hands-on engagement with regulators worldwide. Prior to rejoining Covington in 2026, Jadzia served as Global Data Protection Officer at Microsoft, where she oversaw and advised on the company’s GDPR/UK GDPR program and acted as a primary point of contact for supervisory authorities on matters including AI, children’s data, advertising, and data subject rights.

Jadzia previously was Director of Microsoft’s Global Privacy Policy function and served as Associate General Counsel for Cybersecurity at McKinsey & Company. She began her career at Covington, advising Fortune 100 companies on privacy, cybersecurity, incident preparedness and response, investigations, and data-driven transactions.

At Covington, Jadzia helps clients operationalize defensible, scalable approaches to AI-enabled products and services, aligning privacy and security obligations with rapidly evolving regulatory frameworks across jurisdictions, with a particular focus on anticipating enforcement trends and navigating inter-regulator dynamics.

On March 25, 2026, the UK’s Office of Communications (“Ofcom”) and the Information Commissioner’s Office (“ICO”) published a joint statement setting out their common expectations for age assurance on online services (“Joint Statement”). The Joint Statement is aimed at services likely to be accessed by children that fall within the scope of the Online Safety Act 2023 (“OSA”) and UK data protection legislation, and is designed to help providers comply with both their online safety and data protection obligations when deploying age assurance.

The Joint Statement arrives alongside a broader push from both regulators—including Ofcom’s recent call to action directed at major tech firms, an open letter from the ICO urging platforms to strengthen their age checks, and several enforcement actions by both regulators.

Continue Reading Ofcom and ICO Issue Joint Statement on Age Assurance

On 18 March 2026, the European Parliament’s Committee on the Internal Market and Consumer Protection (“IMCO”) and the Committee on Civil Liberties, Justice and Home Affairs (“LIBE”) adopted their joint negotiating position on the European Commission’s proposed Digital Omnibus on AI (which we previously analysed here).

Continue Reading MEPs Adopt Joint Position on Proposed Digital Omnibus on AI

On March 2, 2026, the UK Department for Science, Innovation and Technology (“DSIT”) launched its consultation, titled “Growing up in the online world: a national conversation”. The consultation is open until May 26, 2026, after which the government will publish a summary of responses and its proposed approach. DSIT has indicated that it intends to move quickly on the consultation’s findings, drawing on newly granted powers that allow for accelerated implementation of online safety measures.

The consultation seeks views on a wide range of potential measures to strengthen children’s safety and wellbeing online, including more robust age‑assurance mechanisms, a statutory minimum age for social media, raising the UK’s age of digital consent, restrictions on certain features (such as livestreaming and disappearing messages), and new obligations for AI chatbots and generative‑AI services.

DSIT’s proposals could significantly expand regulatory expectations beyond the Online Safety Act 2023 (“OSA”)—including potential age‑based access limits (including differing safeguards as between teens and younger children), feature‑level restrictions, and enhanced duties for AI‑enabled services. Early engagement will be important to ensure that the government takes account of the views of affected service providers and understands the operational and technical implications of the measures proposed.

Continue Reading UK Government Launches Consultation on Children’s Online Experiences, Including New Obligations for AI

In February 2026, the Spanish data protection authority (Agencia Española de Protección de Datos, “AEPD”) published guidance on data protection issues related to the use of AI agents. The guidance follows an earlier, similar analysis by the UK Information Commissioner’s Office, which we discussed in a prior blog post.

Continue Reading Spanish Supervisory Authority Issues Detailed Guidance on Agentic AI and GDPR Compliance

On February 19, 2026, the UK Court of Appeal handed down its decision in DSG Retail Limited v The Information Commissioner [2026] EWCA Civ 140. The Court ruled that a controller’s data security duty applies to all personal data for which it acts as controller, irrespective of whether the information would constitute personal data in the hands of a third party (in this case, an attacker). Note that the case concerns events before the GDPR came into force, so the legal context is provided by the UK Data Protection Act 1998 (“DPA 1998”), although the Court did take into account more recent jurisprudence, including CJEU case law.

The case adds useful colour to ongoing debates surrounding the definition of “personal data.” The Court of Appeal confirmed that a controller’s duty to implement appropriate measures to protect personal data applies to data that is “personal” from the perspective of the controller, even if a third-party attacker could not identify individuals from the exfiltrated dataset. This dovetails with the clarification in EDPS v SRB that whether data is “personal” can depend on the context, while a controller’s obligations (such as transparency) must be assessed from the controller’s perspective at the relevant time (which, for the transparency principle, is at the time of collection of the data). (For more information on EDPS v SRB, see our prior post here.)

Continue Reading UK Court of Appeal Rules on the Concept of Personal Data in the Context of Data Security

On February 18, 2026, the European Data Protection Board (“EDPB”) published its Report on Stakeholder Event on Anonymisation and Pseudonymisation of 12 December 2025 (the “Report”). The Report summarises feedback from a remote stakeholder event convened to inform the EDPB’s ongoing work on Guidelines 01/2025 on Pseudonymisation (version for public consultation available here) and forthcoming guidance on anonymisation. The event gathered input from 115 participants spanning industry, NGOs, academia, law firms, and public sector bodies.

The objective of the Report is to capture stakeholder insights on how the General Data Protection Regulation (“GDPR”) applies to anonymisation and pseudonymisation, particularly following the Court of Justice of the European Union’s (“CJEU”) judgment in EDPS v SRB (C‑413/23 P). (See our previous blog post here.)

Continue Reading EDPB Publishes Report on Stakeholder Event on Anonymisation and Pseudonymisation

On 3 February 2026, the second International AI Safety Report (the “Report”) was published—providing a comprehensive, science-based assessment of the capabilities and risks of general-purpose AI (“GPAI”). The Report touts itself as the largest global collaboration on AI safety to date—led by Turing Award winner Yoshua Bengio, backed by an Expert Advisory Panel with nominees from more than 30 countries and international organizations, and authored by over 100 AI experts.

Continue Reading International AI Safety Report 2026 Examines AI Capabilities, Risks, and Safeguards

AI agents have arrived. Although the technology is not new, agents are rapidly becoming more sophisticated—capable of operating with greater autonomy, executing multi-step tasks, and interacting with other agents in ways that were largely theoretical just a few years ago. Organizations are already deploying agentic AI across software development, workflow automation, customer service, and e-commerce, with more ambitious applications on the horizon. As these systems grow in capability and prevalence, a pressing question has emerged: can existing legal frameworks—generally designed with human decision-makers in mind—be applied coherently to machines that operate with significant independence?

In January 2026, as part of its Tech Futures series, the UK Information Commissioner’s Office (“ICO”) published a report setting out its early thinking on the data protection implications of agentic AI. The report explicitly states that it is not intended to constitute “guidance” or “formal regulatory expectations.” Nevertheless, it provides meaningful insight into the ICO’s emerging view of agentic AI and its approach to applying data protection obligations to this context—insight that may foreshadow the regulator’s direction of travel.

The full report is lengthy and worth the read. This blog focuses on the data protection and privacy risks identified by the ICO, with the aim of helping product and legal teams anticipate potential regulatory issues early in the development process.

Continue Reading ICO Shares Early Views on Agentic AI & Data Protection