In February 2026, the Spanish data protection authority (Agencia Española de Protección de Datos, “AEPD”) published guidance on data protection issues related to the use of AI agents. The guidance follows an earlier, similar analysis by the UK Information Commissioner’s Office, which we discussed in a prior blog post.

Helpfully, the AEPD’s guidance maps key GDPR obligations to agentic AI architectures, taking into account common characteristics of AI agents—such as autonomy, environmental perception, action-taking, proactivity, planning and reasoning, and memory and adaptability—and the various ways in which agentic systems may operate. It also sets out several mitigation measures to consider in light of the risks the report highlights.

This post summarizes a few of the key takeaways for organizations using or considering agentic AI.

What Is an AI Agent?

The AEPD describes an AI agent as a system that “acts appropriately according to their circumstances and objectives, is flexible in the face of changing environments and goals, learns from experience and makes appropriate decisions given their perceptual and computational limitations”. Its defining characteristic is operational autonomy: an agent can plan and adapt actions independently in pursuit of a goal, interacting with internal data stores and external services with limited human intervention.

The guidance illustrates this with a practical example of an AI agent automatically organizing a business trip: when a trip appears in an employee’s calendar, the agent books transport and accommodation, gathers relevant information such as weather or exchange rates, and sends the employee a complete travel plan.

Who’s the Controller? Who’s the Processor?

The guidance notes that from a data protection perspective, AI agents may carry out operations on personal data. By design, they can autonomously access data, combine information from different sources, store context in memory, and generate outputs or trigger actions. Where those operations relate to an identified or identifiable natural person, they fall within the GDPR’s broad concept of “processing”.

From a legal perspective, however, this does not mean that the AI agent itself is responsible for the processing. AI agents are treated as a technical means through which processing is carried out, not as autonomous legal actors. Autonomy at the technical level does not alter the legal qualification of the processing or the allocation of responsibilities and liabilities under the GDPR as between controllers, joint controllers, and processors.

The key distinction lies, according to the AEPD, between execution and responsibility. While an AI agent may autonomously perform data‑handling operations in practice, the processing remains legally attributable to the controller (or processor) that deploys the system and determines its purposes and essential means. As the AEPD emphasizes, technological innovation does not, by itself, disrupt the application of existing data protection concepts.

This clarification underpins the guidance’s broader analysis: although agentic AI may change how processing is carried out, it does not displace the GDPR framework that determines who remains responsible for that processing.

The guidance also addresses when actions taken by AI agents may amount to automated decision‑making under Article 22 GDPR, emphasizing that this depends on the effects of the decision and the degree of meaningful human intervention, rather than on the mere use of autonomous technology.

How Do AI Agents Use External Services?

The AEPD observes that AI agents often connect to third‑party tools, APIs, databases, or online platforms to get things done. This makes them powerful, but it also extends the processing chain and can bring more actors into the mix.

The AEPD says controllers should check: (i) whether personal data are sent to third parties; (ii) how reliable and traceable the external sources are; and (iii) whether contracts, governance, and technical controls keep these interactions GDPR‑compliant. In practice, this may mean updating processor agreements, onward‑transfer terms, and technical/organizational measures, especially where agents pick tools or sources on their own.

Why Is “Memory” a Compliance Risk?

Agentic AI can keep data in different layers of memory—short‑term context, long‑term stores, and technical logs—and each raises its own data protection issues.

The AEPD endorses clear rules on what the agent may store, why, and for how long. Keeping lots of data “just in case” or to “optimize performance” can clash with purpose limitation and data minimization principles in the GDPR. If one agent serves several processing activities, there is a risk of purpose drift, so logical/technical separation of memories becomes important.

Data‑subject rights (access, rectification, and erasure) also extend to such memories and logs. And while logging supports traceability and audits, excessive logging can create risks of its own, such as over‑retention of data or intrusive monitoring.

What Should Organizations Do Now?

At 71 pages, the AEPD guidance is one of the most comprehensive assessments of the data protection implications of agentic AI to date. For organizations deploying or considering agentic AI, the guidance points to a few practical priorities:

  • Clear accountability for AI‑enabled processing;
  • A solid understanding of data flows (including the use of external tools and services);
  • Well‑defined rules for agent memory and retention; and
  • The early application of data protection by design and by default concepts.

Depending on the context and risk profile of the processing, the guidance also highlights the need to reassess existing risk analyses and, where applicable, update or conduct a data protection impact assessment.

As EU supervisory authorities continue to engage with increasingly autonomous AI systems, the guidance signals that greater technical autonomy does not reduce legal responsibility. Organizations will be expected to demonstrate effective governance and accountability over agentic AI processing. This message is reinforced by a recent warning from the Dutch Data Protection Authority, which in February 2026 cautioned that highly autonomous AI agents with broad system access can introduce serious security and data protection risks. The agency emphasized that organizations deploying such systems remain fully accountable under the GDPR for mitigating those risks.

*           *           *

The Covington team continues to monitor regulatory developments relating to AI and emerging technologies, and regularly advises leading technology companies on complex regulatory and compliance issues in the EU and other major markets. If you have questions about AI regulation, agentic AI, or related technology regulatory matters, we would be pleased to assist.

Dan Cooper

Daniel Cooper is co-chair of Covington’s Data Privacy and Cyber Security Practice, and advises clients on information technology regulatory and policy issues, particularly data protection, consumer protection, AI, and data security matters. He has over 20 years of experience in the field, representing clients in regulatory proceedings before privacy authorities in Europe and counseling them on their global compliance and government affairs strategies. Dan regularly lectures on the topic, and was instrumental in drafting the privacy standards applied in professional sport.

According to Chambers UK, his “level of expertise is second to none, but it’s also equally paired with a keen understanding of our business and direction.” It was noted that “he is very good at calibrating and helping to gauge risk.”

Dan is qualified to practice law in the United States, the United Kingdom, Ireland and Belgium. He has also been appointed to the advisory and expert boards of privacy NGOs and agencies, such as the IAPP’s European Advisory Board, Privacy International and the European security agency, ENISA.

Jadzia Pierce

Jadzia Pierce advises clients developing and deploying technology on a range of regulatory matters, including the intersection of AI governance and data protection. Jadzia draws on her experience in senior in-house leadership roles and extensive, hands-on engagement with regulators worldwide. Prior to rejoining Covington in 2026, Jadzia served as Global Data Protection Officer at Microsoft, where she oversaw and advised on the company’s GDPR/UK GDPR program and acted as a primary point of contact for supervisory authorities on matters including AI, children’s data, advertising, and data subject rights.

Jadzia previously was Director of Microsoft’s Global Privacy Policy function and served as Associate General Counsel for Cybersecurity at McKinsey & Company. She began her career at Covington, advising Fortune 100 companies on privacy, cybersecurity, incident preparedness and response, investigations, and data-driven transactions.

At Covington, Jadzia helps clients operationalize defensible, scalable approaches to AI-enabled products and services, aligning privacy and security obligations with rapidly evolving regulatory frameworks across jurisdictions—with a particular focus on anticipating enforcement trends and navigating inter-regulator dynamics.

Anna Sophia Oberschelp de Meneses

Anna Sophia Oberschelp de Meneses advises on EU data protection, cybersecurity, and consumer law. Her practice covers the full range of Europe’s digital regulatory framework, including GDPR, ePrivacy, NIS2, the Cyber Resilience Act, the AI Act, the Digital Services Act, the Data Act, the European Health Data Space, and EU consumer protection law, including product safety, product liability, and consumer rights legislation. She focuses on the operational side of compliance — helping clients design policies and processes, draft documentation, and build the internal frameworks needed to meet regulatory requirements in practice.

She also advises on contentious matters, drawing on experience managing investigations before national regulators and proceedings before national courts and the Court of Justice of the European Union. She works closely with Covington’s disputes teams on matters at the intersection of regulatory compliance and litigation.

Virginie de France

Virginie de France is an associate in the Data Privacy and Cybersecurity Practice Group. She advises clients on the full range of EU technology, data protection, and digital regulatory matters. Virginie supports clients with data protection compliance projects, assists with investigations led by national authorities, and acts in litigation. She also has substantial experience helping organizations meet European and national cybersecurity obligations.