In February 2026, the Spanish data protection authority (Agencia Española de Protección de Datos, “AEPD”) published guidance on data protection issues related to the use of AI agents. The guidance follows an earlier, similar analysis by the UK Information Commissioner’s Office, which we discussed in a prior blog post.
Helpfully, the AEPD’s guidance maps key GDPR obligations to agentic AI architectures, taking into account common characteristics of AI agents—such as autonomy, environmental perception, action-taking, proactivity, planning and reasoning, and memory and adaptability—and the various ways in which agentic systems may operate. It also sets out several mitigation measures to address the risks the report highlights.
This post summarizes a few of the key takeaways for organizations using or considering agentic AI.
What Is an AI Agent?
The AEPD describes an AI agent as a system that “acts appropriately according to their circumstances and objectives, is flexible in the face of changing environments and goals, learns from experience and makes appropriate decisions given their perceptual and computational limitations”. Its defining characteristic is operational autonomy: an agent can plan and adapt actions independently in pursuit of a goal, interacting with internal data stores and external services with limited human intervention.
The guidance illustrates this with a practical example of an AI agent automatically organizing a business trip: when a trip appears in an employee’s calendar, the agent books transport and accommodation, gathers relevant information such as weather or exchange rates, and sends the employee a complete travel plan.
Who’s the Controller? Who’s the Processor?
The guidance notes that from a data protection perspective, AI agents may carry out operations on personal data. By design, they can autonomously access data, combine information from different sources, store context in memory, and generate outputs or trigger actions. Where those operations relate to an identified or identifiable natural person, they fall within the GDPR’s broad concept of “processing”.
From a legal perspective, however, this does not mean that the AI agent itself is responsible for the processing. AI agents are treated as a technical means through which processing is carried out, not as autonomous legal actors. Autonomy at the technical level does not alter the legal qualification of the processing or the allocation of responsibilities and liabilities under the GDPR as between controllers, joint controllers, and processors.
The key distinction lies, according to the AEPD, between execution and responsibility. While an AI agent may autonomously perform data‑handling operations in practice, the processing remains legally attributable to the controller (or processor) that deploys the system and determines its purposes and essential means. As the AEPD emphasizes, technological innovation does not, by itself, disrupt the application of existing data protection concepts.
This clarification underpins the guidance’s broader analysis: although agentic AI may change how processing is carried out, it does not displace the GDPR framework that determines who remains responsible for that processing.
The guidance also addresses when actions taken by AI agents may amount to automated decision‑making under Article 22 GDPR, emphasizing that this depends on the effects of the decision and the degree of meaningful human intervention, rather than on the mere use of autonomous technology.
How Do AI Agents Use External Services?
The AEPD observes that AI agents often connect to third‑party tools, APIs, databases, or online platforms to carry out their tasks. This makes them powerful, but it also extends the processing chain and can bring additional actors into the mix.
The AEPD says controllers should check: (i) whether personal data are sent to third parties; (ii) how reliable and traceable the external sources are; and (iii) whether contracts, governance, and technical controls keep these interactions GDPR‑compliant. In practice, this may mean updating processor agreements, onward‑transfer terms, and technical/organizational measures, especially where agents pick tools or sources on their own.
Why Is “Memory” a Compliance Risk?
Agentic AI can keep data in different layers of memory—short‑term context, long‑term stores, and technical logs—and each raises its own data protection issues.
The AEPD endorses clear rules on what the agent may store, why, and for how long. Retaining data “just in case” or to “optimize performance” can conflict with the GDPR’s purpose limitation and data minimization principles. If one agent serves several processing activities, there is a risk of purpose drift, so logical and technical separation of memories becomes important.
Data‑subject rights (access, rectification, or erasure) can also extend to these memories and logs. And while logging helps with traceability and audits, excessive logs can create risks of their own, such as over‑retention of data or intrusive monitoring.
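For technical teams implementing these principles, the separation and retention rules described above can be sketched in code. The following is a minimal, hypothetical illustration (the class names, fields, and example data are our own assumptions, not drawn from the guidance): each processing activity gets its own memory store with an explicit retention period, so data gathered for one purpose cannot silently drift into another or accumulate indefinitely.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class MemoryEntry:
    """A single remembered item, timestamped so retention can be enforced."""
    content: str
    stored_at: datetime

@dataclass
class PurposeBoundMemory:
    """A memory store tied to exactly one processing purpose.

    Hypothetical sketch only: illustrates logical separation of agent
    memories per purpose and time-bound retention, as the AEPD guidance
    recommends in principle.
    """
    purpose: str          # the single processing activity this store serves
    retention: timedelta  # maximum storage period for entries
    entries: list = field(default_factory=list)

    def remember(self, content: str) -> None:
        self.entries.append(MemoryEntry(content, datetime.now(timezone.utc)))

    def recall(self) -> list:
        # Purge anything past the retention period before returning results,
        # so expired data never feeds back into the agent's reasoning.
        cutoff = datetime.now(timezone.utc) - self.retention
        self.entries = [e for e in self.entries if e.stored_at >= cutoff]
        return [e.content for e in self.entries]

# One store per processing activity, rather than one shared memory,
# limits the "purpose drift" risk the guidance describes.
travel_memory = PurposeBoundMemory(purpose="trip-booking",
                                   retention=timedelta(days=30))
travel_memory.remember("Employee prefers aisle seats")
```

A real deployment would, of course, need more than this (audit logging, support for erasure requests, encryption at rest), but the structure illustrates the core idea: retention and purpose limits enforced by design rather than by policy alone.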
What Should Organizations Do Now?
At 71 pages, the AEPD guidance is one of the most comprehensive assessments of the data protection implications of agentic AI to date. For organizations deploying or considering agentic AI, the guidance points to a few practical priorities:
- Clear accountability for AI‑enabled processing;
- A solid understanding of data flows (including the use of external tools and services);
- Well‑defined rules for agent memory and retention; and
- The early application of data protection by design and by default concepts.
Depending on the context and risk profile of the processing, the guidance also highlights the need to reassess existing risk analyses and, where applicable, update or conduct a data protection impact assessment.
As EU supervisory authorities continue to engage with increasingly autonomous AI systems, the guidance signals that greater technical autonomy does not reduce legal responsibility. Organizations will be expected to demonstrate effective governance and accountability over agentic AI processing. This message is reinforced by a recent warning from the Dutch Data Protection Authority, which in February 2026 cautioned that highly autonomous AI agents with broad system access can introduce serious security and data protection risks. The agency emphasized that organizations deploying such systems remain fully accountable under the GDPR for mitigating those risks.
* * *
The Covington team continues to monitor regulatory developments relating to AI and emerging technologies, and regularly advises leading technology companies on complex regulatory and compliance issues in the EU and other major markets. If you have questions about AI regulation, agentic AI, or related technology regulatory matters, we would be pleased to assist.