AI agents have arrived. Although the technology is not new, agents are rapidly becoming more sophisticated—capable of operating with greater autonomy, executing multi-step tasks, and interacting with other agents in ways that were largely theoretical just a few years ago. Organizations are already deploying agentic AI across software development, workflow automation, customer service, and e-commerce, with more ambitious applications on the horizon. As these systems grow in capability and prevalence, a pressing question has emerged: can existing legal frameworks—generally designed with human decision-makers in mind—be applied coherently to machines that operate with significant independence?

In January 2026, as part of its Tech Futures series, the UK Information Commissioner’s Office (“ICO”) published a report setting out its early thinking on the data protection implications of agentic AI. The report explicitly states that it is not intended to constitute “guidance” or “formal regulatory expectations.” Nevertheless, it provides meaningful insight into the ICO’s emerging view of agentic AI and its approach to applying data protection obligations to this context—insight that may foreshadow the regulator’s direction of travel.

The full report is lengthy and worth the read. This blog focuses on the data protection and privacy risks identified by the ICO, with the aim of helping product and legal teams anticipate potential regulatory issues early in the development process.

Link to I.                   Key Data Protection Issues I.                   Key Data Protection Issues

At the outset, the ICO’s report emphasizes that agentic AI can both exacerbate existing data protection issues and introduce new ones—particularly as human oversight becomes more difficult when agents operate with greater autonomy in less predictable environments. Despite this increasing independence, the report makes clear that organizations remain fully responsible for ensuring personal information is used appropriately.

The ICO also stresses that placing governance responsibility on end users is unlikely to be workable in all cases. As a result, the burden may fall on suppliers to implement robust controls prior to deployment and to ensure that agentic AI systems are fit for their intended purposes.

Link to A.                 Automated Decision-Making A.                 Automated Decision-Making

The report notes that the drive to automate complex tasks may embed automated decision-making (ADM) into agentic AI operations. When such decisions have legal or similarly significant effects on individuals, enhanced obligations under UK data protection law are triggered.

Takeaways: The ICO recommends that organizations assess when their agentic AI system may make or contribute to a decision affecting individuals. When the impact could be “legal or similarly significant,” organizations should (i) clearly inform affected individuals about the system’s use; (ii) enable individuals to contest decisions; and (iii) allow for meaningful human intervention. Further guidance is expected from the ICO in the form of a code of practice on AI and ADM, with an interim update on ADM and profiling anticipated early this year.

Link to B.                 Purpose Limitation and Data Minimization B.                 Purpose Limitation and Data Minimization

Art. 5(1)(c) of the UK GDPR requires personal information to be “adequate, relevant, and limited to what is necessary” for the specified purposes of processing. The report recognizes that what’s “necessary” becomes harder to ascertain when the scope of an agent’s activities is uncertain. For versatile agentic systems, use cases may seem nearly endless, creating pressure to define purposes broadly. The ICO flags this as a potential compliance risk.

Takeaways: The ICO advises organizations to resist drafting expansive purpose statements that attempt to cover every conceivable use. Instead, it recommends assessing and defining purposes at each processing stage—which, the regulator points out, may help scope activities, support compliance assessments, and document such compliance. On data minimization, the ICO offers several principles to bear in mind: avoid processing personal information “just because” it may be useful someday; enable users to select which tools and databases the AI system can access; and consider additional safeguards such as requiring human approval before accessing personal information, data masking, observability techniques, and transparency notices.

Link to C.                 Accuracy and Rapid Generation of Personal Information C.                 Accuracy and Rapid Generation of Personal Information

The UK GDPR requires personal information to be accurate (and promptly rectified if it is not). The ICO notes, however, that agentic systems built on probabilistic models rather than logical reasoning may be prone to hallucinations. The report also highlights that agentic AI’s ability to infer and generate new personal information at scale may compound accuracy challenges.

Takeaways: The ICO underscores that the “significant and growing quantities” of personal information processed by agentic AI remain fully subject to the UK GDPR’s data protection obligations. With respect to accuracy, the ICO notes that different contexts may require different accuracy thresholds depending on the risk of harm. In addition, although techniques such as chain of thought reasoning and retrieval augmented generation (RAG) can enhance accuracy, the ICO cautions that they do not change the fact that LLMs generate text based on patterns—so hallucinations may still occur. Notably, the ICO has already provided additional guidance on the data protection implications of hallucinations in its response to its call for views on generative AI.

Link to D.                Special Category Data D.                Special Category Data

The report flags that agentic systems, particularly when pursuing open-ended goals, may encounter or infer special category data in unexpected ways, even when the system’s purpose does not directly involve processing special category data.

Takeaways: The ICO emphasizes that processing special category data requires a valid lawful basis under Article 9 UK GDPR and that individuals must be informed when their special category data may be used or inferred. The report warns that explicit consent (a commonly relied upon basis) may be difficult in the agentic AI context unless individuals have a genuine choice (e.g., the system can be used without special category data if the user chooses). When obtaining a valid Article 9 basis (such as consent) is not feasible, the ICO recommends considering technical measures to restrict the system’s ability to infer or use special category data.

Link to E.                 Transparency E.                 Transparency

The report acknowledges that the complexity of agentic AI systems and their data flows may make it difficult to explain to individuals how and why their personal information may be processed. This challenge may be exacerbated as agents increasingly communicate with other agents in ways that are not directly observable to humans.

Takeaways: The report identifies several features of agentic AI that can hinder organizations’ ability to understand and clearly articulate how personal data is processed. Despite these difficulties, the ICO reiterates that transparency requirements still apply. In particular, the report notes that organizations should clearly identify themselves as the supplier, how and why they use people’s personal data, and the data rights available to individuals. The report also stresses that organizations must consider how they will meet their transparency obligations before data processing occurs and, where appropriate, conduct data protection impact assessments (DPIAs).

Link to F.                 Individual Rights and Fairness F.                 Individual Rights and Fairness

The report notes that opaque data flows and multi‑agent interactions can make it difficult to honor data subject rights—such as access and rectification—because locating and correcting personal data across interconnected components may be challenging. The ICO also cautions that agents that learn from their environment may drift from their original scope, potentially processing personal information in unexpected ways and raising concerns under the UK GDPR’s fairness principle.

Takeaways: The ICO stresses that organisations must embed data protection by design from the outset—building mechanisms for data subject rights compliance directly into agentic systems. Without having robust technical and organizational measures in place, the complexity of agentic AI may make it harder to respond to rights requests or detect unfair processing.

Link to G.                Accountability and the Role of the DPO G.                Accountability and the Role of the DPO

Technical approaches exist to review what an agent is doing in real time or retrospectively, but the report notes that it remains unclear whether organizations can monitor agents effectively at scale. The ICO also flags that accountability challenges may extend to Data Protection Officers (DPOs), who are tasked with monitoring compliance and advising on DPIAs. In particular, ad hoc experimentation by employees with agentic systems can complicate DPO oversight, as unstructured deployments may lead to unanticipated processing of personal information.

Takeaways: The ICO suggests that strong governance structures and clearly defined parameters for employee experimentation with agents may help mitigate some of the risks. Interestingly, the report also points to emerging opportunities for agentic AI to support oversight functions—for example, through “DPO agents” that can assist human staff by scanning for high-risk activities, assisting with compliance tasks, or detecting security issues.

Link to H.                Security and the Concentration of Personal Information H.                Security and the Concentration of Personal Information

The report explains that agentic AI systems may introduce novel attack surfaces. For example, malicious actors may attempt to distort an agent’s goals, manipulate its reasoning, compromise supply chains, or poison data in the agent’s memory. The ICO also warns that risks of surveillance and data breaches may increase when tools are designed to concentrate large volumes of personal information, such as personal assistant-style agents with access to communications, calendars, accounts, and credentials.

Takeaways: The ICO’s analysis suggests that organizations should consider threat models tailored to the unique characteristics of agentic architectures. The report notes that potential taxonomies of threats and mitigations are under development, including the Open Web Application Security Project’s threats and mitigations list, which highlights new attack vectors that agentic AI might introduce.

Link to II.                Looking Ahead II.                Looking Ahead

The ICO’s report signals that agentic AI is an area of active regulatory focus. As part of its AI and biometrics strategy, the regulator has committed to monitoring developments, engaging with industry through workshops to deepen its understanding, and offering support to organizations navigating these issues.

Photo of Jadzia Pierce Jadzia Pierce

Jadzia Pierce advises clients developing and deploying technology on a range of regulatory matters, including the intersection of AI governance and data protection. Jadzia draws on her experience in senior in house leadership roles and extensive, hands on engagement with regulators worldwide. Prior…

Jadzia Pierce advises clients developing and deploying technology on a range of regulatory matters, including the intersection of AI governance and data protection. Jadzia draws on her experience in senior in house leadership roles and extensive, hands on engagement with regulators worldwide. Prior to rejoining Covington in 2026, Jadzia served as Global Data Protection Officer at Microsoft, where she oversaw and advised on the company’s GDPR/UK GDPR program and acted as a primary point of contact for supervisory authorities on matters including AI, children’s data, advertising, and data subject rights.

Jadzia previously was Director of Microsoft’s Global Privacy Policy function and served as Associate General Counsel for Cybersecurity at McKinsey & Company. She began her career at Covington, advising Fortune 100 companies on privacy, cybersecurity, incident preparedness and response, investigations, and data driven transactions.

At Covington, Jadzia helps clients operationalize defensible, scalable approaches to AI enabled products and services, aligning privacy and security obligations with rapidly evolving regulatory frameworks across jurisdictions—with a particular focus on anticipating enforcement trends and navigating inter regulator dynamics.