Agentic AI: the ICO’s early thoughts on the data protection implications

By Marcus Evans (UK) & Rosie Nance on January 12, 2026

The ICO has kicked off 2026 by sharing its early thoughts on the data protection implications of agentic AI in its ICO tech futures: Agentic AI report. The report considers the novel data protection risks presented by agentic AI, how adoption could affect those risks, and what that means for the ICO’s work and priorities. It signals potential challenges and the UK GDPR compliance obligations that continue to apply, but does not set out detailed or prescriptive thoughts on mitigations at this stage.

The ICO also looks to encourage opportunities for innovation within agentic AI that support data protection and information rights.

For the purposes of this report, the ICO uses the terms:

  • agent: software or a system that can carry out processes or tasks with varying levels of sophistication and automation;
  • agentic AI: where large language models (LLMs) and other types of foundation model are integrated (‘scaffolded’) with other tools, including databases, memory, computer operating systems and ways of interacting with the world; and
  • agentic system: any computing system that makes use of this agentic capability. The agentic nature of a foundation model can vary significantly depending on which tools it is scaffolded with.

Data protection and privacy risks

The ICO considers the specific data protection and privacy risks agentic AI could pose:

  • human responsibility and controllership – organisations remain responsible for their use of agentic AI systems and complying with their obligations;
  • potential for increased use of automated decision-making (ADM), and the need to comply with legal obligations, including UK GDPR obligations where automated decisions have legal or similarly significant effects;
  • purposes for agentic processing of personal information being set too broadly to allow for open-ended tasks and general-purpose agents;
  • agentic systems processing personal data beyond what is necessary to achieve instructions or aims;
  • potential unintended use or inference of special category data;
  • increased complexity impacting transparency and the ease with which people can exercise their information rights;
  • new threats to cyber security resulting from the nature of agentic AI; and
  • the concentration of personal information needed to facilitate personal assistant agents.

Agentic AI’s opportunities arise from being able to connect systems, tools, and data, and being adaptable to a range of tasks. However, these same characteristics could present challenges from a data protection perspective. For example, organisations might be tempted to set purposes too broadly or grant unfettered access to data and systems. The agentic AI system might also misinterpret human instructions, or act in unexpected ways even while complying with them. These issues would raise concerns around transparency, fairness, purpose limitation, and data minimisation.

Similarly, the complexity of data flows may make it challenging to identify or amend data about a particular individual, and hence difficult to comply with individual rights requests. Hallucinations or other inaccurate information could ‘cascade’ across tools, databases, and other agents, making it harder to comply with the accuracy principle. New attack surfaces also present new cybersecurity risks.

Choices about which data and tools a system can access, and which governance and control measures to put in place, will be key to managing and mitigating these risks.

Innovation opportunities

The ICO notes that agentic AI could present solutions for data protection compliance as well as creating challenges. It would like to identify, encourage, and support opportunities for innovation within agentic AI that support data protection and information rights. It notes that Data Protection Officers and governance teams are likely to face an evolving and challenging role as organisations experiment with agents. It suggests that organisations may need a standalone system to monitor agents’ logs, interpret them, and intervene if necessary (a minimal sketch of that idea follows below). Agentic AI could even lead to ‘virtual employees’ supporting the human DPO.
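To make that suggestion concrete, the sketch below illustrates one way such a standalone monitor might work: a watcher reads an agent’s action log, interprets each entry against a simple policy, and intervenes by pausing the agent when a rule is breached. This is a minimal illustration only, written under our own assumptions; the names (AgentLogEntry, breaches_policy, pause_agent) and the example policy rule are hypothetical and do not come from the ICO report or any particular framework.

```python
# Hypothetical sketch of a standalone monitor for an agentic AI system:
# read the agent's action log, interpret each entry against a policy,
# and intervene (pause the agent) when an entry breaches that policy.
# All names and the policy rule are illustrative assumptions.

from dataclasses import dataclass

# Example policy: any access to special category data is flagged for
# human review, since it requires a specific lawful condition.
SENSITIVE_FIELDS = {"health", "ethnicity", "religion"}

@dataclass
class AgentLogEntry:
    agent_id: str
    action: str            # e.g. "read_record", "send_email"
    data_fields: set[str]  # personal data fields the action touched

def breaches_policy(entry: AgentLogEntry) -> bool:
    # True if the action touched any special category data field.
    return bool(entry.data_fields & SENSITIVE_FIELDS)

def pause_agent(agent_id: str) -> None:
    # Placeholder for a real intervention, e.g. revoking a credential
    # or stopping the agent's worker process pending human review.
    print(f"Agent {agent_id} paused pending human review.")

def monitor(log: list[AgentLogEntry]) -> None:
    # Walk the log and intervene on every policy breach.
    for entry in log:
        if breaches_policy(entry):
            pause_agent(entry.agent_id)

if __name__ == "__main__":
    monitor([AgentLogEntry("hr-agent-1", "read_record", {"name", "health"})])
```

In practice such a monitor would sit outside the agent itself, so that the system doing the checking is independent of the system being checked; the point of the sketch is simply the log-interpret-intervene loop the ICO describes, not any particular implementation.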

How agentic AI could be adopted and the impact on risks

The ICO considers various scenarios around adoption, which will determine how it regulates. These scenarios are likely to be of interest to organisations as indicators of regulatory focus, as well as providing insight into potential risks. The ICO flags that high adoption is not inevitable. Similarly, there is a wide spectrum of capabilities we might see in practice. At one end, we might see AI agents that are not much more sophisticated than current chatbots and ADM tools. At the other, we might see highly capable agents that can handle more complex problems, act with more autonomy, and operate in a wider range of contexts.

The ICO sets out four possible scenarios:

  1. low capability, low adoption – scarce, simple agents;
  2. low capability, high adoption – just good enough to be everywhere;
  3. high capability, low adoption – agents in waiting; and
  4. high capability, high adoption – ubiquitous agents.

The two scenarios dealing with high adoption, scenarios two and four, are likely to be of greatest interest to organisations.

Under scenario two (low capability, high adoption – just good enough to be everywhere), the ICO emphasises the potential for harms around failures of agentic AI or ill-considered deployment. This could lead to misinterpreted tasks, superficial task handling, or failures on edge cases.

In scenario four (high capability, high adoption – ubiquitous agents), harms could arise from agents working as intended, for example, accessing large amounts of personal and special category data, and loss of privacy where agents search for and collate information from multiple sources.

Our take

The ICO’s analysis is perhaps the most comprehensive regulatory commentary to date on this complex topic. The ICO has flagged the areas that will need attention when organisations are building and deploying agents, and given a flavour of the challenges that may arise depending on how agents are adopted. Detailed guidance on specific mitigations was, understandably, beyond the scope of the current report.

In terms of next steps, the ICO:

  • invites further engagement from stakeholders to contribute to its thinking on agentic AI;
  • is updating its guidance on ADM and profiling, with an interim update planned for early 2026; and
  • is developing a statutory code on AI and ADM, with implications for agentic AI.

In the meantime, building and deploying agents will require careful consideration of the data protection implications in context, as well as the wider legal implications.

Marcus Evans (UK)

Marcus is a communications, media and technology lawyer based in London. He focuses on data privacy and IT services.

