This week, the ICO published extensive guidance on its expectations for Agentic AI (ICO tech futures: Agentic AI). The UK data protection regulator’s core message is clear: the future success of this technology is rooted in accountability.
Investor expectations around realising commercial benefits from AI deployment are rising. Meanwhile, confidence in regulatory compliance, underpinned by accountability principles, is fundamental to trust; and with trust come commercial opportunities. As data protection lawyers at DLA Piper, we’ve distilled the core takeaways from the ICO’s guidance to flag the data protection innovation hotspots investors should ensure the UK tech sector is prioritising.
By focusing on privacy-centric AI, the ICO explains, innovative technology companies can demonstrate that they meet their legal obligations by building products responsibly from the point of design.
Key points to note are:
- Personal Privacy Management Agents: The ICO is keen to see agents that empower users to manage their own privacy. This includes AI that can interpret complex privacy notices or cookie banners on a user’s behalf, avoiding consent fatigue and building consumer trust that preferences will be respected. In our experience, solutions that streamline user journeys achieve greater adoption, facilitating higher-risk processing whilst preserving consumer trust.
- Automating Compliance Responses: Agents could revolutionise how organisations handle data subject requests (e.g., DSARs), by accurately searching for and compiling relevant information, helping organisations respond more promptly and cost-effectively. Of course, important guardrails will be required to ensure organisations don’t then provide people with hallucinated, incorrect information. Small language models (SLMs) could support a privacy-focused approach, as the ICO recognises that smaller, more specialist training data sets produce more accurate outputs. Even so, human-in-the-loop review remains necessary here, given the regulatory consequences of getting DSARs wrong. As volumes increase, we can help ensure safety is prioritised whilst companies still obtain the benefit of efficiency gains.
- Local Agents and Trusted Computing: There is a strong appetite for agents that process data locally on a user’s device. For example, an agent could scan for vulnerabilities without needing access to a user’s personal information. For multi-agent tools, the ICO outlines opportunities to develop standardised secure communication protocols between agents. This is technically complex, and with the ICO’s new focus on the right to complain, we can help organisations ensure, and verify, that effective mechanisms for redress exist when things go wrong in multi-agent deployments.
Far from posing a roadblock, this section of the ICO’s guidance actively highlights significant opportunities for technology innovators. For companies building the next generation of AI tools, embedding ‘privacy by design’ is no longer just a compliance checkbox: the ICO considers that demonstrably responsible design could become a powerful market differentiator. We work closely with organisations developing privacy-centric AI models and good data governance practices. Reach out if you’d like guidance on whether an organisation is implementing the ICO’s expectations in practice, particularly in relation to data protection impact assessments for Agentic AI / SLM architectures.
You can also explore our Algorithm to Advantage Hub, which offers key insights into Agentic AI and how you can make it work for you.