Artificial Intelligence (AI) – Guidance for Judicial Office Holders (31 October 2025)
In its introduction, this Guidance note announces that “It updates and replaces the guidance document issued in April 2025”, which shows the speed at which AI is developing. It “sets out key risks and issues associated with using AI and some suggestions for minimising them”. The judiciary has indeed faced problems lately, arising particularly out of “AI hallucinations”: incorrect or misleading results that AI models generate.
Whatever its drawbacks, AI is here to stay and is growing by the second. Lord Justice Birss, Lead Judge for Artificial Intelligence, said:
“The use of AI by the judiciary must be consistent with its overarching obligation to protect the integrity of the administration of justice and uphold the rule of law. I welcome the publication of the latest AI Guidance, which reinforces this principle and the personal responsibility judicial office holders have for all material produced in their name. I encourage all judicial office holders to read the guidance and apply it with care.”
I give a short summary below.
Core Principles for Responsible Use
Judicial users must understand the limitations of AI before using it: public chatbots like ChatGPT or Google Gemini rely on non-authoritative training data, often skewed toward US law, and can yield biased, outdated, or hallucinatory outputs. Strict confidentiality rules prohibit entering non-public information into public AI tools. Any information that a judge puts into a public AI chatbot should be seen as being published to all the world. The guidance reminds us that:
the current publicly available AI chatbots remember every question that we ask them, as well as any other information we put into them. That information is then available to be used to respond to queries from other users. As a result, anything that is typed into it could become publicly known. [please see UPDATE below]
Users should therefore disable chat histories where possible and report breaches via Judicial Office protocols. Accountability demands verifying all AI outputs against primary sources, given the risks of fabricated cases or facts. Judges remain personally responsible for any material produced in their name.
Even if it purports to represent the law of England and Wales, [AI] may not do so. This includes cited source material which might also be hallucinated.
Holders of judicial office are enjoined to use work devices (rather than personal devices) to access AI tools.
Addressing Bias and Security
AI inherits the biases of its training data; judges are therefore required to cross-check outputs against resources such as the Equal Treatment Bench Book. Security measures include using work devices, obtaining HMCTS approvals, and discussing AI use with staff to mitigate risks. Judges must review evidence directly, treating AI as a secondary aid rather than a substitute for judicial reasoning.
Handling AI by Litigants and Lawyers
Courts should anticipate AI in submissions. Lawyers bear verification duties under their professional obligations, although reminders of those duties may be appropriate while the profession adapts to the technology. Unrepresented parties often lack the skills to verify AI output, so judges may ask whether AI was used, what accuracy checks were made, and remind litigants that they remain responsible for what they put before the court. AI-generated text can sometimes be spotted by unfamiliar US-centric citations, American spelling, persuasive yet erroneous passages, or retained prompts like “as an AI language model, I can’t.” Emerging threats like deepfakes and white text heighten forgery concerns, reinforcing the need for judicial oversight. (“White text” consists of hidden prompts or concealed text inserted into a document so as to be visible to the computer or system but not to the human reader. This possibility underscores the importance of judicial office holders’ personal responsibility for anything produced in their name.)
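By way of illustration only (and not part of the guidance), the sketch below shows one way concealed “white text” might be flagged in a Word document. It assumes the python-docx library; the file name submission.docx and the function find_suspect_runs are hypothetical placeholders, and real detection would need to cover many more concealment tricks (tiny fonts, text boxes, metadata, and so on).

```python
# A minimal, hypothetical sketch: flag runs of white-coloured or "hidden" text
# in a .docx file. Assumes the python-docx library is installed; the file name
# is a placeholder. Illustrative only, not an exhaustive detector.
from docx import Document
from docx.shared import RGBColor

WHITE = RGBColor(0xFF, 0xFF, 0xFF)

def find_suspect_runs(path: str) -> list[str]:
    suspects = []
    for para in Document(path).paragraphs:
        for run in para.runs:
            rgb = run.font.color.rgb          # None if no explicit colour is set
            is_white = rgb == WHITE           # white-on-white text
            is_hidden = run.font.hidden is True  # Word's "hidden text" flag
            if run.text.strip() and (is_white or is_hidden):
                suspects.append(run.text)
    return suspects

if __name__ == "__main__":
    for text in find_suspect_runs("submission.docx"):
        print("Possible concealed text:", text)
```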
Uses of AI that are recommended or discouraged
AI is no doubt an indispensable tool for summarising texts (with verification), producing presentation outlines, and administrative tasks like email handling or meeting transcription. But it is a poor means of conducting research, because it may produce new information that cannot be verified. Nor is it to be relied upon for analysis: its reasoning is often unconvincing and relies largely on US and historic law.
Even with the best prompts, the information provided may be inaccurate, incomplete, misleading, or biased. It must be borne in mind that “wrong” answers are not infrequent.
TAR (Technology Assisted Review) remains acceptable in disclosure exercises when responsibly applied.
This guidance balances AI’s efficiencies against judicial imperatives for accuracy, fairness, and public trust, urging proactive risk management without prohibiting tools outright.
UPDATE: Shortly after publishing this post, I received a message from an expert in AI technology advising me that part of this guidance might be inaccurate. According to one AI bot, “Claude”, this advice to judges
“contains a significant technical inaccuracy that could lead to excessive caution or misunderstanding about AI tools.
The inaccurate claim: The statement that “current publicly available AI chatbots remember every question that we ask them” and that “information is then available to be used to respond to queries from other users” is incorrect for major AI chatbots like Claude, ChatGPT, and similar services.
The reality:
- Modern AI chatbots from major providers (Anthropic’s Claude, OpenAI’s ChatGPT, Google’s Gemini) do not retain conversation content to train models or answer other users’ queries by default
- These conversations are isolated and private to each user
- Your input in one conversation does not become available to other users or get fed into responses for others
- The AI doesn’t “remember” your previous conversations when you start a new one (unless you’re in the same conversation thread)
What the guidance gets right:
The warning to treat public AI chatbots “as being published to all the world” is actually reasonable practical advice from a security and confidentiality perspective.”
This postscript demonstrates the very intractable problems thrown up by this latest development in the information age. I cannot check whether Claude is right without turning to another AI chatbot, and how do I know that that one would be more reliable? It is what one Lord Justice of Appeal called a “circulus inextricabilis” – and I will not provide a citation for that quote because, although it is embedded somewhere in my carbon-based memory, I cannot rely on the information on a silicon substrate to match what I remember.
