Legal professionals are increasingly using AI to tidy drafting, summarise documents or test arguments. The Upper Tribunal’s recent decision in UK and R (on the application of Munir) v Secretary of State for the Home Department (AI hallucinations; supervision; Hamid) [2026] UKUT 00081 (IAC) (heard 15 October and 5 November 2025; promulgated 17 November 2025; published 19 February 2026) (‘UKUT 81’) is a timely reminder about professional standards, supervision and, crucially, the protection of confidentiality and privilege. Although the case arose in immigration judicial review, its reasoning is of general application: every practising solicitor who uses AI needs to understand what the tribunal has said.

The case and the Hamid lens
The matter came before the Upper Tribunal (Immigration and Asylum Chamber) in its ‘Hamid’ jurisdiction, under which it examines whether lawyers appearing before it have complied with proper professional standards. In two separate claims, those acting for claimants had used AI, either directly or through junior staff, to help prepare judicial review documents. False case citations appeared in the material put before the court: classic AI ‘hallucinations’.
The tribunal reminded practitioners that ‘any legal practitioner who commences or pursues proceedings in a court or tribunal owes certain obligations to it’ (paragraph 36). The case was ‘principally about supervision and the obligation to ensure that the tribunal is not misled’ (paragraph 37). It did not matter how the phantom authorities had been generated: the qualified lawyer with conduct of the matter was expected to ensure that documents were checked and accurate before filing. The failures were serious enough to justify referral to the regulators, although in one instance the solicitor concerned had already self-referred.
On supervision, the tribunal’s warning was explicit. A solicitor who delegates work ‘remains responsible for the supervision of their work and for ensuring its accuracy’. Supervisors must ensure junior fee-earners ‘are aware of the dangers of using non-specialist AI for legal research and drafting’, and must carry out appropriate checks. A supervisor who allows junior work to reach the court containing hallucinated citations is, the tribunal suggested, likely to be more culpable than the lawyer who fails to check their own AI-assisted drafting (paragraph 58).
Open AI tools and confidentiality
The most striking passage for privilege lawyers is the tribunal’s treatment of ‘open’ AI tools. Drawing a distinction between open systems (such as the freely accessible version of ChatGPT) and closed systems (for example, private instances of Microsoft Copilot), it held that: ‘Uploading confidential documents into an open-source AI tool, such as ChatGPT, is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege, and any such conduct might itself warrant referral to the [regulator] and should, in any event, be referred to the Information Commissioner’s Office.’ (Headnote 4; reasons expanded at paragraph 21).
Strictly speaking, ‘open-source AI’ is used colloquially here: in software terms, ‘open-source’ refers to models whose code is published under licences allowing anyone to inspect, modify and reuse them. In this context, read ‘open-source AI’ as public/consumer AI tools. The judgment also contrasts these with ‘closed-source’ enterprise tools (e.g. Microsoft Copilot) which, the tribunal notes, do not place information in the public domain.
On the tribunal’s analysis, putting a confidential draft advice, investigation report or witness statement into an uncontrolled AI tool is treated as putting it into the public domain. If that is right, the confidentiality limb of privilege falls away and privilege is lost. While this was an observation in a Hamid decision rather than a binding ruling in a privilege dispute, it appears to be the first UK judgment to draw the link so explicitly. Even if future courts nuance that position, the risk is obvious: it is hard to justify using open tools with privileged or sensitive material in any contentious or potentially contentious context.
Crucially, the tribunal was not seeking to ban AI. It explicitly made the point that it was not suggesting that ‘the use of legal AI programmes by properly trained professionals is anything other than a step forward in legal practice’, and described current software as being ‘of enormous benefit in properly focused legal research… [and] large disclosure exercises’ (paragraph 18). The problem is not AI as such, but its careless use with confidential material and without proper supervision.
What this means for everyday practice
The most visible risk is hallucinated authorities in pleadings and skeleton arguments. UKUT 81 makes clear that ‘the AI did it’ is no excuse where the court is misled.
The privilege risk arises earlier in the lifecycle of a matter. In internal investigations, teams increasingly experiment with AI to summarise interviews, sift large document sets or polish reports. If a team member pastes a draft investigation report into a free AI tool ‘to improve the language’, the tribunal’s reasoning suggests that confidentiality may have been compromised and privilege put in doubt, particularly in later regulatory or follow-on civil proceedings.
Multi-party communications are another weak point. In-house lawyers often work in cross-functional teams with colleagues in compliance, HR and communications. If a non-lawyer recipient forwards a privileged email chain to a personal account and then pastes its contents into an open AI system to prepare a management briefing, the confidentiality circle may have been broken well outside any formal process. AI prompt histories and outputs may themselves become disclosable and may sit outside the organisation’s normal IT controls.
The longstanding challenges around mixed business and legal advice for in-house counsel are also heightened. Where it is already arguable whether the dominant purpose of a communication was to obtain legal advice, the involvement of an uncontrolled third-party AI provider makes it even harder to maintain that the communication was confidential between lawyer and client.
Lessons for practitioners
Privilege needs to be considered before AI tools are deployed, not after the event: AI usage policies, platform procurement and privilege strategy should be aligned across the organisation. Firms should adopt a clear ‘no-paste’ rule for privileged or sensitive material in public AI tools, and route any inadvertent use through their incident response process and, where appropriate, to the Information Commissioner’s Office, consistent with the tribunal’s observation. Supervision now has an AI dimension: senior lawyers should assume that juniors are experimenting with these tools, and must give clear instructions on what is permitted, especially for court documents and privileged materials.
UKUT 81 sits alongside the Law Society’s guidance on generative AI, the Academy of Experts’ guidance for expert witnesses and the Civil Justice Council’s consultation on AI in court documents. These updates show that courts expect the profession to harness AI’s benefits while maintaining traditional duties of candour, confidentiality and privilege.
Simmons & Simmons recently published an AI & Legal Advice Privilege Guide and Policy Framework.
Peter Lee is a partner at Simmons & Simmons, where he leads the firm’s AI governance law practice