Generative AI in UK disclosure – the rules haven’t changed yet, but the baseline is moving

Disclosure is still one of the quickest ways for a commercial dispute to become slow and expensive. Generative AI (GenAI) has landed in that reality. It is no longer a speculative future tool, but something already being tested in review workflows – alongside, and increasingly in combination with, more familiar Technology Assisted Review (TAR) and analytics.

In the UK, the right starting point is PD 57AD and the Disclosure Review Document (DRD): disclosure is meant to be scoped, cooperative and proportionate, with technology forming part of the solution rather than an optional extra. PD 57AD has been in force since 1 October 2022. What we do not yet have is a UK disclosure-specific rulebook for GenAI or large language models. But judges and working groups are plainly engaging with the topic, and 2026 is likely to bring more explicit judicial statements about what responsible, proportionate AI-enabled disclosure looks like in practice.

Where we are today – PD 57AD already points towards tech-enabled disclosure

PD 57AD is designed around early scoping, cooperation and proportionality, with the DRD as the operational centre of gravity.

Two features matter for GenAI:

  • The disclosure framework assumes process discipline – issue-based scoping, early engagement, and documenting methodology in the DRD.
  • “Technology Assisted Review” is defined broadly. PD 57AD defines TAR as “all forms of document review… undertaken or assisted by the use of technology, including but not limited to predictive coding and computer assisted review”.

That means the direction of travel is not a conceptual leap from “manual review” to “AI” – the system already expects parties to use tools where that is proportionate. The live question is what courts will accept as defensible when the “technology” is probabilistic GenAI rather than classic supervised learning or TAR.

A glance abroad – why it matters

UK courts are not operating in a vacuum. Other common law jurisdictions are already moving in two directions at once: (i) normalising AI-enabled workflows as part of discovery / disclosure, and (ii) tightening the guardrails around defensibility.

One closely relevant comparator is Bermuda. In Fourworld Global Opportunities Fund Ltd and others v Enstar Group Limited (Supreme Court of Bermuda, August 2025), the court was confronted with a familiar proportionality argument: one side relied on manual-review assumptions to put the cost and duration of discovery into the realm of the absurd – including an estimate of over 400,000 man-hours at an estimated cost of US$100 million – while the other side responded with expert evidence that modern eDiscovery techniques (including “AI and other forensic software”) could dramatically reduce both. The court rejected manual-review economics as “not realistic” in modern commercial litigation and accepted that appropriate use of available technology and AI resources could materially compress time and cost. That is not binding in England and Wales, but it is the clearest recent example of a common law commercial court treating AI-enabled eDiscovery as part of the proportionality analysis.

In the US, GenAI is beginning to be addressed through the mechanics that actually govern discovery. In EEOC v Tesla, the Court approved the use of AI in a lawyer-led responsiveness review. The discovery protocol proposed by the parties in that case allowed them to use TAR and/or GenAI (or similar analytics) “as a substitute for attorney responsiveness review”, subject to a requirement that, if such tools were used, the parties would attempt in good faith to agree the technology to be deployed and a “statistically sound methodology to determine the recall rate and other measures of the effectiveness of the tool”. The protocol therefore accepted a GenAI role in the discovery process, but only on condition that its effectiveness be exposed to evidence-based validation. Although this is not a precedential court judgment (the Court simply approved the protocol put forward by the parties), the approach is noteworthy.
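To make the validation point concrete, the arithmetic behind a recall estimate is straightforward. The Python sketch below is illustrative only – it is not the Tesla protocol's methodology, and the function names are our own invention – estimating recall from a random validation sample, with a Wilson score interval to express statistical confidence:

import math
from typing import List, Tuple

def wilson_interval(successes: int, trials: int, z: float = 1.96) -> Tuple[float, float]:
    """Wilson score interval for a binomial proportion (95% by default)."""
    if trials == 0:
        return (0.0, 1.0)
    p = successes / trials
    denom = 1 + z ** 2 / trials
    centre = (p + z ** 2 / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials + z ** 2 / (4 * trials ** 2)) / denom
    return (max(0.0, centre - half), min(1.0, centre + half))

def recall_from_sample(sample: List[Tuple[bool, bool]]) -> Tuple[float, Tuple[float, float]]:
    """Estimate tool recall from a random validation sample.
    Each pair is (human_says_relevant, tool_flagged_relevant) for one
    sampled document; recall is the share of relevant documents in the
    sample that the tool flagged."""
    relevant_flags = [tool for human, tool in sample if human]
    if not relevant_flags:
        raise ValueError("no relevant documents in sample - enlarge the sample")
    found = sum(relevant_flags)
    return found / len(relevant_flags), wilson_interval(found, len(relevant_flags))

# Illustration (invented numbers): the tool flagged 180 of the 200 relevant
# documents in a 1,000-document random sample, giving roughly 90% recall.
sample = [(True, True)] * 180 + [(True, False)] * 20 + [(False, False)] * 800
recall, (low, high) = recall_from_sample(sample)
print(f"recall {recall:.2f}, 95% interval [{low:.2f}, {high:.2f}]")

In real matters the statistical rigour lives in the sampling design – simple random versus stratified sampling, elusion testing of the discard pile, adequate sample sizes – rather than in the arithmetic above, which is the easy part.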

Australia, meanwhile, has hardened “integrity controls” first (notably around evidential materials), while still expecting verification duties to be met whenever AI is used. In a 27 June 2025 speech, Justice Jane Needham of the Federal Court of Australia described a fast-moving, patchwork regulatory picture – including NSW’s Practice Note SC Gen 23, which generally restricts GenAI use in affidavits/witness statements and expert reports without leave, alongside broader judicial expectations of disclosure (where required) and verification. The UK will chart its own course, but these developments influence what parties regard as “reasonable safeguards” – and what UK judges may find unsurprising when pressed on process.

The UK judiciary’s GenAI signals – competence, integrity, confidentiality

UK judicial messaging on GenAI is currently sharper on professional responsibility than on disclosure mechanics.

1) The Ayinde / Al-Haroun warning line

In Ayinde v London Borough of Haringey (heard with Al-Haroun v Qatar National Bank), the Divisional Court dealt with filed material that included false citations and made plain that responsibility sits with the legal team. The judgment includes the point that it would be negligent to use AI output without checking it (while noting that the court was not in a position to determine whether AI had in fact been used).

That isn’t a disclosure case – but it is the climate in which disclosure disputes will now be decided. If GenAI is used in review (summaries, clustering, issue coding, privilege triage), the non-delegable duty point lands directly: GenAI may assist the workflow, but it is no substitute for the lawyer as the accountable decision-maker. Getting it wrong can carry real consequences, including costs sanctions and regulatory referrals.

2) Judicial guidance for office holders

The updated Judiciary AI Guidance (published 31 October 2025) reinforces the same themes: protect the integrity of the process, avoid confidentiality missteps, understand limitations, and verify outputs before relying on them.

Again, it isn’t a GenAI disclosure protocol. But it supports a predictable judicial instinct: courts will be open to efficiency, but impatient with anything that risks misinformation, privilege leakage or an absence of human supervision.

It is also, by design, high-level guidance. In places it can feel a little simplistic when set against the practical reality of modern disclosure tooling and the range of deployment models. That is one reason why parties proposing GenAI-enabled review should assume they may need to educate the court (and the other side) on the workflow: what the tool is doing, what it is not doing, and the validation and controls that make the approach defensible.

The soft-law layer – ILTA’s guide is not binding, but it is designed for PD 57AD

The International Legal Technology Association’s (ILTA) “Generative AI in Outgoing Disclosure” guide is now publicly available and being promoted through mainstream UK litigation channels.

It is also notable that the Law Society of England and Wales hosts the guide on its website and presents it as a practical framework for dealing with GenAI, including in the DRD under PD 57AD. That is not a regulatory “endorsement” in the strict sense (and it is not SRA guidance), but it is a meaningful signal about where mainstream practice is coalescing.

Two practical implications follow:

  • It does not “come into force” on its own – it isn’t part of the CPR or a Practice Direction.
  • It is nevertheless the sort of document that can become de facto standard in DRD negotiations: parties start adopting common positions because it reduces friction, and judges get used to seeing (and expecting) certain controls.

In practice, the ILTA guide could become the benchmark for how GenAI is dealt with in the DRD – and it could happen quickly.

The concrete signal: a judge-led review of PD 57AD with TAR/AI explicitly in scope

There is an active review process underway. The Disclosure Review Working Group recently published an online survey seeking views on how disclosure under PD 57AD is operating – and nearly 20% of the survey questions relate to the role of TAR/AI within disclosure. The Working Group is chaired by Butcher J and includes Waksman J, Master Kaye and Professor Rachael Mulheron; the deadline for responses has now passed.

This matters for two reasons:

  1. it frames TAR/AI as part of the mainstream operation of PD 57AD, not a niche “legal tech” conversation (indeed, one survey question asks whether respondents consider that TAR/AI use should be mandatory in cases involving a volume of data above a given threshold); and
  2. it creates a credible pathway for reform that is judge-led and practice-driven – particularly around the DRD, where negotiation friction tends to surface.

Likely direction of travel in 2026

The Working Group review creates a credible route to more positive, disclosure-specific judicial pronouncements on AI-enabled workflows – and given the pace at which the technology is developing, it is hard to see how the rules and guidance can avoid evolving in step. Whether that judicial steer arrives through case management, DRD practice and incremental guidance, or some form of single “big bang” rewrite remains to be seen. Either way, the onus is on parties and their advisers to help get the court up to speed when proposing GenAI-enabled disclosure: what the tool is doing, what it is not doing, and why the approach is safe and proportionate.

Based on what is publicly visible today, three themes look most likely:

  • DRD tweaks and clearer expectations (including supporting notes) to reduce repeated negotiation friction – particularly around technology choices and what “good practice” transparency looks like.
  • Less patience for manual-only burden models in complex matters where modern tooling could reduce cost and time.
  • A tighter focus on defensibility: validation, an audit trail, confidentiality protections and clear human sign-off.

The practical point is therefore that courts are likely to become increasingly clear that proportionate disclosure assumes responsible use of available technology (including AI), while maintaining the core guardrails already signalled elsewhere – verification, confidentiality, transparency and human accountability.

Practical takeaways – what teams should be lining up now

Most disclosure fights will not be won on ideology (“GenAI is risky” vs “GenAI is efficient”). They will be won on whether the proposing party can point to a workflow that is explainable and secure under PD 57AD.

In this context, parties should consider the following:

  • A GenAI clause pack for the DRD – targeted drafting (building on the recommended considerations in the ILTA guide) covering permitted uses, exclusions, confidentiality constraints (including public vs closed tools), and a sensible meet-and-confer / challenge mechanism if the other side wants comfort.
  • Human in the loop – GenAI may drive relevance calls, but it needs active lawyer direction: clear issue framing, prompt/workflow discipline (and development), escalation for edge cases and particular care around privilege.
  • Validation by design – sampling/QC and escalation rules should be part of the plan from day one, not bolted on once the other side asks awkward questions.
  • A defensibility checklist – a short internal record of what was used (e.g. the particular GenAI model, and the individual prompts), who supervised, what QC was done, how exceptions were handled, and what evidential trail exists if challenged later (a minimal illustrative sketch follows this list).
  • Be ready to explain the confidentiality position to your client – assume scrutiny on where data went, how it was stored, whether it could be used for training, and how privilege was protected.
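By way of illustration only, a defensibility record need not be elaborate. The sketch below uses a hypothetical schema – the field names are our own, and neither PD 57AD nor the ILTA guide prescribes any particular format – showing the kind of structured entry a team might keep per review workflow:

from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class GenAIReviewRecord:
    """One entry in an internal defensibility log (hypothetical fields)."""
    matter: str
    model: str                    # the particular GenAI model/version used
    prompt_versions: List[str]    # identifiers for the prompts or workflows as run
    supervising_lawyer: str       # the accountable human decision-maker
    qc_sample_size: int           # documents sampled for quality control
    estimated_recall: float       # headline validation metric; keep the workings too
    exceptions_escalated: int     # edge cases routed to full lawyer review
    notes: str = ""
    logged_on: date = field(default_factory=date.today)

# Example entry (all values invented):
record = GenAIReviewRecord(
    matter="Example v Example",
    model="example-model-2025-06",
    prompt_versions=["responsiveness-v3", "privilege-triage-v1"],
    supervising_lawyer="A. Solicitor",
    qc_sample_size=1000,
    estimated_recall=0.90,
    exceptions_escalated=42,
    notes="Closed deployment; provider contractually barred from training on client data.",
)
print(record)

The point is not the schema but the habit: if each workflow run generates a contemporaneous record along these lines, the evidential trail exists before anyone asks for it.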

The bottom line is simple: PD 57AD already expects serious parties to use technology to achieve proportionality. 2026 is where GenAI stops being a side conversation and becomes part of mainstream disclosure best practice. Parties who are prepared to explain how they have used GenAI, and to back that up with a clear audit trail, will be best placed to capture the efficiency gains without getting bogged down in unnecessary satellite disputes.

Authored by Reuben Vandercruyssen, Lydia Savill, Antonia Croke, and Thomas Evans.
