Disclosure is still one of the quickest ways for a commercial dispute to become slow and expensive. Generative AI (GenAI) has landed in that reality. It is no longer a speculative future tool, but something already being tested in review workflows – alongside, and increasingly in combination with, the more familiar Technology Assisted Review (TAR) and analytics toolkit.
In the UK, the right starting point is Practice Direction 57AD (PD 57AD) and the Disclosure Review Document (DRD): disclosure is meant to be scoped, cooperative and proportionate, with technology forming part of the solution rather than an optional extra. PD 57AD has been in force since 1 October 2022. What we do not yet have is a UK disclosure-specific rulebook for GenAI or large language models. But judges and working groups are plainly engaging with the topic, and 2026 is likely to bring more explicit judicial statements about what responsible, proportionate AI-enabled disclosure looks like in practice.
PD 57AD is designed around early scoping, cooperation and proportionality, with the DRD as the operational centre of gravity.
Two features matter for GenAI: first, the DRD expressly asks parties to identify and describe any analytics or TAR they propose to use in their searches and review; and second, where a substantial review is in prospect, a party that chooses not to use such tools should expect to have to explain why not.
That means the direction of travel is not a conceptual leap from “manual review” to “AI” – the system already expects parties to use tools where that is proportionate. The live question is what courts will accept as defensible when the “technology” is probabilistic GenAI rather than the classic supervised machine learning behind TAR.
UK courts are not operating in a vacuum. Other common law jurisdictions are already moving in two directions at once: (i) normalising AI-enabled workflows as part of discovery / disclosure, and (ii) tightening the guardrails around defensibility.
One closely relevant comparator is Bermuda. In Fourworld Global Opportunities Fund Ltd and others v Enstar Group Limited (Supreme Court of Bermuda, August 2025), the court was confronted with a familiar proportionality argument: one side relied on manual-review assumptions to put the cost and duration of discovery into the realm of the absurd – including an estimate of more than 400,000 man-hours at a cost of around US$100 million – while the other side responded with expert evidence that modern eDiscovery techniques (including “AI and other forensic software”) could dramatically reduce both. The court rejected manual-review economics as “not realistic” in modern commercial litigation and accepted that appropriate use of available technology and AI resources could materially compress time and cost. That is not binding in England and Wales, but it is the clearest recent example of a common law commercial court treating AI-enabled eDiscovery as part of the proportionality analysis.
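The arithmetic behind that finding is worth making explicit. A minimal sketch follows: the 400,000 hours and US$100 million are the figures discussed in the judgment, while the implied hourly rate and every number in the AI-assisted scenario are illustrative assumptions for this sketch, not figures from the case.

```python
# Illustrative arithmetic only: the 400,000 hours and US$100m come from the
# estimates discussed in the judgment; everything else is an assumption
# chosen for this sketch, not a benchmark from the case.

MANUAL_HOURS = 400_000
MANUAL_COST_USD = 100_000_000

# Implied blended review rate baked into the manual-review estimate.
implied_rate = MANUAL_COST_USD / MANUAL_HOURS  # US$250/hour
print(f"Implied blended rate: ${implied_rate:,.0f}/hour")

# Hypothetical AI-assisted scenario: assume analytics and GenAI triage cut
# the eyes-on review population to 15% of the collection, with a further 5%
# of effort for sampling, QC and protocol work (both fractions assumed).
EYES_ON_FRACTION = 0.15
VALIDATION_OVERHEAD = 0.05

assisted_hours = MANUAL_HOURS * (EYES_ON_FRACTION + VALIDATION_OVERHEAD)
assisted_cost = assisted_hours * implied_rate
print(f"Assisted estimate: {assisted_hours:,.0f} hours, ~US${assisted_cost:,.0f}")
```

Even on cautious assumptions about how much review work GenAI and analytics can safely divert from human eyes, the gap is stark – which is precisely the proportionality point the court accepted.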
In the US, GenAI is beginning to be addressed through the mechanics that actually govern discovery. In EEOC v Tesla, the court approved the use of AI in a lawyer-led responsiveness review. The discovery protocol proposed by the parties allowed them to use TAR and/or GenAI (or similar analytics) “as a substitute for attorney responsiveness review”. That was subject to a requirement that, if such tools were to be used, the parties would attempt in good faith to agree the technology to be used and a “statistically sound methodology to determine the recall rate and other measures of the effectiveness of the tool”. The protocol therefore accepted a role for GenAI in the discovery process, but only on condition that its performance be exposed to evidence-based validation. Although the order carries no precedential weight (the court simply approved the protocol put forward by the parties), the approach is noteworthy.
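The protocol does not prescribe what a “statistically sound methodology” looks like, but the conventional approach is to have lawyers hand-code a random validation sample and then report the tool’s recall with a confidence interval. A minimal sketch, in which the sample size, the counts and the choice of a Wilson interval are all assumptions for illustration rather than terms of the Tesla protocol:

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return max(0.0, centre - margin), min(1.0, centre + margin)

# Hypothetical validation exercise: lawyers hand-code a random sample drawn
# from the collection; we then check how many of the truly responsive
# documents in that sample the tool also flagged. All numbers are invented.
responsive_in_sample = 400   # responsive documents found by human coders
caught_by_tool = 372         # of those, the number the tool also flagged

recall = caught_by_tool / responsive_in_sample
lo, hi = wilson_interval(caught_by_tool, responsive_in_sample)
print(f"Point recall: {recall:.1%}; 95% CI: {lo:.1%} to {hi:.1%}")
```

In practice, parties would usually also want to agree the sample design and any target recall threshold up front, so that the validation exercise cannot be re-argued after the review has run.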
Australia, meanwhile, has hardened “integrity controls” first (notably around evidential materials), while still expecting verification duties to be met whenever AI is used. In a 27 June 2025 speech, Justice Jane Needham of the Federal Court of Australia described a fast-moving, patchwork regulatory picture – including NSW’s Practice Note SC Gen 23, which generally restricts the use of GenAI in affidavits, witness statements and expert reports without leave – alongside broader judicial expectations that AI use be disclosed (where required) and its output verified. The UK will chart its own course, but these developments influence what parties regard as “reasonable safeguards” – and what UK judges may find unsurprising when pressed on process.
UK judicial messaging on GenAI is currently sharper on professional responsibility than on disclosure mechanics.
In Ayinde v London Borough of Haringey (heard with Al-Haroun v Qatar National Bank), the Divisional Court dealt with filed material that included false citations and made plain that responsibility sits with the legal team. The judgment makes the point that it would be negligent to rely on AI output without checking it (while noting the court was not in a position to determine whether AI had in fact been used).
That isn’t a disclosure case – but it is the climate in which disclosure disputes will now be decided. If GenAI is used in review (summaries, clustering, issue coding, privilege triage), the non-delegable duty point lands directly: GenAI may assist the workflow, but it is no substitute for the lawyer as the accountable decision-maker. Getting it wrong can carry real consequences, including costs sanctions and regulatory referrals.
The updated Judiciary AI Guidance (published 31 October 2025) reinforces the same themes: protect the integrity of the process, avoid confidentiality missteps, understand limitations, and verify outputs before relying on them.
Again, it isn’t a GenAI disclosure protocol. But it supports a predictable judicial instinct: courts will be open to efficiency, but impatient with anything that risks misinformation, privilege leakage or an absence of human supervision.
It is also, by design, high-level guidance. In places it can feel a little simplistic when set against the practical reality of modern disclosure tooling and the range of deployment models. That is one reason why parties proposing GenAI-enabled review should assume they may need to educate the court (and the other side) on the workflow: what the tool is doing, what it is not doing, and the validation and controls that make the approach defensible.
The International Legal Technology Association (ILTA) guide “Generative AI in Outgoing Disclosure” is now publicly available and is being promoted through mainstream UK litigation channels.
It is also notable that the Law Society of England and Wales hosts the guide on its website and presents it as a practical framework for dealing with GenAI, including in the DRD under PD 57AD. That is not a regulatory “endorsement” in the strict sense (and it is not SRA guidance), but it is a meaningful signal about where mainstream practice is coalescing.
Two practical implications follow: first, parties proposing GenAI-enabled review now have a recognised framework around which to structure their DRD discussions; and second, a party that departs from that framework without explanation may find its approach harder to defend.
In practice, the ILTA guide could become the benchmark for how GenAI is dealt with in the DRD – and it could happen quickly.
There is an active review process underway. The Disclosure Review Working Group recently published an online survey seeking views on how disclosure under PD 57AD is operating – and nearly 20% of the survey questions relate to the role of TAR/AI within disclosure. The Working Group is chaired by Butcher J and includes Waksman J, Master Kaye and Professor Rachael Mulheron; the deadline for responses has now passed.
This matters for two reasons: the weight given to TAR/AI in the survey signals that the role of technology is squarely on the reform agenda, and the responses will help shape what the court comes to treat as mainstream, reasonable practice.
The Working Group review creates a credible route to more positive, disclosure-specific judicial pronouncements on AI-enabled workflows – and, given the pace at which the technology is developing, it is hard to see how the rules and guidance can avoid evolving in step. The onus is therefore on parties and their advisers to get the court up to speed when proposing GenAI-enabled disclosure: what the tool is doing, what it is not doing, and why the approach is safe and proportionate. It remains to be seen whether that judicial steer is delivered through case management, DRD practice and incremental guidance, or some form of single “big bang” rewrite.
Based on what is publicly visible today, three themes look most likely: (i) proportionality arguments built on manual-review economics will get increasingly short shrift; (ii) validation expectations – sampling, recall measurement and documented quality control – will harden wherever GenAI substitutes for human responsiveness review; and (iii) transparency through the DRD about which tools are used, for which tasks and with which controls will become the norm.
The practical point is therefore that courts are likely to become increasingly clear that proportionate disclosure assumes responsible use of available technology (including AI), while maintaining the core guardrails already signalled elsewhere – verification, confidentiality, transparency and human accountability.
Most disclosure fights will not be won on ideology (“GenAI is risky” vs “GenAI is efficient”). They will be won on whether the proposing party can point to a workflow that is explainable and secure under PD 57AD.
In this context, parties should consider the following:
– raising proposed GenAI use early, and recording the approach in the DRD rather than leaving it to correspondence;
– agreeing (or at least proposing) a validation methodology – sampling, recall measurement and documented quality control – before review begins;
– keeping a named lawyer accountable for every disclosure decision, with GenAI output treated as an input rather than an answer; and
– maintaining an audit trail of tools, versions, prompts and human sign-offs (a minimal sketch of what such a record might look like follows this list).
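On the last of those points, what makes an audit trail persuasive is less the tooling than the separation of machine suggestion from human decision. A minimal sketch of one possible log entry follows; the field names and structure are illustrative assumptions, not drawn from PD 57AD, the ILTA guide or any published protocol.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class ReviewDecision:
    """One audit-trail entry: what the tool suggested, and who decided."""
    doc_id: str
    model_name: str        # the GenAI model/version used (hypothetical field)
    model_suggestion: str  # e.g. "responsive" / "not responsive"
    model_rationale: str   # the tool's stated reason, preserved verbatim
    reviewer: str          # the lawyer accountable for the decision
    final_decision: str    # the human decision actually applied
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# The lawyer, not the tool, makes the disclosure call - and the log shows it.
entry = ReviewDecision(
    doc_id="DOC-000123",
    model_name="example-llm-v1",  # hypothetical identifier
    model_suggestion="responsive",
    model_rationale="Discusses the pricing term in issue.",
    reviewer="A. Solicitor",
    final_decision="responsive",
)
print(entry)
```

The design point is that the tool’s suggestion and the lawyer’s decision are recorded separately, which is what later allows a party to demonstrate that the accountable human – not the model – made the disclosure call.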
The bottom line is simple: PD 57AD already expects serious parties to use technology to achieve proportionality. 2026 is where GenAI stops being a side conversation and becomes part of mainstream disclosure best practice. Parties who are prepared to explain how they have used GenAI and back that up with a clear audit trail will be best placed to take advantage of the clear efficiency gains without getting bogged down in unnecessary satellite disputes.
Authored by Reuben Vandercruyssen, Lydia Savill, Antonia Croke, and Thomas Evans.