Generative AI in the Courtroom: A Wake-Up Call for Litigators

Generative AI is quickly transforming modern legal practice, but courts have been slow to regulate its use in litigation. That’s rapidly changing.

A recent Surrogate's Court decision suggests that courts will demand more transparency and reliability from AI-generated evidence. 

Litigators should take note.

You may remember the 2023 case Mata v. Avianca, which made headlines when the plaintiff’s lawyer cited fake cases generated by ChatGPT and then doubled down when opposing counsel began asking questions. The lawyer was hit with a $5,000 sanction.

What We Learned from Mata v. Avianca

It was an obvious cautionary tale, but, in truth, it’s an easily fixed problem: double-check citations, ensure all cited cases actually stand for the propositions you say they do, and re-read briefs for accuracy before filing.

Generative AI is a phenomenal tool for legal research, but you still have to use your skill and training as a lawyer to transform the computer-generated output into actionable legal advice. 

Aside from the obvious problem of hallucinations, generative AI presents a more subtle challenge in the context of expert testimony: How does AI-assisted analysis impact the traditional tests used by courts to determine admissibility?

AI-Assisted Expert Testimony

In Matter of Weber, 2024 N.Y. Slip Op. 24258, 2024 WL 4471664 (N.Y. Sur. Ct., Oct. 10, 2024), the court held a hearing on a breach of fiduciary duty claim, where both sides presented expert testimony on damages. One expert testified that, rather than relying solely on traditional methods, he used Microsoft Copilot to cross-check his calculations. 

The court found the expert’s calculations unreliable for other reasons but then took the opportunity to specifically discuss the reliance on artificial intelligence, which it characterized as “an emerging issue that trial courts are beginning to grapple with and for which it does not appear that a bright-line rule exists.” 

The issue wasn’t outright fabrication but whether AI-generated analysis can be trusted when the methodology is unclear. The expert couldn’t explain how Copilot worked, what sources it used, or how to verify its accuracy.

The court conducted its own experiment: it ran the same query in Copilot on three different computers and got three different answers. The court concluded that AI-assisted expert calculations must be independently verified through the traditional methods used to admit expert testimony.

What This Means for Litigators

Lawyers and experts can use AI, but AI-generated work must hold up under the same scrutiny as anything else in court. Experts can’t rely on AI the way they rely on a calculator; they must be able to explain and justify their conclusions using accepted methodologies.

How to Use AI without Getting Sanctioned

The Weber decision sets a precedent and signals what’s coming. Courts will continue applying traditional evidentiary standards in an AI-driven legal world.

If you or your experts use AI-generated data, don’t assume it will pass muster. Reverse engineer the final product. Make sure you can explain and defend it using traditional methods. Because if you can’t, a judge will ask why.

