It’s Not Just the Fake Cases: AI Doesn’t Think Like a Lawyer

By Katie Parker
Associate Clinical Professor, USD School of Law

It was over two decades ago, but I still vividly remember how starting law school felt like drinking from a fire hose. The reading, the Socratic method, the exams, the new (to me) way of writing: everything was overwhelming. I felt like the proverbial fish out of water. At first, reading cases was like reading a foreign language. And then the cold calls . . . What are the facts? What’s the holding? No, not the dicta, the holding. And the reasoning? What is dicta for, anyway? What did the dissent say? Why is this case different from the one we just talked about? What’s the legal test? What are the elements? Are they elements or factors? What if the facts were different? And so forth.

Legal analysis of course became more comfortable for me, but it never became easy. Reading multiple statutes, regulations, and judicial opinions, synthesizing them, and then applying the law to a new set of facts is a fresh challenge every time. It’s fun and rewarding, and that’s one of many reasons I’ve kept doing it.

Given how nuanced substantive legal analysis can be, it’s no wonder that generative AI struggles to get it right. Most of the buzz about AI for lawyers focuses on hallucinations. This fall, California joined the long list of jurisdictions whose courts have sanctioned attorneys for filings containing fake quotations and citations. Two recently published appellate opinions, one civil and one criminal, underscore a fundamental lesson for litigators. In the first of the two cases, the court opened by delineating a bright-line rule for court filings in California:

Simply stated, no brief, pleading, motion, or any other paper filed in any court should contain any citations–whether provided by generative AI or any other source–that the attorney responsible for submitting the pleading has not personally read and verified.

Noland v. Land of the Free, L.P., 114 Cal. App. 5th 426, 431 (Sept. 12, 2025) (emphasis in original). See also People v. Alvarez, __ Cal. Rptr. 3d __, 2025 WL 2814789 (Oct. 2, 2025) (sanctioning criminal defense attorney who cited one nonexistent case and cited other cases for propositions unrelated to their substance). Noland and Alvarez confirm that California is in line with the many jurisdictions that have issued sanctions for irresponsible use of AI. But these cases add little to what we already know about AI or about the governing ethical rules. (Although it’s notable that in Noland, the court did call out the non-sanctioned party for failing to alert the court to the other party’s errors!)

At this point, generative AI’s tendency to fabricate legal principles, quotations, and even entire cases should be old news to any practicing litigator. French legal scholar Damien Charlotin maintains a database of worldwide AI hallucinations in court filings, and as of this writing, it has catalogued 455 such instances: https://www.damiencharlotin.com/hallucinations/. The first half of October 2025 alone saw 22 AI-based hallucinations in American courts, and of course this count includes only those instances that someone detected and reported to Charlotin.

So, at a minimum, the ethical duty of competence requires attorneys to understand that AI systems fabricate legal authority, and creates a corresponding duty to independently verify every source before citing it. Comment 1 to Rule 1.1 of the California Rules of Professional Conduct provides that attorneys have a duty to “keep abreast of the changes in the law and its practice, including the benefits and risks associated with relevant technology.”

But AI’s risks aren’t limited to fabricated authority, and those risks can’t be avoided by using only AI systems housed in legal databases like Lexis and Westlaw. A detailed study by researchers at Stanford earlier this year found that Westlaw’s and Lexis’s AI systems do reduce hallucinations when compared to general-purpose LLMs like ChatGPT, but even in these law-focused systems, “hallucinations remain substantial, wide-ranging, and potentially insidious.” Varun Magesh et al., Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools, 22 J. Empirical Legal Stud. 216, 216–242 (2025). The paper is open access.

The Stanford study reveals that AI-generated errors persist for reasons beyond AI’s tendency to invent information. Indeed, errors are common because, as we learned as law students, legal analysis is nuanced and complex. The authors explain: a “helpful generative legal research tool would have to do far more than simple document summarization: it would need to synthesize facts, holdings, and rules from different pieces of text while keeping the appropriate legal context in mind.” Id. at 220. After testing Westlaw’s and Lexis’s AI systems, and comparing them to ChatGPT across a variety of legal query types, the authors conclude that “these legal systems continue to struggle with elementary legal comprehension.” Id. at 225. The authors noted several recurring categories of analytical error in legal AI systems: inability to identify a case’s holding as distinct from other parts of the opinion, failure to distinguish between legal actors (e.g., confusing the court’s statements with the litigants’ arguments), and inability to grasp legal hierarchies (e.g., not knowing which courts can overrule which others). Id. at 226.

Like a first-year law student (or worse?), legal AI systems struggle to understand cases and laws at the level necessary to reliably analyze those authorities. So the duty of competence requires more than knowing that AI fabricates authorities. It dictates that attorneys understand that even when AI provides a real case or an accurate quotation, that source may not be relevant to the issues presented, or may have been taken out of context. There’s no getting around doing the hard work. It’s always our duty to roll up our sleeves, dig through the morass of authorities, and develop a sound, compelling analysis. It’s what the law requires and what we were trained to do.
