AI in the Courtroom: “A Cautious and Reasonable Approach”

California Western School of Law
Constant technological advancements have led to increasingly widespread use of artificial intelligence (AI). But what happens when AI becomes more prevalent in the legal profession? The fear of lawyers being replaced by AI is a recurring topic among attorneys, but is AI truly a threat to lawyers?
U.S. District Judge Xavier Rodriguez and U.S. Magistrate Judge Allison Goddard discussed in depth how they are using AI tools within judicial chambers and the courtroom during the panel “How Judges Are Embracing AI – And Why You Should Take Notice.” If judges themselves are using AI tools, surely AI can’t be a huge threat to the business model of law. Two words stood out during their conversation: “cautious” and “reasonable.” Both Judge Rodriguez and Judge Goddard agreed that AI should be viewed as a tool, one that must be used in a cautious and reasonable manner.
What Should Lawyers Consider When Using AI?
Both judges emphasized that the use of AI requires careful attention to several key concerns: confidentiality, information security, bias, and complacency. The effectiveness and safety of any AI system depend on how it is designed, what information it is fed, and how it “learns.” As legal professionals, we are no strangers to reading terms and conditions, and we should apply that same diligence when using AI tools.
Guarding Confidentiality in the Age of AI
Judge Rodriguez cautioned against using AI systems that rely on user inputs and prompts to “train” or “improve” their models. Information entered into these systems could expose confidential client data if lawyers fail to read the terms of service carefully or neglect to safeguard their prompts. He also explained the importance of using paid versions of these AI tools, as the free models often use user inputs for further training. By paying for more sophisticated versions, we are paying for additional safeguards. Protecting client information remains of the utmost importance.
Bias, Accuracy, and Complacency
When it comes to using AI for legal research, additional risks emerge, including biased outputs, inaccurate summaries, and professional complacency. At the end of the day, AI is an input-and-output system. While it can synthesize information rapidly, it may misinterpret or oversimplify complex legal reasoning, leading to inaccurate applications of the law.
After all, as lawyers, aren’t we paid to think critically and reason in ways that AI cannot? Bias can also surface in the language AI produces, reflecting limitations in the data it was trained on. This can manifest in subtle ways, such as gendered pronouns, selective responses that “fit the bill” of the user’s question, or incomplete analyses that omit critical context.
If AI only gives you the answers you want rather than the full picture, is that truly accurate research? These are the questions that underscore the need for clearly stated standards and guidelines for AI use in the legal profession.
Setting Ground Rules in Chambers
Both Judge Rodriguez and Judge Goddard have established clear guidelines for how AI tools are used in chambers. With the rising adoption of AI across the legal system, there must be equally clear expectations for paralegals, lawyers, and judges to follow. Both judges emphasized the importance of gaining personal experience with any AI tools they authorize. They make sure to understand how these tools operate, what kind of work they produce, and where their limitations lie. Judge Goddard stresses the importance of knowing not just how to use AI, but when and why it should be used. She acknowledges that AI tends to summarize the facts of a case correctly. But the question remains: where does it go wrong?
A Caution Against Hallucinations
AI can often miscite the law or “hallucinate,” meaning it cites made-up cases. Judge Goddard noted that, as hallucinations have become more prevalent (sometimes leading to sanctions), she requires her clerks to double-check all citations before finalizing any orders. The importance of verifying one’s work should come as no surprise, but in the age of AI, it has never been more critical. Every case, citation, and analysis must be confirmed.
The Human Advantage
So, are lawyers replaceable? The simple answer is no. As Judge Goddard points out, the more important goal is to “make yourself irreplaceable.” AI may change the way lawyers work, but it cannot replace the judgment, ethics, and reasoning that define the profession. The future of law will not belong to artificial intelligence, but to those who learn to use it as a tool wisely, cautiously, and reasonably.
Arianna Lara Bonilla is a 2L at California Western School of Law focused on building a career in public interest law.

