
Mediator’s Minute: News Flash! AI Not Actually As Powerful As It Thinks It Is

  • Writer: Shireen Wetmore
  • Nov 17
  • 4 min read
Screenshot of an AI-generated search result stating, incorrectly and as settled fact, that the No Robo Bosses Act was signed into law on October 13, 2025. Screenshot taken on October 9, 2025.

If you have been paying attention to the Mediator’s Minute updates or your favorite legal bloggers, you know that Governor Newsom vetoed the “No Robo Bosses” bill on October 13, 2025.  However, as the above photo shows, Google’s AI search results on October 9, 2025, several days before the veto, incorrectly announced that the bill, SB 7, had already been signed into law by the Governor on that then-future date.  So, at least for the moment, it appears AI cannot accurately predict the future.  As attorneys, and the search engines they rely upon, increasingly turn to generative AI for assistance, this incident is a great reminder that AI-generated search results and citations must be checked thoroughly.


Courts and legislatures are weighing in as well, creating (and enforcing) penalties for those who fail to take heed.


California Court of Appeal to Attorneys: Check Yourself

In September 2025, the California Court of Appeal published an opinion expressly to warn attorneys about the improper use of AI in preparing briefs.  See Noland v. Land of the Free, L.P., No. B331918, Slip Op. at 2 (Cal. Ct. App. Sept. 12, 2025).  In Noland, the court stated that the appeal itself was otherwise “unremarkable” but that it was publishing the decision to sanction the plaintiff’s attorney “as a warning,” emphasizing its finding that counsel had “violated a basic duty to his client and the court” by failing to personally review and verify each of the matters cited in his briefing to the court.  Id. at 2-3.


In analyzing the issue, the court cited several other courts’ decisions nationwide discussing the prevalence of AI-hallucinated cases and quotations.  See id. at 22-25.  One in particular explained that these AI tools are more likely to hallucinate when counsel searches for support on a point for which little authority exists.  See id. at 23 (analyzing a Forbes article from May 2025 and citing In re Richburg (Bankr. D.S.C., Aug. 27, 2025, No. AP 25-80037-EG) 2025 WL 2470473, at *5, fn. 11).  Of note, the court imposed a $10,000 sanction payable to the clerk of the court but declined to award attorneys’ fees or costs to the defendant, noting that it was the court, not defense counsel, that alerted the parties to the hallucinated case citations.  Id. at 30.


The court also took pains to flag, with numerous citations, the breadth of literature, rulemaking, and precedent addressing the dangers of using this new technology without proper review, all of which has put attorneys on notice.  In short, courts are unlikely to give attorneys the benefit of the doubt going forward on issues involving fabricated cases or case citations.


Courts are increasingly fed up with the additional time and resources wasted by false citations.  In an October 22, 2025, order in Mattox v. Product Innovations Research LLC, Magistrate Judge Robertson of the Eastern District of Oklahoma opened with President Ronald Reagan’s famous admonition, “Trust, but verify,” and closed with this warning:


Generative tools may assist, but they can never replace the moral nerve that transforms thought into advocacy. Before this Court, artificial intelligence is optional. Actual intelligence is mandatory.



The Rules Are Changing

It is not only counsel who are tasked with responsibly using AI in their practice.  As Khari Johnson reported in “California issues historic fine over lawyer’s ChatGPT fabrications,” CalMatters (Sept. 22, 2025), https://calmatters.org/economy/technology/2025/09/chatgpt-lawyer-fine-ai-regulation/, the California Rules of Court Standards on Judicial Administration specifically address the use of generative AI by judicial officers.  See Cal. R. Ct., Standard 10.80 (2025).  Across the country, universities, task forces, courts, and legislatures are scrambling to address how to use, and regulate the use of, AI in the legal industry.


Law360 has prepared a handy tracker that identifies federal court orders regarding the use of AI, along with a convenient graphic that allows searching by circuit, available here: https://www.law360.com/pulse/ai-tracker.  The author thanks attorney Jenn French, who has been collecting sanctions decisions on her LinkedIn page, for sharing the Mattox case referenced above.  Another collection of AI-hallucination-related decisions is housed at the University of Illinois Chicago Law Library under the header “AI’s Siren Song,” available here: https://libraryguides.law.uic.edu/c.php?g=1431863&p=10840747.  Many more such tools will likely be developed as AI is integrated ever more deeply into not only legal industry tools but also the everyday lives of legal practitioners.


This article (but maybe not future ones) was not generated by AI.


Shireen Wetmore is a mediator specializing in class actions and employment matters and can be reached for questions, comment, or booking at www.shireenwetmoremediation.com.


This Mediator’s Minute is for informational purposes only and does not constitute legal advice.




