A Georgia-based attorney is sounding the alarm over a troubling trend: lawyers being sanctioned for submitting court filings containing fabricated case law generated by artificial intelligence tools like ChatGPT.
Attorney Stephanie R. Lindsey recently took to Instagram to stress the dangers, saying AI can be a helpful tool in certain areas of legal practice, but it is no substitute for thorough research.
Lindsey highlighted reports of attorneys in both state and federal courts facing penalties for filing briefs with citations to cases that simply do not exist.
“In litigation, legal arguments must be supported by valid case precedent—cases that have been appealed, decided, and published with proper citation,” Lindsey explained. She noted that AI software, when tasked with drafting legal arguments, can produce convincing but entirely fictitious case names and citations.
This phenomenon, known in AI development as “hallucination,” occurs when a model confidently generates plausible but false information. For attorneys, it poses a direct threat to their credibility and licensure.
Lindsey urged lawyers to return to foundational research practices, such as verifying citations in physical law books or trusted legal databases. She warned that overreliance on unverified AI output could leave few attorneys able to practice effectively when the technology fails.
Her message serves as a reminder that while AI is advancing rapidly, the legal profession’s standards remain rooted in accuracy, ethics, and verified precedent.