Insurer Sues OpenAI After ChatGPT Allegedly Posed as Lawyer, Fueling Frivolous Litigation

In a novel legal challenge targeting artificial intelligence, Nippon Life Insurance Company has filed suit against OpenAI, alleging that its ChatGPT chatbot effectively practiced law without a license, misled an Illinois woman into abandoning her attorney, and generated fabricated case law that prolonged baseless litigation.

The dispute stems from a 2019 workplace injury claim by Graciela Dela Torre of Des Plaines, Illinois, who alleged carpal tunnel syndrome and tennis elbow. After a settlement in January 2024 that included a waiver of future claims against Nippon, Dela Torre sought to reopen the matter in late 2024. When her attorney advised against it, she turned to ChatGPT, asking whether she had been “gaslighted” by her lawyer.

According to Nippon’s complaint, the chatbot encouraged Dela Torre to fire her counsel and pursue reopening the case pro se. On January 22, 2025, she filed a motion to vacate the settlement; a judge denied it on February 13, 2025. Undeterred, Dela Torre launched a new lawsuit against Nippon that remains pending.


Nippon alleges ChatGPT drafted or assisted with at least 44 filings, including 21 motions, one subpoena, and eight notices and statements filed after the denial. Most notably, the complaint highlights a nonexistent precedent—“Carr v. Gateway, Inc. 9”—that appeared only in Dela Torre’s submissions and ChatGPT’s output, which Nippon describes as a hallucinated case invented by the AI.

The insurer claims it has already spent approximately $300,000 (£224,000) defending against the revived litigation and related filings. The suit seeks recovery of those costs plus $10 million (£7.5 million) in punitive damages, arguing OpenAI’s tool caused unnecessary legal expense and abused the judicial process.

Nippon notes that while ChatGPT is capable of scoring 297 on the Uniform Bar Examination, it “is not an attorney” and has not been admitted to practice in Illinois or any other U.S. jurisdiction. The complaint frames the AI’s conduct as unauthorized practice of law, misleading a litigant, and generating false legal authority.

OpenAI has rejected the allegations, with a spokesperson calling the complaint “without any merit whatsoever.”

The case highlights growing legal friction around generative AI in the justice system, including concerns over hallucinations, unauthorized practice of law, and liability for AI-generated content used in court filings. Courts in multiple jurisdictions have already sanctioned lawyers for submitting briefs containing fictitious citations produced by ChatGPT and similar tools.

This lawsuit takes the issue further by targeting the AI developer directly, potentially setting precedent on whether companies can be held liable when their models are used to simulate legal advice or fabricate authority.