A federal appeals court in New Orleans decided on Monday not to adopt a first-of-its-kind rule at the appellate level regulating the use of generative artificial intelligence by lawyers. The 5th U.S. Circuit Court of Appeals announced this decision after considering the use of AI in legal practice and receiving mostly negative public comments from lawyers.
The proposed rule, first introduced in November, aimed to regulate lawyers’ use of generative AI tools like OpenAI’s ChatGPT. It would have applied both to attorneys and to litigants appearing before the court without counsel, requiring them to certify that any AI-generated filings had been reviewed for accuracy in citations and legal analysis. Lawyers who misrepresented their compliance with the rule could have faced sanctions and the possibility of having their filings stricken.
Had the rule been adopted, the 5th Circuit would have become the first of the 13 federal appeals courts to implement a rule specifically governing AI use, although a few district courts and judges across the country have already adopted similar policies. Several other federal appeals courts have also been exploring ways to regulate lawyers’ use of AI, especially after incidents of AI “hallucination” in which attorneys submitted briefs citing non-existent cases.
Members of the bar largely opposed the 5th Circuit’s proposal in public comments, arguing that existing rules were sufficient to address any problems posed by the technology, including ensuring the accuracy of court filings. In its notice on Monday, the 5th Circuit reminded parties and counsel that, as the current rules already require, they remain responsible for the truthfulness and accuracy of their filings.
“‘I used AI’ will not be an excuse for an otherwise sanctionable offense,” the court stated.