Judge Runs ‘Mini-Experiment’ With AI to Help Decide Case

A federal judge revealed that he turned to artificial intelligence programs, including ChatGPT, to help interpret a key legal term in a man’s appeal of an 11-year prison sentence. U.S. Circuit Judge Kevin Newsom was initially “spooked” by slight differences in the AI-generated responses but ultimately concluded that the technology can serve as a “valuable” tool.

Newsom detailed his experience in a concurring opinion issued by the Atlanta-based 11th U.S. Circuit Court of Appeals, which rejected the appeal of Joseph Deleon, who had been sentenced for an armed robbery at a Florida convenience store. Newsom called his opinion a “sequel of sorts” to one from May, in which he suggested that courts could use AI programs to help interpret words and phrases in contracts, regulations, and other legal texts.

Deleon argued that the sentencing judge wrongly applied an enhancement under the federal sentencing guidelines that applies when a victim is “physically restrained” to facilitate an armed robbery or an escape. Deleon had walked into the store, pointed a gun at a cashier, demanded money, and then left. Although he never touched the victim, the judge ruled that his actions “physically restrained” the cashier.

The panel upheld the ruling, with U.S. Circuit Judges Robin Rosenbaum and Nancy Abudu concurring because of binding 11th Circuit precedent, which holds that a defendant physically restrains a victim by creating circumstances that leave the victim no choice but to comply. Rosenbaum, however, argued that the full court should revisit the case and overrule that precedent, maintaining that under a “plain reading of the text,” Deleon would have needed to physically bind the cashier, not merely psychologically compel compliance at gunpoint.

Newsom, a self-described “textualist” and an appointee of former President Donald Trump, agreed with Rosenbaum’s reasoning. Because no dictionary defined “physically restrained” as a combined phrase, Newsom decided to consult AI tools, asking ChatGPT and two other AI programs to explain the phrase’s ordinary meaning. His “humble little mini-experiment” revealed that the programs produced similar answers, with ChatGPT stating that the phrase “refers to the act of limiting or preventing someone’s movement by using physical force or some kind of device.”

Initially concerned by slight variations in the wording and length of the AI responses, Newsom later realized these differences reflected normal speech patterns. He concluded that AI models can accurately predict the ordinary meaning of words, leading him to believe that such tools may serve a “valuable auxiliary role” in interpreting legal language.

The case is United States v. Deleon, 11th U.S. Circuit Court of Appeals, No. 23-10478.