A federal judicial panel on Friday agreed to begin developing rules to regulate the use of artificial intelligence (AI)-generated evidence and address concerns about “deep fakes” in courtrooms.
During a meeting in New York, the U.S. Judicial Conference’s Advisory Committee on Evidence Rules decided to move forward with two key initiatives. One will focus on evidence created by machine learning, while the other will consider how courts should handle claims that audio or video evidence has been manipulated using AI technology.
The panel’s members, including U.S. District Judge Jesse Furman of Manhattan, who chairs the committee, acknowledged the need for caution in crafting rules for rapidly evolving technology. However, Furman warned that delaying action could leave the judiciary unprepared if new challenges arise. “I think there’s an argument for moving forward to avoid getting caught completely flat-footed,” he said, recognizing that developing new rules can take years.
This meeting is part of a broader effort across federal and state courts to address the rise of generative AI, exemplified by programs like OpenAI’s ChatGPT, which can generate text, images, and even videos after analyzing large datasets. U.S. Supreme Court Chief Justice John Roberts addressed the topic in his annual report, highlighting AI’s potential benefits for litigants and judges but stressing that the judiciary must carefully consider its role in litigation.
At the Friday meeting, held at New York University School of Law, committee members unanimously agreed to proceed with creating a rule to govern machine learning-generated evidence. The rule would require evidence produced by AI systems to meet the same reliability standards applied to expert witness testimony under Rule 702 of the Federal Rules of Evidence. The committee aims to have a draft ready by May for a vote on whether to release it for public comment.
Panel members also discussed the growing concern over deep fakes—AI-manipulated videos or audio recordings designed to deceive. Some expressed skepticism that such manipulation will become widespread in court proceedings. “I still haven’t seen that this feared tsunami is really coming,” said U.S. Circuit Judge Richard Sullivan of the 2nd U.S. Circuit Court of Appeals. While acknowledging the fast pace of technological change, Sullivan and others agreed it would be prudent to develop a potential rule now, even if they do not immediately vote to implement it. “It seems like a good idea to have something in the bullpen, as it were, rather than nothing,” said Daniel Capra, a professor at Fordham University School of Law who serves as the committee’s reporter and will help draft the rule.
The committee’s decision to move forward with these initiatives reflects a growing recognition that courts must adapt to new technological challenges to ensure fair and reliable legal proceedings.