A U.S. District Judge has blocked the Trump administration from designating artificial intelligence company Anthropic as a “supply chain risk” and prohibiting federal contractors from using its Claude AI model, delivering a significant victory for the company in a high-stakes legal battle over government contracts and free speech.
In a ruling issued Thursday, U.S. District Judge Rita Lin granted Anthropic’s request for a preliminary injunction, halting a presidential directive that ordered all federal agencies to cease using the company’s technology.
The dispute originated during contract negotiations between Anthropic and the U.S. Department of Defense. The Pentagon sought to accelerate its use of AI for intelligence processing and military efficiency. Anthropic insisted on including safety guardrails, specifically prohibiting the use of its technology for mass surveillance of American citizens. A Pentagon official reportedly responded that the military only issues lawful orders.
After the negotiations stalled, President Donald Trump publicly criticized Anthropic in February, calling its stance a “disastrous mistake” that put American lives at risk. The administration subsequently labelled the company a national security threat and a supply chain risk.
Anthropic filed suit, arguing that the designation violated the Administrative Procedure Act (APA) and constituted illegal First Amendment retaliation for the company’s public advocacy on the ethical use of AI.
Judge Lin sided with Anthropic. In her decision, she stated that the administration’s actions “appear designed to punish Anthropic” and that penalising the company for bringing public scrutiny to the government’s contracting positions amounted to “classic illegal First Amendment retaliation.”
The judge also found that the government failed to provide evidence supporting its “supply chain risk” designation and bypassed legally required procedures for making such a determination.
The ruling temporarily prevents the Trump administration from enforcing the ban while the underlying lawsuit proceeds. It highlights growing tensions between the U.S. government and leading AI companies over issues of safety guardrails, ethical constraints, and the limits of executive power in regulating emerging technology.
Legal experts say the case could set important precedent regarding the government’s ability to penalise private companies for exercising their First Amendment rights in the context of federal contracting.
Anthropic welcomed the decision but declined further comment pending ongoing litigation. The Department of Justice has not yet announced whether it will appeal.