A federal judge has temporarily blocked the Pentagon from labeling Anthropic a supply chain risk, ruling that the government’s sweeping actions appeared punitive and arbitrary after the AI company resisted use of its technology for autonomous weapons and domestic surveillance.

Judge Blocks Pentagon Move Labeling Anthropic a Supply Chain Risk

The420 Correspondent
4 Min Read

San Francisco: A U.S. federal court has ruled in favor of artificial intelligence company Anthropic, temporarily preventing the Pentagon from labeling the company as a supply chain risk.

U.S. District Judge Rita Lin on Thursday also blocked enforcement of a directive President Donald Trump issued via social media, which ordered all federal agencies to stop using Anthropic and its chatbot Claude.

In her order, Judge Lin wrote that the “broad punitive measures” taken against the AI company by the Trump administration and Defense Secretary Pete Hegseth appeared arbitrary and capricious and could severely disrupt Anthropic’s business operations. She noted specifically that the rare military authority used by Hegseth is normally directed only at foreign adversaries.

Judge Lin said, “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. simply for expressing disagreement with government policy.”

The ruling came after a 90-minute hearing, during which Judge Lin questioned why the Trump administration took extraordinary steps to punish Anthropic after negotiations over a defense contract broke down. The dispute arose from the company’s efforts to prevent its AI technology from being deployed in fully autonomous weapons or used to surveil American citizens.

Anthropic had requested the court issue an emergency order to remove the stigma applied to the company, which it described as part of an “unlawful campaign of retaliation.”

Judge Lin clarified that her ruling was not about the broader public policy debate but rather the government’s actions in response to it. She stated, “If the concern is the integrity of the operational chain of command, the Department of War could simply stop using Claude. Instead, these measures appear designed to punish Anthropic.”

Anthropic has also filed a separate, narrower case, currently pending before the federal appeals court in Washington, D.C., challenging a different Pentagon rule used to declare the company a supply chain risk.

Judge Lin noted that her order is stayed for a week and does not require the Pentagon to use Anthropic products or prevent it from transitioning to other AI providers.

In a statement, Anthropic said it was “grateful to the court for moving swiftly” and pleased that the court agreed that Anthropic is likely to succeed on the merits. The company added that the case was necessary to protect its business and customers, but it remains focused on working productively with the government to ensure safe and reliable AI for all Americans.

Several third parties submitted legal briefs supporting Anthropic’s case, including Microsoft, industry trade groups, rank-and-file tech workers, retired U.S. military leaders, and a group of Catholic theologians.

The Pentagon did not immediately respond to requests for comment on the ruling.
