Washington: Artificial intelligence company Anthropic is preparing to challenge the U.S. Department of Defense (DoD) in court after the Pentagon officially designated the firm a “supply-chain risk,” a move that could restrict its involvement in military-related contracts. The decision has intensified tensions between the AI company and the U.S. defense establishment over the use of advanced AI technologies in military operations.
Anthropic’s chief executive Dario Amodei said the company believes the decision is legally flawed and plans to contest it in court. According to him, the designation is not justified under existing legal frameworks governing government procurement and supply-chain security.
The Pentagon recently placed Anthropic under the supply-chain risk category, a classification that can prevent companies from working with the U.S. military and its contractors if authorities believe their technology could pose security concerns. The step follows weeks of disagreement between the company and defense officials regarding the acceptable scope of AI use in military systems.
Disagreement Over Military Use of AI
Sources familiar with the matter indicate that the dispute largely revolves around how Anthropic’s AI models could be deployed by defense agencies. The company has maintained that its technology should not be used for large-scale surveillance of American citizens or for fully autonomous weapons systems.
However, officials within the Department of Defense reportedly argued that AI technologies supplied to the military should be available for “all lawful purposes,” including broader defense and intelligence operations. The two sides’ failure to reach a consensus ultimately led to the Pentagon’s decision to classify Anthropic as a supply-chain risk.
Company Says Impact on Customers Limited
Amodei said the designation is narrow in scope and will not affect the majority of Anthropic’s customers. According to the company, the restriction mainly applies to situations where its AI models are directly used in contracts tied to the Department of Defense.
He noted that businesses using Anthropic’s technology for commercial or other unrelated purposes should not face restrictions, even if those companies also hold U.S. government contracts in other areas.
Legal Challenge Likely in Federal Court
Anthropic argues that the supply-chain risk provision is intended to protect government infrastructure rather than punish technology providers. The company maintains that the law requires authorities to adopt the “least restrictive measures necessary” to address potential risks.
Legal experts believe the case could be brought before a federal court in Washington. However, challenging such decisions can be difficult because national security considerations often grant the government broad discretion in procurement and risk assessments.
OpenAI Steps Into the Gap
Meanwhile, reports indicate that OpenAI has reached an agreement with the U.S. Department of Defense to collaborate on AI-related initiatives, effectively stepping in as Anthropic’s role becomes uncertain. The development has sparked debate within the technology sector about the growing involvement of AI companies in defense programs.
Some reports suggest the arrangement has also triggered internal discussions and criticism among technology workers concerned about the military use of artificial intelligence.
Support for Ongoing Security Operations
Despite the dispute, Anthropic’s leadership said the company remains committed to supporting national security efforts where appropriate. According to Amodei, Anthropic’s AI models are currently being used in certain operational contexts, and the company is willing to continue providing assistance during transitional periods.
He emphasized that the firm’s priority is to ensure that security personnel and defense professionals maintain access to advanced technological tools while also adhering to strict ethical and safety standards for AI deployment.
Analysts believe the dispute could influence the future relationship between governments and AI companies worldwide. As artificial intelligence becomes increasingly integrated into defense and intelligence systems, debates over regulation, ethical limits, and national security oversight are expected to intensify across the global technology landscape.
