New Delhi | Amid the rapid rise of AI technology, tensions are escalating between the US military and AI company Anthropic. The core of the dispute lies in the company’s stance of limiting the use of its AI models for autonomous weapons systems and large-scale surveillance technologies. The US Department of Defense wants unrestricted use of Anthropic’s AI systems, especially Claude AI, in national security operations, but the company is resisting.
According to reports, US Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei for a meeting and set an implied deadline. The military reportedly warned that failure to provide full, unconditional AI access could lead to the company being designated a “supply chain risk,” potentially harming its global reputation. Such a label is usually reserved for foreign firms; in this case, it is being considered for a domestic AI company.
Experts note that the US government could also invoke the Defense Production Act, which allows authorities to compel any company to provide production or technical support for national security needs.
At the heart of the dispute are AI-powered weapons: so-called “machine killing systems,” or autonomous warfare technologies. Anthropic’s contracts explicitly prohibit using its AI models to build systems that can identify and attack targets without human oversight. The company also warns that large-scale AI surveillance networks could threaten US citizens’ privacy.
The US military, however, has already used Claude AI to a limited extent. Under a $200 million, two-year contract, the Pentagon deployed custom Anthropic AI models on its classified networks for national security purposes. Reports also suggest that AI support was used in operations involving Venezuelan President Nicolás Maduro.
The current friction centers on expanding AI’s role in military operations. The US military believes AI could provide strategic advantages in future conflicts and therefore demands broader access. Anthropic, on the other hand, argues that autonomous weapons could accelerate a global arms race and make warfare more inhumane. Machines, the company says, cannot make ethical decisions—they follow algorithms and logic, raising the risk of civilian harm.
AI experts stress that this is not merely a dispute between a company and the military; it reflects broader ethical questions about the future of technology. Analysts suggest that military use of AI could also influence the international balance of power, especially given the ongoing AI race between China and the US.
For now, Anthropic says it aims to continue a “good-faith dialogue” with the government while maintaining its safety and ethical-use policies. The company believes AI development should proceed in a way that safeguards human security and ensures a responsible technological transition.
Experts expect global policies on AI weapons control, cybersecurity, and surveillance technologies to evolve in the near future. Meanwhile, the standoff between the US military and Anthropic is fueling a new debate over technology and national security.
