Sam Altman has reversed course following mounting criticism over a controversial agreement between OpenAI and the US Department of War (DoW). Acknowledging that the announcement of the sensitive partnership was made “prematurely,” Altman said the contract language will now be revised to clearly prohibit the use of the company’s AI systems for domestic surveillance of American citizens.
The controversy erupted after OpenAI disclosed that its models would be deployed within the DoW’s classified networks. Critics quickly raised concerns that the technology could be used for surveillance and intelligence operations in ways that might infringe on civil liberties.
What Will Change in the Agreement
In an update posted on the social media platform X, Altman outlined key clarifications to be incorporated into the revised contract:
- AI systems will not be intentionally used for domestic surveillance of U.S. citizens or residents.
- Tracking or monitoring through commercially purchased personal data will also be prohibited.
- All uses must comply with U.S. law, including Fourth Amendment protections and national security statutes.
Additionally, without a separate contractual amendment, services will not be extended to intelligence agencies under the DoW umbrella, such as the National Security Agency.
Divisions Within the AI Industry
The episode has also exposed ideological fault lines within the artificial intelligence sector. Rival firm Anthropic reportedly declined to accept DoW terms permitting "any lawful use" of its technology, a clause critics argue is overly broad. Following that decision, Anthropic was labeled a potential "supply chain risk" in certain federal contracting contexts.
Altman stated that he has urged the DoW not to blacklist Anthropic and has advocated for uniform contractual standards across AI companies engaging with federal agencies.
Consumer Backlash
In the wake of the announcement, social media campaigns such as “Cancel ChatGPT” began trending. App review platforms saw a surge in negative ratings, while downloads of competing AI assistants reportedly increased. Although it remains unclear whether the backlash will have lasting effects, the episode underscores that public trust is becoming a central issue for AI companies operating at scale.
The Broader Debate
At the heart of the controversy lies a fundamental question: who should define the limits of AI deployment in military and national security contexts—governments or private technology firms? Altman has argued that democratically elected governments must ultimately set the rules, though he added that the company would refuse to comply with unconstitutional directives.
Critics, however, caution that phrases such as “in compliance with the law” and “not intentionally” may leave room for interpretation in future scenarios.
While the revised agreement may temper immediate criticism, the broader debate over AI’s role in military operations, civil liberties, and public accountability is far from settled. How policymakers and technology leaders navigate this terrain could significantly shape the trajectory of the AI industry in the years ahead.
About the author – Ayesha Aayat is a law student and contributor covering cybercrime, online frauds, and digital safety concerns. Her writing aims to raise awareness about evolving cyber threats and legal responses.
