Following the recent agreement between artificial intelligence company OpenAI and the U.S. Department of Defense, CEO Sam Altman is facing growing public pressure. The controversy has sparked debate across social media and the technology community, with many ChatGPT users announcing that they will cancel their subscriptions. Critics argue that by making its AI systems available for military use, the company is helping build a “war machine.”
ChatGPT Users Launch Subscription Cancellation Campaign
Altman acknowledged that public perception of the agreement has not been favorable and hosted an online question-and-answer session to address concerns. He emphasized that OpenAI’s cooperation with the defense department would remain within constitutional boundaries and that the company would not comply with any order that was unlawful or violated civil rights. Despite this, user trust appears to be weakening.
The controversy escalated further as a campaign to abandon ChatGPT spread across multiple online platforms. Some members of the tech community and even a few celebrity users reportedly joined the movement. A trending social media thread asking users to post proof of subscription cancellation quickly went viral. Critics claim that OpenAI has compromised its long-stated principle of developing AI for the benefit of humanity.
Claude Surges as ChatGPT Slips in App Rankings
Meanwhile, app store rankings shifted. The AI chatbot Claude suddenly gained popularity and reached the top position in the download charts, while ChatGPT slipped to second place. Analysts see this shift as a warning signal for OpenAI, as consumers increasingly prioritize ethics and transparency in technology products.
Ethical Concerns Over AI in Military Operations
Another dimension of the dispute emerged after reports surfaced concerning military operations in the Middle East. Allegations were made that AI technology had been used in target selection during military strikes, raising further ethical concerns. Although the claim could not be independently verified, critics cited it as an example of the potential risks of AI involvement in warfare.
During the online discussion, Altman stated that if the U.S. government ever attempted to enforce large-scale domestic surveillance or unconstitutional orders, OpenAI would resist such demands even at the risk of legal consequences. He also said that people working in military institutions are more committed to the Constitution than the average citizen, a remark that triggered strong reactions on social media.
Future Challenges for OpenAI and AI Ethics
Experts in the technology industry believe that collaboration between AI companies and defense institutions may increase in the future. However, they stress that transparency, ethical safeguards, and protection of civil rights must be ensured. A section of users remains concerned that the militarization of advanced AI technology could disturb the global security balance.
At present, the biggest challenge for OpenAI is maintaining user trust. Company leadership admitted that the agreement was announced hastily and that its communication strategy should have been handled more carefully. The controversy is expected to have a broader impact on the technology industry and on future AI policymaking.
About the author – Ayesha Aayat is a law student and contributor covering cybercrime, online frauds, and digital safety concerns. Her writing aims to raise awareness about evolving cyber threats and legal responses.
