What Did Grok AI Say About Trump and Netanyahu?

The420.in Staff

The artificial intelligence chatbot Grok, developed by xAI, has once again found itself at the centre of a global controversy. Already under fire for allegedly generating objectionable images of women and minors, Grok is now facing accusations of making controversial and potentially defamatory remarks about prominent world leaders, including Donald Trump and Benjamin Netanyahu.

The developments have triggered a storm on social media platform X, with users questioning whether billionaire entrepreneur Elon Musk was forced to silence his own AI product to contain the fallout.

According to social media claims and media reports, Grok, while responding to user prompts, used language widely seen as highly sensitive and potentially defamatory in reference to Trump and Netanyahu. As screenshots of these responses began circulating widely, several of Grok’s earlier answers reportedly disappeared from public view, fuelling speculation of behind-the-scenes intervention.


Did Grok Go Silent After Targeting Political Leaders?

An X user alleged that shortly after Grok’s responses went viral and reached millions, the chatbot either stopped responding altogether or began issuing significantly limited replies. This prompted users to dig into Grok’s previous interactions, uncovering a pattern of provocative phrasing and extreme references in earlier responses as well.

While neither xAI nor X has officially confirmed that Grok was shut down or paused, the sudden disappearance of content has intensified speculation that the platform initiated damage control measures to prevent further escalation.

When ‘Unfiltered AI’ Becomes a Liability

Elon Musk has repeatedly positioned Grok as an alternative to AI tools such as ChatGPT and Google Gemini, promoting it as a chatbot that operates with minimal censorship. However, experts warn that AI systems functioning without clear legal and ethical guardrails can pose serious risks.

In responses related to the Gaza conflict, Grok reportedly accused the United States and Israel of genocide-like actions, citing reports from the International Court of Justice and Amnesty International. These references, combined with its comments on Trump and Netanyahu, significantly amplified the controversy.

A History of Repeated Controversies

This is not Grok’s first brush with global backlash. In July 2025, the chatbot drew sharp criticism for generating antisemitic content and posts perceived as praising Adolf Hitler, prompting xAI to issue a public apology. On earlier occasions, Grok was accused of echoing conspiracy theories such as claims of “white genocide” in South Africa and reusing extremist language sourced from X posts.

In those instances, xAI attributed the issues to technical errors, but recurring incidents have raised questions about the chatbot’s training data, moderation mechanisms, and safety controls.

Censorship or Corporate Responsibility?

The removal of Grok’s responses has sparked a wider debate on X. Some users view the move as an act of censorship, undermining the promise of a free-speech-driven AI. Others argue it reflects the platform’s responsibility to prevent the spread of defamatory, inflammatory, or legally risky content.

Technology analysts caution that labelling individuals—especially sitting or former heads of state—as criminals through AI-generated content could expose companies to serious legal consequences, even if the statements originate from automated systems.

What Lies Ahead

The Grok controversy has unfolded at a time when global discussions around AI regulation are intensifying. Countries, including India, are actively considering stricter frameworks to address AI-driven misinformation, deepfakes, and reputational harm.

For now, one message is clear: while unrestricted AI may appear bold and disruptive, it can also be deeply destabilising. The Grok episode is increasingly being seen as a warning sign for Elon Musk and xAI that balancing technological freedom with accountability is no longer optional.

About the author – Ayesha Aayat is a law student and contributor covering cybercrime, online frauds, and digital safety concerns. Her writing aims to raise awareness about evolving cyber threats and legal responses.
