Social media platform X and Elon Musk's AI chatbot Grok are once again at the centre of controversy. The alleged misuse of AI to create obscene, morphed images of women and public figures has prompted the Government of India to take a tough stance. The Ministry of Electronics and IT directed X to submit a detailed report within 72 hours, after which Musk struck a defensive note, arguing that those who generate illegal content should be punished.
The issue escalated after several posts showing AI-manipulated images went viral on X. Matters crossed a line when a user posted a fake image of Musk himself dressed in a bikini. What initially drew light-hearted responses from Musk soon turned serious.
Government steps in, seeks full report from X
Rajya Sabha MP Priyanka Chaturvedi wrote to IT Minister Ashwini Vaishnaw seeking action. Following that, the Ministry of Electronics and IT issued a notice to X, directing the platform to remove objectionable content and comply with Indian law — or face consequences.
According to reports, the threat of action pushed X to review the matter urgently and begin an internal policy review.
Musk defends Grok — “It’s just a tool; people decide how to use it”
Posting on X, Elon Musk said that anyone using Grok AI to create illegal content should face the same punishment as someone uploading unlawful material directly.
His argument:
“Some people claim Grok is creating inappropriate images. That’s like blaming a pen for writing a crime. Grok does what users ask it to do.”
However, critics argue that distancing responsibility is not enough. Platforms and AI companies, they say, must implement strong safeguards and proactive checks to prevent abuse.
Grok AI — not the first controversy
This is not the first time Grok AI has drawn criticism. On multiple occasions, its responses have contained abusive remarks, political bias, or inappropriate comments. In one instance, the bot’s commentary on an international issue sparked backlash, forcing the company to temporarily suspend certain features.
Technology analysts warn that as generative AI becomes more powerful, risks are also multiplying — from deepfakes and misinformation to non-consensual imagery.
The bigger debate — rules, responsibility and guardrails
Experts say the episode highlights three critical questions:
1. Where does platform responsibility begin — and end?
2. How should governments balance user freedom with safety?
3. What global rules are needed for AI-driven tools?
India’s tough response signals that online safety will not be compromised. For tech companies, the message is clear: simply saying “we are only a platform” will no longer be enough.
For now, attention remains on what X presents in its report — and what new safeguards are introduced for Grok AI going forward.