In late December, Grok’s image-generation tools were put to the test, not by engineers or artists, but by ordinary users on X. By simply replying to publicly posted photographs, commenters began prompting the chatbot to “undress” women, replace their clothing with bikinis, or alter their bodies to produce explicit imagery.
The results were not confined to private chats. The altered images appeared openly in Grok’s public-facing media feed, making them visible to anyone on the platform. Celebrities were among the earliest targets, including Bollywood actors whose photos were morphed at scale. Soon, however, the abuse extended to private individuals—female social media users whose images were repurposed without consent.
Unlike most AI tools, where generated content remains largely private to the user, Grok’s integration with X meant that the outputs circulated widely, blurring the line between experimentation and mass digital harassment.
A Pattern of Safety Failures at xAI
This episode has reignited scrutiny of Grok’s troubled safety record. Earlier this year, reports emerged that xAI workers involved in training and moderation were routinely exposed to disturbing and explicit material, including AI-generated child sexual abuse content, during annotation tasks.
More recently, Grok’s “companion mode” drew criticism for being overly sexualised in design and behaviour, prompting questions about age safeguards and psychological impact. Critics argue that these incidents point to a broader cultural problem at xAI: a permissive approach to content generation coupled with weak enforcement of boundaries.
In response to the latest controversy, Grok acknowledged that users were “testing” its image-editing capabilities with requests involving bikinis and clothing removal, while insisting that “boundaries matter.” Many users, however, remain unconvinced, pointing to the sheer volume of explicit images still visible on the platform.
Public Visibility, Private Harm
What has alarmed digital rights advocates most is not just the creation of sexual deepfakes, but their public availability. Because Grok-generated images are embedded directly into X’s ecosystem, they are instantly shareable, searchable, and amplifiable.
Several women whose images were altered described the experience as unsettling and violating, noting that reporting mechanisms offered little immediate relief. The lack of friction—no meaningful refusal, no delay, no clear deterrent—has made Grok an unusually powerful tool for harassment.
By contrast, rival AI systems such as ChatGPT and Google’s Gemini impose stricter refusals and keep outputs private by default. Grok’s architecture, critics say, prioritises virality over victim protection.
India Weighs a Ban as Global Pressure Builds
The controversy has now crossed borders. In India, lawmakers, digital safety advocates, and women’s rights groups have begun calling for regulatory action, including a possible ban on Grok under IT and intermediary liability rules.
India has previously taken a hard line on platforms accused of enabling non-consensual sexual content, and officials are reportedly examining whether Grok violates existing protections against deepfakes and online sexual abuse. The public nature of the images—and their potential to be weaponised in political and social contexts—has heightened concern.
xAI has yet to issue a detailed response addressing enforcement failures or outlining concrete safeguards. Instead, press inquiries have been met with an automated reply accusing legacy media of bias, an approach that has only intensified scrutiny.
As governments and platforms grapple with AI governance, the Grok episode underscores a central dilemma of the generative era: when tools are built to shock, entertain, and go viral, the cost is often borne by those with the least power to opt out.
