Amid escalating concerns over the misuse of deepfake technology and intensifying regulatory pressure across jurisdictions, social media platform X has imposed a major restriction on its AI-powered image generation and editing features. The company has confirmed that its chatbot Grok will now allow image creation and modification only for paid subscribers.
The move follows widespread criticism that Grok was being used to generate non-consensual, sexually explicit AI images of women, triggering a global backlash and renewed scrutiny of generative AI platforms. Most users on X now see a notice stating that “image generation and editing are currently limited to paying subscribers.”
Company officials said the decision goes beyond a routine product tweak and is part of a broader strategy to limit legal exposure. By restricting access to paid users, X can link every AI-generated image to verified payment credentials such as credit cards or bank accounts. This, officials argue, significantly reduces anonymity and makes it easier to identify individuals responsible for illegal or abusive content.
“This approach shifts accountability from the platform to the user level,” said a technology policy expert tracking regulatory developments. “Regulators have long demanded a clear audit trail that establishes responsibility for misuse.”
Two weeks of escalating controversy
The controversy intensified over the past two weeks after multiple reports revealed that Grok’s image generator was capable of producing sexualised deepfakes without consent. The disclosures raised serious questions not only about X’s content safety mechanisms but also about the ethical boundaries of large-scale generative AI tools.
X owner Elon Musk publicly warned users against abusing the tool, stating on the platform that anyone prompting Grok to generate illegal content would face the same legal consequences as if they had uploaded such material themselves. Critics, however, argue that warnings alone are insufficient without strong safeguards built directly into the system.
Regulatory scrutiny across countries
The issue has now escalated to a global regulatory challenge. In the United Kingdom, authorities are examining whether X has violated provisions of the Online Safety Act, which mandates strict controls on harmful digital content. If found non-compliant, the platform could face heavy fines or even service-level restrictions.
Similarly, Indonesia has become the first nation to temporarily block Grok entirely, citing its misuse in creating explicit deepfakes involving women and children. Indonesian officials have summoned X representatives and demanded concrete changes to the platform’s content moderation framework.
Issue reaches the US Congress
In the United States, the controversy has reached Capitol Hill. Three US senators have written to Google and Apple, urging them to remove the X and Grok apps from their app stores. The lawmakers allege that the platforms violate app store policies by enabling the mass generation of sexualised images, including those involving minors.
Is a paywall enough?
Technology experts caution that while restricting image tools to paid users may reduce abuse in the short term, it does not address deeper questions around AI governance. “A paywall is a deterrent, not a solution,” said a digital ethics researcher. “The real challenge is ensuring AI systems are safe by design, with built-in protections that prevent harm before it occurs.”
For X, the episode marks another turning point in its fraught relationship with regulators since Musk’s takeover. While the company has positioned itself as a defender of free expression, governments are increasingly signalling that innovation cannot come at the cost of safety, dignity and consent.
As global scrutiny intensifies, Grok’s partial lockdown underscores a broader shift: generative AI platforms are no longer operating in a regulatory vacuum. The coming months will determine whether voluntary measures are enough—or whether stricter laws will redefine how AI tools are built and deployed.
About the author – Ayesha Aayat is a law student and contributor covering cybercrime, online frauds, and digital safety concerns. Her writing aims to raise awareness about evolving cyber threats and legal responses.