London | January 12, 2026 | The UK’s media and online safety regulator Ofcom has launched a formal investigation into social media platform X (formerly Twitter) over concerns that its artificial intelligence chatbot Grok is being misused to generate and circulate obscene and non-consensual deepfake images, including material involving children.
In a statement issued on Monday, Ofcom said it had received “deeply concerning reports” suggesting that Grok’s image-generation feature was being exploited to create digitally altered images of real individuals that are nude or otherwise highly objectionable. The regulator said it was particularly alarmed by reports involving minors, noting that the creation or distribution of such material constitutes a serious criminal offence under UK law.
If violations are confirmed, X could face penalties of up to 10% of its global annual revenue or £18 million (approximately ₹187 crore), whichever is higher. In cases of serious or repeated non-compliance, Ofcom also has the authority to seek court orders requiring internet service providers to block access to the platform within the UK.
Responding to the probe, X pointed to an earlier statement from its Safety team, which said users attempting to generate or share illegal or objectionable content using Grok would face consequences equivalent to those imposed for directly uploading such material. Platform owner Elon Musk later criticised the investigation, alleging that the UK government was looking for “any excuse for censorship” and questioning why similar scrutiny was not being applied to other AI platforms.
Investigators have identified multiple instances of digitally manipulated images circulating on X in which women were depicted nude or placed in obscene scenarios without consent. In one documented case, a woman claimed that more than 100 such images had been created using AI tools and shared online, highlighting the scale at which deepfake abuse can spread once safeguards fail.
UK Technology Secretary Liz Kendall welcomed Ofcom’s move and urged swift action. “It is vital that this investigation is completed as quickly as possible because the public — and most importantly the victims — will not accept any delay,” she said.
Former technology secretary Peter Kyle described the situation as “appalling”, suggesting that Grok appeared to have been released without adequate testing or safety controls. He referred to a case in which a woman’s image had been manipulated by AI and placed in an extremely offensive historical context, calling the incident deeply disturbing.
Other lawmakers have also expressed concern. Northern Ireland politician Cara Hunter said she decided to leave the platform after being targeted by deepfake abuse. Downing Street, meanwhile, said the government remained focused on protecting children online and that its engagement with X was “under review”, adding that “all options remain on the table”.
What the investigation will examine
Ofcom said the probe would assess whether X failed to remove illegal or objectionable content promptly once it became aware of it, and whether the platform took adequate steps to prevent UK users from accessing such material. The regulator will also examine whether X has implemented “highly effective age-assurance measures” to prevent children from being exposed to obscene or harmful imagery.
The investigation follows international backlash over Grok’s image-generation feature. Authorities in Malaysia and Indonesia recently moved to temporarily restrict access to the tool, citing concerns over explicit deepfake content.
An Ofcom spokesperson said no fixed timeline had been set for the inquiry but stressed it would be treated as a “matter of the highest priority”. Legal experts noted that the regulator has broad discretion over the pace and scope of enforcement. Lorna Woods, professor of internet law at the University of Essex, said Ofcom could seek a business disruption order — including blocking access to X — earlier than usual, although such measures are typically reserved for exceptional circumstances.
The case is expected to intensify scrutiny of AI-generated content and platform accountability under the UK’s online safety framework, with potentially far-reaching implications for global technology companies operating in the country.
About the author — Suvedita Nath is a science student with a growing interest in cybercrime and digital safety. She writes on online activity, cyber threats, and technology-driven risks. Her work focuses on clarity, accuracy, and public awareness.
