In a major escalation of global scrutiny of AI-generated harmful content, Britain’s media regulator Ofcom has launched a formal investigation into Elon Musk’s social media platform X over allegations that its generative AI chatbot Grok was used to create and disseminate sexualised deepfake images, including intimate depictions of real people and potentially unlawful content involving minors.
The probe, which began on January 12, 2026, centres on whether X failed to fulfil its legal obligations under the UK’s Online Safety Act by allowing sexually explicit AI images to be generated and shared on its platform without sufficient safeguards to prevent harm.
What Triggered the Investigation
According to Ofcom’s official statement, the watchdog received “deeply concerning reports” that Grok’s image-generation feature was used to create and circulate undressed or sexualised images of individuals — possibly amounting to non-consensual intimate imagery or child sexual abuse material (CSAM) under UK law.
Under the Online Safety Act, platforms providing services to UK users have a legal duty to prevent illegal content, including violent, sexually abusive or exploitative material, and to take effective steps to minimise risks, especially to children and vulnerable users. Ofcom’s investigation will examine whether X complied with these responsibilities.
The regulator also said it will assess whether X conducted adequate risk assessments for UK users before making Grok’s capabilities widely available.
How X Has Responded So Far
In response to mounting criticism, X and its parent AI company xAI have implemented technical restrictions on the Grok chatbot’s image features. These include:
- Blocking Grok’s ability to edit or undress photos of real people in jurisdictions where such content is illegal
- Limiting image creation and editing to paid subscribers, to help establish accountability where misuse occurs
However, regulators and safety advocates argue that these measures — including geoblocking based on user location — may not be sufficient, and that enforcement must be robust and verifiable to fully protect users.
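The layered restrictions described above amount to an ordered policy check: first a jurisdiction-based block on edits depicting real people, then a paid-account requirement for any image editing. The sketch below is an illustration of that logic only; the function name, field names, and jurisdiction codes are assumptions for the example, not X's or xAI's actual implementation.

```python
# Illustrative sketch of a layered safety gate for an image-editing request.
# The jurisdiction list and parameter names are hypothetical.

RESTRICTED_JURISDICTIONS = {"GB", "FR"}  # example codes, not an actual list

def may_edit_image(user_country: str,
                   is_paid_subscriber: bool,
                   depicts_real_person: bool) -> bool:
    """Apply the two restrictions in order:
    1. Block edits of real people's photos in restricted jurisdictions.
    2. Require a paid (and therefore identifiable) account for any editing."""
    if depicts_real_person and user_country in RESTRICTED_JURISDICTIONS:
        return False
    if not is_paid_subscriber:
        return False
    return True
```

As the critics quoted above note, a gate keyed to reported location is only as strong as its location verification; a user behind a VPN may present a different `user_country` than their real one, which is why regulators want enforcement that is "robust and verifiable".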
Ofcom has welcomed the restrictions but stressed that the formal investigation remains ongoing, saying it still needs answers about “what went wrong and what’s being done to fix it.”
Political Pressure and Government Support
The investigation has drawn strong statements from British political leaders. Prime Minister Keir Starmer condemned Grok-generated sexual imagery as “disgusting” and “unlawful,” asserting government support for Ofcom’s actions and urging immediate compliance with UK laws.
British Technology Secretary Liz Kendall also publicly welcomed the probe, emphasising the importance of swiftly concluding the investigation to protect the public and the victims of online abuse.
Legal Stakes Under the Online Safety Act
Ofcom’s investigation under the Online Safety Act carries significant potential consequences for X if non-compliance is found. The regulator can:
- Order corrective actions to remedy breaches
- Impose fines of up to £18 million or 10% of global revenue (whichever is greater)
- Ask courts to impose business disruption measures, including blocking payment systems, advertising services, or even internet access in the UK if harmful content continues unabated
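The “whichever is greater” fine cap works out as a simple maximum of the two figures. The calculation below illustrates how the cap scales with revenue; the revenue numbers are hypothetical examples, not estimates of X’s finances.

```python
# Maximum fine under the Online Safety Act: the greater of a fixed
# £18 million or 10% of qualifying worldwide revenue.
FIXED_CAP_GBP = 18_000_000

def max_osa_fine(global_revenue_gbp: float) -> float:
    """Return the statutory maximum fine for a given global revenue."""
    return max(FIXED_CAP_GBP, 0.10 * global_revenue_gbp)

# For a small platform, the fixed £18m cap dominates;
# for a platform with, say, £3bn in revenue, the 10% figure does.
print(max_osa_fine(100_000_000))    # fixed cap applies: 18000000
print(max_osa_fine(3_000_000_000))  # 10% applies: 300000000.0
```

In practice this means the revenue-based cap takes over once qualifying worldwide revenue exceeds £180 million, which is why the ceiling is so much higher for the largest platforms.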
These powers reflect a broader trend toward holding digital platforms accountable for AI-generated harmful content, particularly when it intersects with privacy violations and child protection laws.
Growing Global Scrutiny of Grok
The UK investigation is only one of several international responses to concerns over Grok’s image-generation capabilities. Authorities in countries including Canada, Japan, France, India, and parts of Southeast Asia have also raised concerns or launched their own inquiries into sexualised AI deepfakes, highlighting a global reckoning with generative AI safety and ethics.
Regulators are increasingly emphasising that tech platforms must proactively detect and remove illegal or harmful content, especially content that could exploit or endanger minors.
About the author – Rehan Khan is a law student and legal journalist with a keen interest in cybercrime, digital fraud, and emerging technology laws. He writes on the intersection of law, cybersecurity, and online safety, focusing on developments that impact individuals and institutions in India.
