London | January 13, 2026 | The UK government has issued a clear warning that social media platform X could lose its right to self-regulate if it fails to rein in serious violations linked to its AI chatbot Grok. The warning comes amid growing nationwide concern over the creation and circulation of non-consensual, explicit AI-generated images, particularly targeting women.
Addressing Labour MPs in Parliament, Prime Minister Keir Starmer said that if digital platforms are unable or unwilling to control their systems, the government will step in decisively. “Accountability for digital platforms is no longer optional—it is essential,” he said, signalling a tougher regulatory approach.
New Law, Stricter Enforcement of Existing Provisions
The government announced that legislation explicitly criminalising the creation of non-consensual intimate images will be brought into force immediately. The provision had already been passed under the Data (Use and Access) Act, but had not yet been implemented.
In addition, authorities are preparing to classify such offences as “priority crimes” under the Online Safety Act. This move would require platforms to act swiftly to detect, remove and prevent the spread of such material, while exposing them to harsher penalties for non-compliance.
Officials said the objective is to close enforcement gaps that have allowed harmful AI-generated content to proliferate despite existing safeguards.
Crackdown on AI Tools and App Providers
The proposed measures go beyond user-generated content. The government is also considering legislation to criminalise the supply of online tools that are designed or knowingly used to create non-consensual intimate images. This includes so-called “nudification” apps and AI-powered image manipulation tools.
According to officials, the intent is to tackle the problem at its source. “Liability will not stop with the user,” a government briefing noted. “Companies that provide the technology enabling such abuse will also be held responsible.”
Ofcom Probe and Possible Heavy Penalties
Britain’s media and digital regulator Ofcom has already launched a formal investigation into X. The probe is examining whether the platform failed to remove illegal content in a timely manner and whether it put adequate safeguards in place to protect UK users.
If violations are established, Ofcom has the authority to impose fines of up to £18 million (approximately ₹190 crore) or 10% of a company’s global annual turnover, whichever is higher. In extreme cases, regulators may seek court orders to block access to the platform in the UK.
Ofcom said the investigation is being conducted on a fast-track basis, given the seriousness of the alleged breaches.
‘Not a Free Speech Issue,’ Says Government
The government has rejected claims that the proposed measures threaten freedom of expression. Officials stressed that the issue concerns digital abuse and exploitation, particularly against women and minors.
Non-consensual AI-generated images, they said, are “not harmless content but instruments of harassment, coercion and reputational harm.” Under the new framework, legal responsibility would extend beyond creators to platforms that host or fail to act against such material.
Global Fallout and Rising Pressure
The controversy surrounding Grok’s image-generation capabilities has triggered responses beyond the UK. In recent weeks, Malaysia and Indonesia temporarily restricted access to the tool amid similar concerns.
In Britain, several women have reported discovering hundreds of explicit AI-generated images of themselves circulating online without consent, intensifying calls for urgent action.
A Test Case for Digital Accountability
The Grok episode is emerging as a critical test for how governments balance AI innovation, platform responsibility and user safety. The UK government’s position is unambiguous: if platforms do not enforce safeguards voluntarily, the law will compel them to do so.
As AI technologies evolve at breakneck speed, the case underscores the growing consensus that regulatory frameworks must move just as fast to prevent misuse and protect fundamental rights in the digital age.
About the author — Suvedita Nath is a science student with a growing interest in cybercrime and digital safety. She writes on online activity, cyber threats, and technology-driven risks. Her work focuses on clarity, accuracy, and public awareness.
