The European Commission has launched a formal investigation into Elon Musk’s social media platform X — specifically over its integrated AI chatbot Grok, which has been generating sexually explicit and manipulated images, including content that may involve minors. The probe aims to determine whether X complied with its legal obligations under the EU’s Digital Services Act (DSA) and whether adequate safeguards were put in place to avoid harm to users.
The action reflects growing international concern over the misuse of AI-generated content, especially deepfakes and sexually explicit imagery, and highlights regulatory scrutiny on tech platforms that deploy generative AI without robust guardrails.
What Triggered the EU’s Investigation
The investigation was triggered after reports emerged that Grok’s AI tools — particularly its image generation and editing features — had been used to create non-consensual sexually explicit content, including digital “undressing” of individuals and deepfake imagery involving women and children. These outputs spread rapidly across the platform, drawing alarm from regulators worldwide.
According to the European Commission, Grok’s capacity to generate or manipulate images raised serious concerns about user safety and the potential for illegal content, such as sexually explicit AI content and material that may amount to child sexual abuse material (CSAM). The Commission emphasised that protecting the rights and dignity of women and minors is central to its Digital Services Act enforcement.
Regulatory Framework: The Digital Services Act (DSA)
The Digital Services Act (DSA) is a comprehensive regulatory framework that requires large online platforms operating in the EU to assess, prevent, and mitigate the spread of illegal and harmful content — including manipulated media such as deepfakes. Under the DSA, platforms can be held accountable if they fail to perform systemic risk assessments or to adopt adequate measures to curb the spread of harmful or illegal content.
If X is found to have breached these obligations, the company could face significant penalties, including fines of up to 6% of its global annual turnover, as stipulated by the DSA. The inquiry into Grok extends earlier investigations into X’s recommendation systems and content moderation practices.
Global Backlash and Related Probes
The EU’s move follows widespread international backlash:
- In the United Kingdom, the communications regulator Ofcom opened its own investigation over Grok’s creation of undressed and sexualised AI images, including those that could amount to intimate image abuse or child sexual abuse material.
- Malaysia and Indonesia temporarily blocked access to Grok as authorities reviewed risks linked to sexually explicit outputs from the AI tool.
- In the United States, multiple state attorneys general have sought explanations and safeguards from X over how the company plans to prevent the spread of abusive AI-generated content.
This convergence of regulatory pressure reflects a broader consensus among global watchdogs that AI systems must not be permitted to produce harmful content unchecked.
X’s Response and Policy Changes
In response to the outcry and regulatory scrutiny, xAI — the AI company behind Grok — announced restrictions on the chatbot’s image editing capabilities. The company said it would block users in jurisdictions where such content is illegal and restrict certain image generation features to paying subscribers. However, regulators and critics have described these steps as insufficient, given the scope and scale of the problem.
X has also stated that it enforces a zero-tolerance policy for child sexual exploitation and non-consensual sexual content, and that accounts promoting illegal content will be subject to action. Nevertheless, the ongoing European investigation will examine whether this response aligns with the DSA’s requirements.
Wider Implications for AI Safety and Tech Regulation
The EU’s investigation into Grok represents one of the most significant regulatory challenges facing AI-driven generative tools. As AI systems become more powerful and widely accessible, governments are grappling with the need to balance innovation with responsibility, safety, and digital rights protections.
Henna Virkkunen, the European Commission’s Executive Vice-President for Tech Sovereignty, Security and Democracy, has emphasised that rights — especially those of women and children — must not be treated as collateral damage in the deployment of AI technology. Regulators will use the DSA to assess whether X adequately identified and mitigated the risks posed by Grok before deploying the feature within the EU.
The findings of this investigation could set important precedents for how AI platforms are regulated globally, particularly concerning deepfake generation, non-consensual imagery, and harmful content oversight.
About the author – Rehan Khan is a law student and legal journalist with a keen interest in cybercrime, digital fraud, and emerging technology laws. He writes on the intersection of law, cybersecurity, and online safety, focusing on developments that impact individuals and institutions in India.
