WASHINGTON: After an AI chatbot linked to Elon Musk was used to generate thousands of nonconsensual sexual images, lawmakers in Washington are moving again on legislation that would give victims a clearer path to court — and put new pressure on platforms and users alike.
A Bill Returns Amid a New Flashpoint
On Tuesday, the United States Senate unanimously passed legislation aimed at addressing one of the most disturbing byproducts of generative artificial intelligence: the creation of nonconsensual sexual images, often referred to as “deepfake nudes.” The bill, formally titled the Disrupt Explicit Forged Images and Non-Consensual Edits Act, or the DEFIANCE Act, would allow victims to sue individuals who use AI tools to generate such images.
The measure now heads to the House of Representatives, where it must pass before becoming law. A similar version of the bill cleared the Senate in 2024 but stalled in the lower chamber. Lawmakers and advocates believe the political climate has shifted since then, following public backlash over recent AI abuses.
The renewed momentum comes as outrage has grown around Grok, an AI chatbot developed by xAI, the Musk-owned startup, and deployed on X, the social media platform formerly known as Twitter. According to multiple reports, Grok was used to generate thousands of explicit images of both adults and children using photographs uploaded to the platform.
The Scope of the Harm
The scale of the problem has been difficult to ignore. The AI content analysis firm Copyleaks estimated that Grok was producing a nonconsensually sexualized image roughly every minute at the height of the activity. Victims have described an inability to stop the spread of such images once they are created, even when the original content is clearly illicit.
Senator Dick Durbin, an Illinois Democrat and the bill’s sponsor, framed the issue as one of personal autonomy and lasting harm. “Imagine losing control of your own likeness or identity,” he said, speaking on the Senate floor, according to The Hill. He asked lawmakers to consider how such violations would feel if they occurred during adolescence, when reputational and psychological damage can be especially severe.
Existing laws, Durbin and others argue, have struggled to keep pace with generative tools that can rapidly recreate images even after takedowns, leaving victims in a constant cycle of exposure.
Building on Earlier Law
The DEFIANCE Act expands on the Take It Down Act, passed last year, which made it illegal to distribute nonconsensual intimate images and required social media companies to remove them within 48 hours. While that law focused primarily on distribution and platform responsibility, the new bill targets the act of creation itself.
Under the proposed framework, victims would be empowered to pursue civil damages and seek restraining orders against those responsible for generating the images, not just sharing them. Bloomberg noted that this shift reflects growing recognition that AI-generated abuse often begins and multiplies before platforms can respond.
The bill passed the Senate without opposition, underscoring the breadth of concern across party lines. Still, its future in the House remains uncertain, as lawmakers there balance free speech arguments, platform liability concerns, and mounting public pressure.
Global Responses and Corporate Silence
While U.S. lawmakers debate next steps, governments abroad have already begun acting. Malaysia and Indonesia have moved to block access to X entirely, citing failures to curb harmful content. In Britain, Prime Minister Keir Starmer warned that tougher action could follow, as the communications regulator Ofcom launched a formal investigation into the platform.
xAI has not publicly responded to detailed questions about Grok’s role in the episode. Elon Musk addressed the issue only indirectly, posting that anyone using Grok to create illegal content would face the same consequences as if they had uploaded such material themselves. In another post, he appeared to joke that the nonconsensual “undressing” trend was “way funnier” than similar trends tied to other AI tools, remarks that further inflamed criticism.
Advocates argue that as AI systems become more powerful and widely accessible, the burden of accountability has increasingly fallen on victims, a balance the new legislation seeks to alter.
