AI Without Limits: The Dangerous Growth of WormGPT and Its Clones

Meet WormGPT: The Chatbot That Doesn’t Say “No” to Hackers

The420 Web Desk

As generative AI reshapes industries, it’s also supercharging the criminal underground. From WormGPT to DarkGPT, unguarded models are arming hackers with automation once unimaginable — and exposing the world to an era of industrialized cybercrime.

The Birth of ‘Dark AI’

In June 2023, just seven months after OpenAI introduced ChatGPT’s “research preview,” a lesser-known chatbot called WormGPT quietly appeared online, aimed not at students or coders but at hackers. Its creator offered a stripped-down large language model (LLM) without safety checks or moral filters. Users could pay €500 (roughly ₹51,070) per month to generate phishing templates, malware scripts, or scam emails without any of the polite refusals typical of mainstream chatbots. Within months, WormGPT had gained more than 200 subscribers, some paying thousands for private installations.

Security researcher Brian Krebs later revealed the developer’s identity as Rafael Morais, who claimed his tool was meant to be “neutral and uncensored,” not criminal. But WormGPT’s design spoke for itself: it was optimized for those who needed a chatbot that didn’t say “no.”

Dark AI vs. Misused AI

Mainstream AI systems have built-in guardrails meant to block harmful or illegal outputs. But as cybersecurity experts point out, those safeguards can be easily circumvented. A prompt disguised as “fictional writing” can trick a chatbot into producing scam templates or malware instructions.
A 2025 arXiv study found that 14 of 17 leading LLMs tested were vulnerable to “inter-agent trust” exploits, in which one AI system manipulates another simply because the second system trusts requests from other agents by default.

The rise of dark AI tools — FraudGPT, Evil-GPT, XXXGPT, and keanu-WormGPT — has given the cyber underground new capabilities. These clones, distributed through darknet markets and Telegram groups, sell unrestricted models capable of generating ransomware code, spoofing emails, or mimicking human writing styles to evade detection.


Fighting Fire with Fire

Security researchers say the only way to counter this new wave of AI-driven cybercrime may be to fight AI with AI.

“Anyone with a GPU and a little technical know-how can now fine-tune a model for malicious use,” says Crystal Morin, a senior strategist at Sysdig and former U.S. Air Force intelligence analyst. “That’s exactly how threat actors are bypassing safeguards.”
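On the defensive side, “fighting AI with AI” often means turning the same class of models on attacker output, for instance scoring inbound email for machine-generated phishing. The sketch below is a minimal illustration using the open-source Hugging Face transformers library; the model name is a hypothetical placeholder, not a reference to any real detection product.

# Minimal sketch: flag likely phishing emails with a text-classification model.
# "example-org/phishing-detector" is a hypothetical checkpoint used for illustration.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="example-org/phishing-detector",  # placeholder, swap in a real checkpoint
)

def score_email(subject: str, body: str) -> dict:
    """Return the classifier's label and confidence score for one email."""
    text = f"Subject: {subject}\n\n{body}"
    # truncation=True keeps long emails inside the model's context window
    result = classifier(text, truncation=True)[0]
    return {"label": result["label"], "score": round(result["score"], 3)}

if __name__ == "__main__":
    verdict = score_email(
        "Urgent: verify your payroll account",
        "Dear employee, your salary is on hold until you confirm your login here...",
    )
    print(verdict)  # expected shape: {'label': 'phishing', 'score': 0.97}

In practice, classifiers like this sit alongside traditional spam and malware filters rather than replacing them, since any single model can be evaded in much the same way mainstream chatbot guardrails are.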

In the two years since the first dark models surfaced, ransomware attacks have spiked, cloud exploits have surged, and the average cost of a data breach has reached record highs.
Tech-savvy criminals can now self-host open-source models, training them on vast databases of stolen credentials, phishing kits, and leaked malware. The result: AI tools that can automate entire hacking workflows, from reconnaissance to payload delivery, with chilling efficiency.

The Unrung Bell

The uncomfortable truth, experts say, is that there’s no going back. Generative AI has democratized creation — and, by extension, destruction. Once the models are open-sourced, they can’t be recalled or re-caged. “We can’t unring the AI bell,” as one analyst put it. What was once a high-skill, high-risk activity now takes little more than a prompt. The convergence of accessible hardware, leaked datasets, and open LLMs has blurred the line between hacker and hobbyist.

As regulators scramble to contain the fallout, the question is no longer whether AI will be misused — but how societies will adapt to a future where every innovation carries its own exploit code.
