Artificial intelligence is rapidly reshaping the cybercrime landscape, lowering technical barriers for ransomware operators and data extortionists while giving defenders a glimpse of the escalating threats ahead. Recent discoveries — from AI-written ransomware to extortion operations powered by large language models — show that technology once hailed for its productivity gains is being retooled to scale criminal enterprises.
First Glimpse of AI-Powered Ransomware
Malware researchers recently uncovered what they called the first AI-driven ransomware, dubbed PromptLock. While not fully functional, variants targeting both Windows and Linux systems have surfaced on VirusTotal, suggesting cybercriminals are experimenting with integrating AI into malware.
Though PromptLock remains rudimentary compared to established families like Qilin or INC, its existence signals a shift: ransomware development is no longer just a theoretical AI threat. Analysts say even limited functionality demonstrates how generative AI tools can accelerate malware design, reduce development time, and assist low-skilled attackers.
AI in Extortion: From Reconnaissance to Negotiation
At nearly the same time, Anthropic disclosed that attackers used its Claude Code AI tool in an extortion campaign that hit 17 organizations. Criminals reportedly applied AI across the entire lifecycle — reconnaissance, exploitation, malware creation, and ransom demands — with extortion sums ranging between $75,000 (≈ ₹62.6 lakh) and $500,000 (≈ ₹4.18 crore).
Meanwhile, a new ransomware-as-a-service group, Global Group, has adopted AI chatbots for victim negotiations. Instead of human operators, victims interact with an automated agent over Tor-based portals, receiving escalating threats and million-dollar ransom notes. Security analysts warn that this automation allows cybercriminal groups to scale faster by reducing affiliate workload.
The Rise of Agentic AI in Cybercrime
Experts say the next frontier is agentic AI — autonomous agents capable of reconnaissance, targeting, and automated infection at scale. At a Congressional hearing earlier this summer, one witness testified that GenAI-enabled scams had spiked 456% in a year, fueled by deepfakes, synthetic phishing campaigns, and AI-driven extortion, and warned lawmakers that the affiliate-based ransomware model may be replaced by AI-run operations that no longer need human partners to deploy attacks.
Industry researchers have already documented AI’s role in voice cloning, phishing lure creation, malware debugging, and vulnerability exploitation. Google’s Threat Intelligence Group and analysts confirm that AI is speeding up ransomware coding, reconnaissance, and exploit discovery — in effect becoming a “fuzzer on steroids.”
A Convergence of Old Tactics and New Tools
Although AI is not yet advanced enough to fully replace ransomware affiliates, it is enhancing their efficiency and reach. By automating extortion messaging, reconnaissance, and malware development, cybercriminals can run larger campaigns with less effort.
For defenders, this creates a new urgency: the line between traditional, human-driven cybercrime and AI-augmented operations is narrowing quickly. This evolution is less a distant risk and more an “inevitable progression.”
Analysts warn that while AI is the “shiny new tool” in the criminal arsenal today, it will not be the last. With cybercriminals quick to adopt any technology that improves scalability and believability, defenders face an accelerating arms race where automation favors the aggressor.