Cyber Crime
AI-Powered Cyber Threats: Hackers Exploit Google’s Gemini

More than 57 distinct threat actors linked to China, Iran, North Korea, and Russia have been observed leveraging Google’s AI-powered Gemini to advance their cyber and information warfare operations, according to a new report from Google Threat Intelligence Group (GTIG).
While these government-backed hacking groups, known as Advanced Persistent Threats (APTs), have not yet developed novel capabilities using AI, they are increasing their efficiency in key cyber operations, such as coding malicious scripts, conducting reconnaissance, researching vulnerabilities, and evading detection.
Iranian Hackers Lead AI Exploitation
GTIG identified Iranian APT groups as the most aggressive users of Gemini, particularly APT42, which accounts for over 30% of AI-related cyber activity from Iran.
APT42, also tracked as Charming Kitten and Mint Sandstorm, has used Gemini for:
- Crafting phishing campaigns targeting defense organizations
- Conducting reconnaissance on security professionals
- Generating cybersecurity-themed content for social engineering attacks
Iranian hackers have also researched military and weapons systems, analyzed China’s defense industry trends, and studied U.S. aerospace technology.
China, Russia, and North Korea’s AI-Enabled Cyber Tactics
- Chinese APT groups are using Gemini for network reconnaissance, privilege escalation, data exfiltration, and bypassing security measures.
- Russian APT groups have primarily used Gemini to rewrite existing malware in different programming languages and to add encryption layers to their code.
- North Korean APT groups have leveraged Gemini to research infrastructure and hosting providers, as well as for job application scams aimed at infiltrating Western companies.
Notably, North Korean-backed hackers have used Gemini to draft cover letters, research job salaries, and study employee exchange programs—an effort believed to support placing undercover IT workers inside Western organizations.
Underground AI Threats: The Rise of Malicious LLMs
Hackers are not only misusing Gemini but also developing rogue AI models tailored for cybercrime. Google has flagged underground AI tools such as WormGPT, WolfGPT, EscapeGPT, FraudGPT, and GhostGPT, which are being sold on hacking forums. These unregulated AI models enable:
- Sophisticated phishing attacks
- Business Email Compromise (BEC) scams
- Creation of fraudulent websites
Google’s Cyber Defense Efforts
Google has deployed countermeasures to prevent prompt injection attacks and AI misuse but stresses the need for stronger public-private collaboration. “American industry and government need to work together to support our national and economic security,” Google stated, highlighting the urgency of disrupting AI-driven cyber threats.

As AI continues to reshape the cybersecurity landscape, the battle between hackers and defenders is evolving rapidly, with threat actors increasingly weaponizing AI to enhance cyberattacks and influence operations worldwide.