September 30, 2025 – In a chilling sign of how cybercrime is evolving, Microsoft has revealed details of a phishing campaign in which attackers allegedly used artificial intelligence (AI) to craft malicious code sophisticated enough to evade traditional security filters. The campaign, detected and blocked on August 18 by Microsoft Threat Intelligence, primarily targeted U.S.-based organizations and relied on Large Language Models (LLMs) to generate complex phishing payloads.
How the Phishing Attack Worked
The operation began with a fraudulent file-sharing email sent from a compromised small business account. The email, crafted to resemble a routine corporate communication, contained what appeared to be a 23MB PDF file. In reality, the attachment was an SVG (Scalable Vector Graphics) file, a format often overlooked by users and some security tools.
Inside, attackers embedded malicious code camouflaged within what looked like a business analytics dashboard, complete with chart bars and data visuals. Instead of using obvious obfuscation techniques, the payload was encoded using common business terms such as “revenue,” “operations,” and “risk,” making the file read like harmless corporate data. Once opened, the file redirected victims to a fake sign-in page designed to steal their login credentials.
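The exact encoding used in the campaign has not been published, but the idea of hiding data behind business vocabulary is easy to illustrate. The Python sketch below is an invented example, not the attackers' code: it maps an eight-word business glossary onto base-8 digits so that a hidden URL serializes into what reads like analytics keyword noise.

```python
# Illustrative sketch only -- the real campaign's encoding is not public.
# Hypothetical codebook: each business term stands for one base-8 digit.
TERMS = ["revenue", "operations", "risk", "shares",
         "growth", "quarterly", "forecast", "margin"]

def encode(payload: str) -> str:
    """Serialize each character of the payload as a run of business terms."""
    words = []
    for ch in payload:
        value = ord(ch)
        digits = []
        while True:  # express the code point in base 8, least digit first
            digits.append(TERMS[value % len(TERMS)])
            value //= len(TERMS)
            if value == 0:
                break
        words.extend(reversed(digits))
        words.append("|")  # character separator (toy convention)
    return " ".join(words)

def decode(encoded: str) -> str:
    """Reverse the encoding: turn each term run back into one character."""
    index = {term: i for i, term in enumerate(TERMS)}
    chars = []
    for group in encoded.split("|"):
        terms = group.split()
        if not terms:
            continue
        value = 0
        for term in terms:
            value = value * len(TERMS) + index[term]
        chars.append(chr(value))
    return "".join(chars)

# A redirect URL hidden as dashboard-style keyword noise:
hidden = encode("https://fake-login.example/signin")
print(hidden[:70], "...")
print(decode(hidden))
```

Because the output contains nothing but ordinary English words, keyword filters and simple entropy checks see plausible corporate data rather than an obfuscated script.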
AI’s Role in the Scam
Microsoft’s researchers determined that the unusual coding style was likely produced by an LLM rather than written by hand. According to the company, the code’s “verbosity, complexity, and lack of practical utility” suggested it was not human-authored but generated systematically by an advanced model.
To uncover this, Microsoft deployed its own AI defense tool, Security Copilot, which analyzed the over-engineered structure and assessed that AI involvement was likely. This finding underscores a worrying shift: cybercriminals are now leveraging AI to scale phishing attacks and make them harder to detect.
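Security Copilot's actual analysis is proprietary, but the stylistic cues Microsoft cites (verbosity, needless complexity, low practical utility) can be approximated with crude heuristics. The toy sketch below, with invented features and no tuned thresholds, scores a JavaScript-like source string on a few such signals purely to make the idea concrete:

```python
import re

def verbosity_signals(source: str) -> dict:
    """Crude stylistic features that tend to be inflated in machine-generated code."""
    # Naive identifier scan; keywords are counted too, which is fine for a toy.
    identifiers = re.findall(r"\b[A-Za-z_][A-Za-z0-9_]*\b", source)
    long_names = [name for name in identifiers if len(name) > 20]
    lines = source.splitlines()
    comments = [ln for ln in lines if ln.strip().startswith("//")]
    return {
        "avg_identifier_length": sum(map(len, identifiers)) / max(len(identifiers), 1),
        "share_of_very_long_names": len(long_names) / max(len(identifiers), 1),
        "comment_density": len(comments) / max(len(lines), 1),
    }

sample = """
function initializeComprehensiveRevenueDashboardConfigurationObject() {
    // Initialize the comprehensive configuration object for the dashboard
    var quarterlyOperationalRiskAssessmentContainerElement = null;
    return quarterlyOperationalRiskAssessmentContainerElement;
}
"""
print(verbosity_signals(sample))
```

High values on features like these are a weak signal at best; in practice they would only feed a broader classifier alongside behavioral evidence.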
AI vs. AI: How Microsoft Stopped the Attack
Despite the sophistication of the scam, Microsoft’s Defender for Office 365 AI-based protection systems successfully intercepted the campaign. Rather than scanning only for suspicious code, these systems keyed on behavioral red flags (illustrated in the sketch after this list), such as:
- The use of self-addressed emails with hidden BCC recipients
- The unusual pairing of file type and file name
- The final redirect to a known malicious site
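Defender for Office 365's detection logic is not public, but the red flags above map naturally onto simple rules over message metadata. In the sketch below, the field names, message shape, and blocklist are all assumptions made for illustration:

```python
MALICIOUS_DOMAINS = {"fake-login.example"}  # placeholder blocklist

def red_flags(msg: dict) -> list:
    """Return the behavioral indicators a message triggers."""
    flags = []
    # 1. Self-addressed mail with the real targets hidden in BCC.
    if msg["from"] == msg["to"] and msg.get("bcc"):
        flags.append("self-addressed email with hidden BCC recipients")
    # 2. Advertised file name contradicts the actual content type.
    for att in msg.get("attachments", []):
        if att["filename"].lower().endswith(".pdf") and att["content_type"] == "image/svg+xml":
            flags.append(f"file type/name mismatch: {att['filename']}")
    # 3. Embedded link resolving to a known-bad destination.
    for url in msg.get("urls", []):
        domain = url.split("/")[2] if "//" in url else url
        if domain in MALICIOUS_DOMAINS:
            flags.append(f"redirect to known malicious site: {domain}")
    return flags

suspicious = {
    "from": "owner@smallbiz.example",
    "to": "owner@smallbiz.example",
    "bcc": ["victim@corp.example"],
    "attachments": [{"filename": "Q3_Report.pdf", "content_type": "image/svg+xml"}],
    "urls": ["https://fake-login.example/signin"],
}
print(red_flags(suspicious))  # all three indicators fire
```

Rules like these look past the payload itself, which is why an AI-polished disguise does not help the attacker once the surrounding behavior gives the message away.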
By detecting patterns that AI-generated disguises could not fully conceal, Microsoft was able to block the attack before widespread damage occurred.
Expert Reactions: The New Cybersecurity Battlefield
Security experts warn that this incident highlights a new era of AI vs. AI cyberwarfare.
- Anders Askasen, VP of Product Marketing at Radiant Logic, said that “the frontline isn’t the payload, it’s the person behind the login.” He emphasized the need for identity observability, enabling organizations to detect when accounts act out of character.
- Andrew Obadiaru, CISO at Cobalt, noted that AI-driven phishing now creates “camouflage that blends seamlessly into enterprise workflows.” He urged companies to invest in behavioral detection, AI-aware red teaming, and faster remediation cycles.
Both experts agreed that defending against AI-scaled deception requires shifting from traditional payload inspection to monitoring user behavior and identity activity, as these remain difficult for attackers to mimic convincingly.
The Bigger Picture
While this specific phishing campaign was relatively limited, its implications are far-reaching. The use of AI to generate stealthy, business-themed code marks a turning point for phishing attacks. With AI now making scams more convincing and harder to detect, organizations will need to match innovation with innovation, deploying advanced AI-driven defenses to stay ahead of adversaries.
Summary:
Microsoft blocked a novel phishing attack in which cybercriminals allegedly used AI models to craft malicious SVG files disguised as PDFs. The code, encoded with business terms and disguised as an analytics dashboard, redirected victims to fake login pages. Experts warn that this case illustrates the beginning of AI-powered cybercrime, where security will depend increasingly on behavioral monitoring and AI-based defense systems.