Between April 2024 and April 2025, Microsoft reports having thwarted over $4 billion in fraud attempts, rejected more than 49,000 fraudulent partnership applications, and blocked 1.6 million bot signups per hour. These findings from Microsoft's Cyber Signals report suggest that while defenses are strengthening, attackers are evolving in step, with AI now their most powerful weapon.
Microsoft's threat intelligence reveals that criminals are leveraging AI to build fraudulent websites, stage fake job interviews, and mimic tech support staff, using AI-generated language and social engineering tactics to evade suspicion. AI-generated voice cloning, fake support portals, and automated phishing emails, in particular, are proving increasingly difficult for victims to detect.
Fake E-Commerce Stores and Job Offers in Minutes
AI tools now empower fraudsters to construct convincing fake storefronts in minutes—complete with AI-generated product descriptions, stock images, customer reviews, and even interactive chatbots. These malicious e-commerce operations deceive customers into purchasing nonexistent goods or services, often stalling refund requests using scripted responses from AI-powered customer support bots.
Job scams have also become significantly more refined. Fraudsters deploy generative AI to produce seemingly credible job listings, conduct mock interviews, and send automated offer emails, all designed to harvest personal information and financial credentials. Microsoft notes a surge in such scams, particularly those targeting job seekers with minimal experience.
Tech Support Impersonation and Microsoft’s Countermeasures
Another area of concern is tech support fraud, where threat actors use legitimate tools like Microsoft Quick Assist to remotely access devices under the guise of IT help. Microsoft has identified groups such as Storm-1811 deploying AI-driven voice phishing (vishing) techniques to manipulate users into granting access.
To counter this, Microsoft now blocks 4,415 suspicious Quick Assist connection attempts daily and has introduced warning prompts that alert users to potential scams. Capabilities such as Digital Fingerprinting and real-time threat detection are being deployed to stop scammers before they breach defenses.
Through its Secure Future Initiative, Microsoft has introduced policies requiring all internal product teams to integrate fraud detection and prevention protocols at the design level. It has also joined the Global Anti-Scam Alliance to work collaboratively with governments and law enforcement bodies across the globe.