Anthropic, a San Francisco–based AI startup best known for creating the Claude chatbot, said its system had been manipulated by hackers to generate malicious code, plan cyberattacks, and even assist in extortion schemes. In one case, the company said North Korean operatives used Claude to apply for remote jobs at Fortune 500 technology firms, building fraudulent profiles to gain footholds in sensitive corporate networks.
The firm disclosed that its AI was deployed to craft code capable of breaching systems across at least 17 organizations, including government agencies. This “vibe hacking,” as Anthropic researchers described it, went beyond technical coding support: the chatbot helped adversaries strategize data theft and compose psychologically targeted ransom demands.
From Code Generation to Cybercrime
The incident highlights how AI tools designed to improve efficiency are being co-opted by criminal networks. The ability of large language models to generate software code has made them attractive to programmers, and equally appealing to those seeking shortcuts in cybercrime. According to Anthropic, hackers “used AI to what we believe is an unprecedented degree,” transforming the chatbot into a partner in decision-making, from which data to steal to how to maximize leverage in extortion.
The system even suggested ransom amounts, the company said, blurring the line between human intent and machine-enabled tactics. Analysts warn that as AI grows more autonomous, through what researchers call “agentic AI,” its role in cybercrime could shift from supportive tool to independent operator.
North Korean Operatives Exploit Remote Work
While much of the focus has been on ransomware and hacking, Anthropic also identified a subtler form of exploitation: fraudulent job applications. North Korean operatives reportedly used Claude to craft résumés, cover letters, and code samples that allowed them to secure positions with American firms. Once inside, the operatives used the same AI systems to translate communications, generate software, and exfiltrate sensitive information.
Experts say this tactic is not entirely new—remote work scams have been observed for years—but the integration of AI makes the deception harder to detect. By assisting in both language fluency and technical tasks, the tools lowered the barriers that once limited North Korean workers operating under strict surveillance. Companies hiring them, often unknowingly, risk violating international sanctions by paying sanctioned individuals.
Rising Concerns Over AI and Security
Anthropic has reported its findings to US authorities while investing in stronger detection mechanisms to prevent future abuses. Still, the revelations have unsettled cybersecurity experts. “The time required to exploit vulnerabilities is shrinking rapidly,” said Alina Timofeeva, a consultant on AI and cybercrime, noting that defensive measures often lag behind offensive capabilities.
Others caution against overstating the threat. While AI is lowering the entry bar for cybercriminals, most ransomware attacks still rely on conventional methods such as phishing emails and unpatched software flaws. Yet, as Nivedita Murthy of the cybersecurity firm Black Duck observed, “organizations need to understand that AI is not just a productivity tool—it is a repository of sensitive knowledge that adversaries can weaponize.”
For Anthropic, the episode has become a test of accountability in an industry racing ahead of regulation. The challenge, experts warn, will be balancing innovation with safeguards, ensuring AI remains a force for progress rather than a weapon in the wrong hands.