SEOUL — When a South Korean journalist opened what appeared to be an email from the country’s military, it came with a strikingly authentic attachment: a military identification card, complete with official-looking seals and formatting. What the recipient could not see was that the document had been fabricated by artificial intelligence.
According to research published by the South Korean cybersecurity firm Genians, the phishing attempt was the work of Kimsuky, a suspected North Korea–linked espionage group with a long history of targeting government agencies, academics, and activists. The fake ID, researchers said, was generated with the help of ChatGPT, marking another step in the use of commercial AI tools to supercharge cyber operations.
A Familiar Adversary, New Tools
The United States Department of Homeland Security has previously described Kimsuky as “most likely tasked by the North Korean regime with a global intelligence-gathering mission.” The group’s activities have included spear-phishing campaigns against South Korean think tanks and efforts to collect intelligence on U.S. foreign policy.
The latest attack, uncovered in July, demonstrates how easily publicly available AI can be repurposed for espionage. When asked directly to produce a government ID, ChatGPT refused, Genians researchers said. But with carefully modified prompts, the hackers were able to bypass restrictions and create a realistic draft of a South Korean military identification card.
The phishing email, designed to appear credible, carried a malicious link rather than an attached image file. Victims who clicked risked downloading malware capable of siphoning sensitive data. Targets included South Korean journalists, researchers, and human rights activists monitoring North Korea.
Expanding the Scope of AI-Powered Intrusions
North Korea’s cyber operations have increasingly embraced artificial intelligence to bolster deception. Anthropic reported in August that North Korean hackers had used its Claude tool to pass coding assessments and secure remote jobs with U.S. tech companies, embedding themselves inside corporate networks.
American officials have long warned that Pyongyang’s cyber strategy extends beyond intelligence gathering to include cryptocurrency theft and IT fraud, both of which help finance the regime’s nuclear weapons ambitions.
For researchers, the Genians report underscores a broader concern: that emerging AI systems can accelerate every stage of cyberattacks, from malware development to impersonation. “Attackers can leverage AI during the planning process, for building tools, and to make their messages more convincing,” said Mun Chong-hyun, a director at Genians.
While the scale of this latest campaign remains unclear, experts warn that the ease of combining generative AI with traditional hacking tactics points to a future in which the line between the authentic and the artificial grows increasingly difficult to discern.
