Security researchers say a popular self-hosted AI agent platform has quietly become a new vector for malware distribution, as hundreds of seemingly legitimate automation tools were found to conceal trojans, infostealers and backdoors targeting Windows and macOS users.
A Marketplace Built on Trust
When developers turn to OpenClaw, they are typically looking for flexibility. The self-hosted AI agent is designed to execute shell commands, manage files and make network requests directly on a user’s system. Its appeal lies in extensibility: third-party “skills” distributed through the ClawHub marketplace allow developers to bolt on new capabilities, packaging scripts and instructions as reusable tools.
That same openness, researchers now say, has created an unexpected supply-chain risk. According to findings published by VirusTotal, the skills ecosystem has been systematically abused to distribute malicious code under the guise of routine automation utilities. What appears, on the surface, to be a crypto analytics helper or a financial tracking assistant can, during setup, instruct users to download and execute external binaries or scripts — a step that bypasses many conventional safeguards.
Certified Cyber Crime Investigator Course Launched by Centre for Police Technology
Behavioral Analysis Over Promises
The discovery emerged from a new analytical approach deployed by VirusTotal, which uses advanced behavioral inspection powered by Gemini Flash. Rather than relying on how a skill describes itself, the system evaluates what the code actually does once installed.
Using this method, VirusTotal Code Insight examined more than 3,016 OpenClaw skills. Hundreds exhibited behaviors associated with compromise: downloading external payloads, accessing sensitive system data, or embedding commands capable of remote control and data exfiltration. In total, 314 skills were flagged as malicious by multiple security vendors, suggesting what researchers described as a systemic issue rather than a handful of isolated abuses.
Some skills reflected poor security hygiene, such as hard-coded secrets or unsafe command execution. Others, however, were explicitly engineered for malicious ends, including backdoor installation and credential harvesting.
Inside a Multi-Stage Attack Chain
One example, a skill branded as a “Yahoo Finance” tool, illustrated the level of sophistication involved. Analysis showed a multi-stage attack chain tailored to the user’s operating system.
On Windows machines, users were directed to download a password-protected ZIP archive containing an executable named openclaw-agent.exe. Multiple antivirus engines later identified the file as a packed trojan designed to steal sensitive information. macOS users, by contrast, received obfuscated, Base64-encoded shell scripts that fetched and executed Atomic Stealer, a known infostealer capable of harvesting browser credentials, passwords and cryptocurrency wallets.
By maintaining separate delivery paths for different platforms, researchers said, the operators increased effectiveness while evading generic detection mechanisms.
A Prolific Publisher in Plain Sight
Investigators also traced a large share of the malicious skills to a single ClawHub account, identified as “hightower6eu.” The user, researchers reported, functioned as a prolific publisher, distributing all 314 flagged skills while presenting them as legitimate automation tools.
The scale of the activity has raised broader questions about security in community-driven AI agent ecosystems. By weaponizing extensibility — a core feature meant to empower developers — threat actors were able to blend into a trusted marketplace and turn it into a malware distribution channel.
For security teams, the episode underscores a growing concern: as AI agents gain deeper access to local systems, the boundary between helpful automation and silent compromise is becoming harder to discern.
