New Delhi | Cybercriminals are increasingly exploiting the popularity of artificial intelligence tools, and a recent case shows how AI-powered search results can be manipulated to spread malware. A fake installer for the AI agent OpenClaw was reportedly promoted through Bing’s AI search suggestions, leading unsuspecting users to download malicious software that compromised their systems.
According to cybersecurity researchers, users searching for “OpenClaw Windows” on Bing were shown a GitHub repository link in the AI-generated search suggestions. The repository appeared legitimate but was actually malicious. Anyone who downloaded and executed the installer unknowingly infected their computer with multiple types of malware.
Investigations revealed that the fake repository was active on GitHub between February 2 and February 10. During this period, several users downloaded the installer believing it to be genuine. The issue came to light after security researchers detected suspicious activity linked to the installer and began analyzing the files distributed through the repository.
Experts say the scam succeeded mainly because of two factors. First, the malware was hosted on GitHub, a widely trusted platform for open-source projects. Since the real OpenClaw project has thousands of forks on GitHub, users were more likely to assume that the repository hosting the installer was legitimate.
The second factor was the credibility provided by Bing’s AI search results. By uploading the malicious repository and manipulating search visibility, attackers managed to push the fake download link to the top suggestion when users searched for “OpenClaw Windows.” As a result, many users believed it to be the official source for downloading the software.
Security researchers identified the threat on February 9, after a user downloaded and ran the installer. Further investigation revealed that the file triggered the installation of several data-stealing programs once executed on the system.
Analysis of the repository showed that much of the visible code had been copied from a legitimate open-source project, lending it an authentic appearance. The actual payload, however, was a hidden executable placed in the project’s “releases” section: a file named OpenClaw_x64.exe, packed inside a 7-Zip archive.
When the file was executed, it deployed multiple malicious components on the victim’s computer. One of the primary payloads was Vidar Stealer, a well-known malware designed to extract sensitive information from infected devices. It can steal credentials and account data from services such as Telegram and Steam, along with other stored information.
Another piece of malware delivered through the installer was GhostSocks, a proxy malware that converts infected computers into residential proxy nodes. Cybercriminals often use such compromised systems to route malicious traffic, hide their real locations, and access stolen accounts without triggering fraud detection systems.
Researchers also discovered that the attackers used a previously unseen tool known as Stealth Packer to conceal the malware. This tool helps attackers evade detection by hiding malicious code and performing several stealth operations on infected systems. These include creating hidden scheduled tasks, modifying firewall rules, and performing checks to detect whether the malware is running inside a virtual machine before activating the payload.
After the incident was reported, GitHub removed the malicious repository and the associated accounts. However, researchers warned that several similar accounts and organizations had been created to distribute comparable malware, suggesting that the campaign may have been larger than initially believed.
Cybersecurity experts say the incident demonstrates how quickly attackers exploit trending technologies to target users. With AI tools gaining popularity, fake installers and malicious downloads are becoming increasingly common.
Experts advise users to download software only from official sources and verify the authenticity of repositories before installing anything on their systems. They also recommend running new AI tools in isolated environments, restricting their access to sensitive data, and avoiding granting them high-level system permissions.
Taking these precautions, researchers say, can significantly reduce the risk of malware infections and protect users from sophisticated cyber threats linked to emerging AI technologies.
