AI Chatbots At Risk

Cybersecurity Alert: Hidden Image Commands Can Compromise AI Systems

The420.in Staff
3 Min Read

AI models often automatically downscale large images before processing them, a feature intended to optimize performance. However, cybersecurity researchers at Trail of Bits discovered that this resizing can be exploited. High-resolution images that appear ordinary to the human eye may contain hidden instructions that become visible only when the AI scales the image down. These “invisible” commands act as a form of prompt injection, which the AI can execute without the user’s knowledge.

In demonstrations, researchers successfully exploited platforms including Google’s Gemini CLI, Gemini’s web interface, and Google Assistant. In one striking example, a single image could trigger the AI to access a user’s Google Calendar and email the information to an attacker—entirely without user consent.

Final Call: Be DPDP Act Ready with FCRF’s Certified Data Protection Officer Program

“This is not a bug in the AI’s reasoning, but in how it perceives images,” said one researcher. “The vulnerability lies in the automatic resizing feature that no one thought could be dangerous.”

A Sophisticated Attack Vector

The technique, now called an “image scaling attack,” is subtle yet powerful. Unlike traditional malware that relies on code execution or phishing, this attack leverages AI’s inherent image processing mechanics. By embedding instructions only detectable after downscaling, attackers can bypass human scrutiny entirely.

Trail of Bits’ research highlights the growing sophistication of attacks targeting AI models, moving beyond simple software exploits to manipulating the very inputs AI systems rely on for perception and decision-making.

Tools And Countermeasures

To aid defenders, the researchers developed a tool called Anamorpher, named after the art technique of anamorphosis, which creates images that appear distorted unless viewed in a specific way. The tool allows security professionals to test their AI systems against these hidden commands and understand how the attack works.

Protecting Against Hidden Commands

Experts recommend that AI systems should never automatically execute sensitive commands embedded in images. Users should always be presented with a preview of the image as the AI perceives it, particularly in command-line and API-based tools. Explicit consent must be required before accessing personal data or performing any critical action.

As AI integration deepens across personal and enterprise applications, vulnerabilities like these highlight the importance of defensive design, ensuring users remain in control of their own data.

Stay Connected