Google has issued a serious warning to its 1.8 billion Gmail users worldwide about a new form of cyberattack powered by artificial intelligence. The attack, known as indirect prompt injection, involves hackers embedding malicious commands inside seemingly harmless emails, documents, or calendar invites.
In a detailed blog post, Google warned that this attack vector represents a significant threat to individuals, businesses, and governments. Unlike traditional phishing scams, these hidden instructions exploit generative AI systems to make them reveal sensitive information or perform unauthorized actions without the user realizing it.
Data Protection and DPDP Act Readiness: Hundreds of Senior Leaders Sign Up for CDPO Program
How Indirect Prompt Injections Work?
According to Google’s statement, generative AI is increasingly being integrated into everyday tools, from email to productivity software. Hackers are exploiting this reliance by planting instructions inside content that AI models, such as Google’s Gemini assistant, later process. Cybersecurity experts explained that once the AI reads these concealed prompts, it may unknowingly execute harmful actions, such as disclosing login details, leaking confidential data, or even altering system behaviours.
Tech specialist Scott Polderman, cited in multiple reports, said hackers are manipulating Google’s Gemini assistant to steal passwords. By embedding hidden commands in emails, attackers can trick Gemini into generating messages that falsely alert users or coax them into giving up sensitive credentials. Unlike conventional scams, no malicious link is required, the AI itself delivers the harmful outcome.
Growing Concern for Businesses and Governments
Google emphasized that as AI adoption accelerates across sectors, the potential for misuse through such AI-versus-AI cyberattacks is rising. “With the rapid adoption of generative AI, a new wave of threats is emerging across the industry,” Google noted, urging organizations to adopt robust AI safety measures.
Experts warn that the attacks are difficult to detect because the malicious commands are invisible to human users but are recognized by AI systems. Security analysts believe this could lead to a surge in AI-manipulated cybercrimes affecting not only personal accounts but also critical infrastructure.
Google reminded users that it will never request passwords or financial information through its AI platforms. The company is working on advanced detection tools and has advised both individuals and enterprises to remain vigilant.
