This post is also available in:
עברית (Hebrew)
Security researchers have uncovered a method that allows attackers to manipulate Google’s Gemini AI assistant by hiding invisible instructions within emails. The flaw enables malicious actors to embed commands that Gemini will follow when summarizing email content—potentially leading users to phishing scams or fake support lines.
The technique relies on a form of prompt injection, a growing concern in the context of AI assistants. In this case, researchers at security firm 0din demonstrated that attackers can embed seemingly harmless text inside an email using HTML and CSS styling tricks. For example, by setting the font size to zero and the text color to white, attackers ensure the prompt is invisible to the recipient but still processed by the AI.
Once the user interacts with the email—for instance, by clicking “Summarize this email” in Gmail—Gemini generates a manipulated response. One test showed Gemini inserting a warning that the user’s password had been compromised, along with a phone number supposedly for support. The entire message appeared to come from Google itself, despite being injected by an attacker.
The underlying issue is that Gemini, like many language models, interprets text content without fully accounting for visual formatting or distinguishing between intended content and embedded instructions. This is not unique to Gemini; similar weaknesses have been observed in other AI-powered assistants.
According to the researchers, the risk is considered moderate. Attackers can send poisoned content, but the exploit depends on user interaction—such as clicking a summarization button or requesting help with document content. The same injection technique could be extended to other Google Workspace tools like Docs, Drive, and Slides, wherever Gemini processes third-party text.
The research highlights a broader challenge: current AI systems lack strong safeguards to isolate context or verify instructions. Until more robust defenses are built into these models, experts say that users need to proceed with caution with any user-facing AI assistant.