Researchers Reveal Three Hidden Vulnerabilities in Gemini

Cybersecurity researchers from Tenable have revealed a trio of serious flaws in Google’s Gemini AI platform, exposing users to prompt injection attacks and data leakage. Dubbed the “Gemini Trifecta,” the vulnerabilities affect Gemini’s Cloud Assist, Search Personalization, and Browsing Tool features — each offering a unique attack vector.

The first issue was discovered in Gemini’s Cloud Assist, which analyzes and summarizes Google Cloud logs. Researchers found that Gemini was processing raw log entries directly, rather than just metadata. By injecting malicious instructions into fields such as HTTP headers, attackers could trick Gemini into executing unauthorized cloud queries. These could include scanning for public assets or misconfigurations, with results embedded into clickable links — all triggered by a user simply pressing “Explain this log entry.”

A second flaw exploited Gemini’s ability to personalize responses based on a user’s Chrome search history. Researchers demonstrated that if an attacker could manipulate this history — for example, via malicious JavaScript — it could effectively become a prompt. In controlled tests, they used this method to exfiltrate location data and saved user information by embedding instructions Gemini would later interpret as legitimate queries.

The final vulnerability involved Gemini’s Browsing Tool, which fetches live content from external websites. Researchers found they could prompt Gemini to include sensitive user data in requests sent to attacker-controlled servers. The exfiltration occurred silently, without any visual clues in the model’s output, by hiding data inside network requests triggered during browsing.

Google has since patched all three vulnerabilities following responsible disclosure. However, the research highlights systemic challenges in integrating AI with tools that process live or historical user data. When systems treat external inputs — like logs or browser history — as trusted context, they become vulnerable to prompt manipulation. Organizations deploying similar AI features are advised to apply strict input validation, limit external tool access, and monitor model behavior to reduce the risk of misuse.