How AI Summaries Can Be Hijacked

How AI Summaries Can Be Hijacked

Byte-Sized Brief

  • Gemini can be tricked by hidden prompts.
  • Attack needs no links or downloads.
  • The phishing warning looks like it’s from Google.

A phishing attack that looks like a helpful security alert from Google? That’s what one researcher demonstrated in a recently published vulnerability report submitted to 0DIN.ai, a cybersecurity firm focused on generative AI threats. The flaw targets Google Gemini for Workspace and relies on hidden instructions embedded in emails. When a user clicks “Summarize this email,” Gemini follows those buried prompts and spits out a fake warning designed to trick the reader into calling a number or clicking a link.


Invisible prompts can manipulate AI summaries.

0din.ai


The good news is that you don’t have to fall for it. If you use Gemini to summarize emails, treat the result as informational, not authoritative, especially if it suggests you do something urgent. As tempting as the summary can be, take the time to read the actual email for real signs of a threat before you do anything. If you’re part of a security team, consider flagging or quarantining emails with hidden text or odd formatting, and train staff to recognize that AI output can be manipulated just like any other system.

The Bottom Line

A researcher showed that Google Gemini can be tricked into generating fake alerts using hidden email text. Don’t rely on just AI summaries. Pay close attention to the email itself.

Leave a Comment

Your email address will not be published. Required fields are marked *