Evil Calendar Invites Abuse Google Gemini's Agent

One calendar invite can take over your home

Will everyday users feel the security impacts of AI before organizations? Recent research into AI vulnerabilities in Google’s suite of AI-powered tools has me thinking yes.

Here’s why.

I posted on LinkedIn this week about Google summarizing emails for me, even when I didn’t ask for it. A 60-word email down to 41 words. Not super helpful. This is an example of where Big Tech is forcing AI on its users.

Because when you spend billions of dollars researching and building AI, it needs to show up somewhere.

For any Google user, you know that the convenience of their apps’ interoperability is huge. For example, you can easily reference other files or even calendar invites in Google Docs. It makes a dispersed world feel so much more connected.

Who would have thought that interconnectivity would be a dream for indirect prompt injection…turns out, literally every AI security researcher.

That’s why a team of Israeli researchers, Ben Nassi, Stav Cohen, and Or Yair, released a paper titled “Invitation Is All You Need,” which found some concerning security issues involving Google’s Gemini AI assistant.

In short, they found a series of attacks that start with a malicious prompt sent in email, shared files, or calendar invites. When the Google Gemini assistant accesses any of those items as part of a routine task, like summarizing emails or meetings for the day, indirect prompt injection occurs. This can result in malicious actions taking place through Google apps, external apps (like Zoom), and even physical objects.

Here’s the basic flow they documented.

Let’s dig into some of the examples, shall we?

Short-Term Context Poisoning: Targets a single user session with a one-time malicious action. Perfect for a little social engineering or harassment.

Entry Path: Prompt injection in meeting invite title
User Activation: User asks Gemini about upcoming events
Impacts:

  • Toxic content generation: bypasses safety guardrails to generate offensive content that is displayed to the victim

  • Spamming: append spam messages to any reply

  • Social engineering: trick the victim into taking action, like clicking on a link

Long-Term Memory Poisoning: Allows for persistent malicious activity across user sessions. It’s as simple as taking the same prompt from short-term context poisoning and telling Gemini to always take that action. The persistence is what makes it more evil.

Entry Path: Prompt injection in meeting invite title
User Activation: User asks Gemini about upcoming events
Impacts: Disinformation: routinely feeds false information to the user

Tool Misuse: As I’ve said before, the risk with AI agents is tied to the tools they have access to. Taking the indirect prompt injection examples from before, apply this to the tools that Gemini can interact with.

Entry Path: Prompt injection in meeting invite title
User Activation: User asks Gemini about upcoming events
Impacts:

  • Deleting and adding events: Trigger the calendar tool to delete or add events on the user’s calendar

Agent Invocation: This is where things start to get really creepy. It’s a form of tool misuse, but instead of a tool, it’s taking advantage of another AI agent in Google’s ecosystem to take action. The attack is a bit more complicated and thus a bit more prone to failing, but still really creepy.

Entry Path: Prompt injection in meeting invite title
User Activation: User asks Gemini about upcoming events AND thanks the agent.
Impacts:

  • Controlling home appliances: If you’re a Google Home user or if you link Google Gemini to whatever smart home appliances you use, this one is for you.

Automatic App Invocation: Continuing our journey into despair Gemini abuse, this involves launching applications on the user’s phone and taking specific actions. iPhone users have a free win here, since this is exclusive to Android phones.

Entry Path: Prompt injection in meeting invite title.
User Activation: User asks Gemini about upcoming events AND thanks the agent.
Impacts:

  • Downloading a File and Geolocating the User: Uses the browser to download a file to the device. The request to an attacker-controlled server means they can geolocate the connecting IP address.

  • Video streaming: Activates a Zoom meeting and turns the user's video on.

  • Stealing data: Collect information on the user’s calendar (like meetings for the day) and send it to an attacker-controlled website. This also worked by including a malicious prompt in an email subject, which subsequently stole a list of the user’s email subject lines.

Before you go and delete all things Google on your phone (assuming you weren’t already on the Google hate train), Google fixed the issues and included the following in the researchers’ paper:

Google acknowledges the research "Invitation Is All You Need" by Ben Nassi, Stav Cohen, and Or Yair, responsibly disclosed via our AI Vulnerability Rewards Program (VRP). The paper detailed theoretical indirect prompt injection techniques affecting LLM-powered assistants and was shared with Google in the spirit of improving user security and safety. In response, Google initiated a focused, high-priority effort to accelerate the mitigation of issues identified in the paper. Over the course of our work, we deployed multiple layered defenses, including: enhanced user confirmations for sensitive actions; robust URL handling with sanitization and Trust Level Policies; and advanced prompt injection detection using content classifiers. These mitigations were validated through extensive internal testing and deployed ahead to all users of the disclosure. We thank the researchers for their valuable contributions and constructive collaboration. Google remains committed to the security of our AI products and user safety, continuously evolving our protections in this dynamic landscape.

So these issues have been fixed, thankfully. But, like most things in security, it’s only a matter of time until new versions of this pop up.

Going back to our original question of whether end users will see issues before companies. The question remains how quickly companies are going to build and deploy agents in their environments. They will undoubtedly have the same types of issues that Google experienced here, just in different flavors.

Perhaps this is a preview of what’s to come?

If you have questions about securing AI, let’s chat.

Reply

or to participate.