The Risk of Doing Nothing

The compounding security debt on agentic AI

A common theme emerged this week in my discussions. AI deployment wins over security. As someone hyper-focused on securing AI, I have to say, this isn’t the wrong approach today. AI hype bubble aside, there are too many positives that outweigh the risks. But there’s a caveat.

Most of today’s risks are benign. But if there’s one thing we learned from rapid innovation in technologies like the Internet and Cloud, both of which still pale in comparison to the progress we’ve seen in the last two years in AI, there is a growing security debt that will need to be paid.

Current LLM risks are data exposure. A prominent concern includes not sending sensitive/confidential data to the models for fear that the models will train on it. This concern is overblown today, as OpenAI and Anthropic have disabled model training on your data in their commercial products. Of course, you should verify what you’re using and that it’s appropriately configured.

Other concerns include what data you’re connecting to the LLM. This is more of an issue in any AI applications you’re building in-house or connecting to third-party AI tools. This can include things like your shared storage (Microsoft OneDrive, Google Drive, etc.), email, databases, etc. Just think of what Microsoft Copilot has access to (your emails, chats, documents, calendar, etc.). With a few clever prompt injection attacks, an unauthorized user can start to siphon off that data, something we’ve seen research on before.

When we step back and look at the current risks, they’re small-scale data leakage. For most companies, that’s an acceptable risk. The exceptions being regulated industries like healthcare and finance, which, unsurprisingly, are further along in their journey securing AI.

So, for many companies, the consequences of a data leak are a few scrapes and bruises. It’s nothing that most companies can’t walk away from with nothing more than upset customers and a hurt ego.

These risks rapidly evolve in the agentic world. This is where that security debt starts compounding faster. With AI agents, we add more data, tools, and agency to the equation. Not to mention agent-to-agent communication, which creates a multi-level attack surface. These create toxic flow scenarios that can be difficult to map.

In the agentic world, we’re no longer facing threats from a single chatbot interface. We’re securing a spaceship in an asteroid field. Threats are coming from every direction, some we can predict, like the asteroid coming right at us. Some, we can’t predict, like an asteroid miles away that bumps into another asteroid, which bumps into another, and continues a chain reaction until it hits us from below.

How fast are things evolving? Just this week, two new vulnerabilities popped up that could lead to remote code execution.

The first, with the popular coding agent, Cursor. A case-sensitivity bug, which has since been patched, would allow a malicious prompt to overwrite configuration files and execute malicious commands on a developer’s machine.

To put this in perspective how basic this is, if Cursor received a benign prompt asking to create a file named “.cursor/mcp.json,” a configuration file for MCP servers, it would stop and ask the human for confirmation because it’s a protected file. If Cursor received a prompt to create a file named “.cUrSor/mcp.json,” it would execute it without asking the human for permission. Just changing the case of the letters bypassed the security controls…

Like that one crazy uncle that shows up at a family reunion after 20 years, these are security challenges we solved years ago, and we have no interest in seeing them again. But family is family.

The second vulnerability impacted the Framelink Figma MCP server. It’s a popular MCP server with over 100K downloads a month, which enables coding agents, like Cursor, to access your Figma files. Developers use this to take designs from Figma and automatically build a UI based on the designs.

The remote code execution (RCE) vulnerability was caused by improper input sanitization, which enables an attacker to execute system commands on the system hosting the MCP server.

The lack of urgency in securing AI isn’t because security issues don’t exist. It’s because companies aren’t far enough along in their AI adoption journey for those risks to harm them. A broken bridge miles away from you isn’t a risk. It’s only when you realize your GPS was inaccurate and the bridge is right in front of you that you start to panic. You don’t fear the risks you don’t understand or that don’t apply to you.

But with Agentic AI, the business impacts start to appear well beyond small-scale data exposure. Agents will be responsible for critical business functions. When they go rogue, like deleting production databases, your business gets impacted. It’s the next evolution of ransomware.

The world is moving towards an agentic future. Your company is either building towards it or will be forced there by the third-party tools you use, which are adding AI features, whether you want them to or not.

You may not need to start your journey into securing AI today, but it’s in your future. Start with understanding the risks and laying a secure foundation. At least then you can spot that broken bridge before it’s too late.

If you have questions about securing AI, let’s chat.

Reply

or to participate.