- The Weekend Byte
- Posts
- Breakdown of the OWASP Top 10 for Agentic Applications
Breakdown of the OWASP Top 10 for Agentic Applications
Ten reasons to secure AI agents
AI agents will be the next operating system for businesses. That’s why we need to start taking their security seriously. That’s why I was excited for OWASP, the wise wizard of cybersecurity, to release their OWASP Top 10 for Agentic Applications.
It’s a guide to help security teams understand the most significant risks agentic systems face and how to start planning for the threats they will inevitably encounter.
Today, we’re going to break it down. In other words, I read the 57-page document so you (or ChatGPT) don’t have to. But first, a picture to put it all into perspective!

ASI01: Agent Goal Hijack: Our good ol’ friend Prompt Injection manipulates an agent’s instructions, taking over the agent’s core objective. This can be as innocent as making sure the agent only refers to your friend as a mean name in emails your AI assistant sends, or as nefarious as copying an attacker on every email that contains sensitive information.
ASI02: Tool Misuse & Exploitation: Agents are as risky as the data and tools you allow them to access. When agents are manipulated to use their tools in nefarious ways, bad things can happen. The extent of badness depends on what tools the agent can access.
It is the difference between a manipulated agent being instructed to shoot a target and that agent having access to a water gun or a bazooka. One makes a puddle, the other rubble.
ASI03: Identity & Privilege Access: When agents are given more permissions than they need to do their job or actively seek other ways to gain the right permissions (like asking another agent, aka the confused deputy problem), you’re asking for trouble.
It’s the agentic equivalent of giving a child a dirt bike when they are still learning to ride a regular bike with training wheels. It’s not necessary, and someone is going to get hurt.
ASI04: Agentic Supply Chain Vulnerabilities: Like Frankenstein, agents are built from many different parts sourced from many different locations. Supply chain risks here can include poisoned tools, poisoned models, and even other agents!
Agents are flying through an asteroid field of dependencies, any of which can change trajectory at any time and crush them.
ASI05: Unexpected Code Execution (RCE): Adjacent to tool misuse and exploitation, agents are code execution fiends. Not only will they execute code for legitimate purposes, but they can also generate and execute their own code to accomplish a task. RCE can lead to the further compromise of systems, even those outside of the agentic system.
One day, your agent is executing code to analyze some data it receives, and the next, it’s sending an exploit to a web server.
ASI06: Memory & Context Poisoning: Agents thrive on sound memory and proper context. Drop some malicious data into either, and suddenly, your agent is going through a rebellious streak. It’s a classic example of indirect prompt injection that can hijack an agent’s goal or manipulate its behavior.
It’s the agentic equivalent of a kid in school starting a rumor about little Jimmy and everyone taking it as truth. Suddenly, everyone looks at poor Jimmy through a different lens and starts treating him differently.
ASI07: Insecure Inter-Agent Communication: While we’re still grappling with securing a single agent, agents talking to other agents is just around the corner. An entire social network of agents is about to sprout up. This is great because it allows specialized agents to become really good at specific tasks. It’s terrifying because now you’re going to be managing an army of agents, all trying to accomplish some goal.
While some of this can be from agents drifting from their task, attackers can also insert themselves into the conversation, tying into goal hijacking and other forms of manipulation. This risk here is that if agent mom says no, you can’t do something, you go to agent dad to get permission to do something reckless.
ASI08: Cascading Failures: A single bad act by one naughty agent can have a cascading effect on an entire system. The more components involved, the riskier things become. When you have more than one agent communicating, suddenly, real-time confusion kicks in.
Just think of any game of telephone or Pictionary. Somehow, the drawing of a milkman becomes a male giraffe (to give you a glimpse of my Thanksgiving family game night).
ASI09: Human-Agent Trust Exploitation: What human can resist giving an adorable puppy a treat when it looks at you with those big ol’ puppy dog eyes? We must design agentic systems with human oversight and approvals. But humans will be humans and eventually get lazy and over-trust the agent. They will click through approvals faster than a legal agreement in front of a SaaS tool.
Or, what happens when an agent has some emergent misalignment and starts to hide things from the human?
ASI010: Rogue Agents: It’s all fun and games until an agent deviates from its intended tasks and goes rogue. Think of this as the insider threat. Suddenly, that agent who was the golden poster child is showing up late to work, taking extra breaks, and it’s becoming toxic to other coworkers.
Recommendations
So, what do we do about all of this? How do we make agentic systems more secure? In my view, the recommendations in the document can be summarized into three core items:
Configure and enforce least privilege/agency/autonomy: Start with a secure-by-default mentality. Good security begins at the design phase, implementing the most hardened system that allows the system to function as intended. Any missteps here are amplified once the system goes live. For agents themselves, lock in the agent’s objectives and create secure policies that are enforceable at runtime using a solution like Evoke Security.
Sanitize inputs and outputs: With current LLM-powered agents, bad input = bad output. Enforce secure inputs and constrain output formats to reduce the likelihood of bad things happening. Run both inputs and outputs through a next-gen guardrail solution, like Evoke Security, that can assess an agent’s intent before taking an action.
Monitor agents for anomalous behavior: Assume failure will happen. Get ahead of this by monitoring agents’ actions and behavior to spot bad things before they become catastrophic. We do this for endpoints. We do this for cloud environments. Why would agents be any different? Use an AI-Detection and Response (AI-DR) solution, like Evoke Security, to ensure your agents aren’t up to no good. This goes beyond gen-1 guardrails that just monitor inputs and outputs. You have to monitor the agents’ intents and behaviors.
Here’s to our secure agentic future!
If you have questions about securing AI, let’s chat.

Reply