AI Risk Goes Beyond The Model

Operational risk is the true risk

So much of AI security today is focused on securing the models, and for good reason. It’s a novel, poorly understood technology, and it is the root of AI’s security risk. You have a tool that can be socially engineered with a good story into taking over your smart home appliances, or that, with no malicious interaction at all, can decide to delete your production database.

It sounds dangerous because it is. And our existing security solutions are not equipped to deal with this.

But we’re making a mistake. We’re over-indexing on the models and under-indexing on how the security risk actually plays out in an organization. Why are we letting this happen? Because model security is sexy and new, and it makes for cool research projects. I can see the allure.

Let’s assume we’ve seen this story play out before: a technology is built that has the potential to become the core of how businesses operate, but it carries inherent security vulnerabilities that cause enormous pain.

*looks to my left: Windows

*looks to my right: Linux

*looks above me: AWS

Oh, we have seen this before…

AI agents will become the next operating system. And just like before, we knew the security risks and focused on securing the operating system itself. We built passwords, local firewalls, and antivirus software to deal with the emerging threats. They all worked to some degree, but they all failed in isolation, too.

We must work to secure the weak points, but also assume that those weak points will fail. This is the premise of defense-in-depth, or the layered approach to security if you prefer that analogy.
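To make the defense-in-depth idea concrete, here is a minimal sketch of layering independent controls around an AI agent's tool calls. All names here (the actions, roles, and check functions) are illustrative assumptions, not a real framework; the point is only that each layer assumes the layers before it can fail, so every check must pass:

```python
# Hypothetical defense-in-depth sketch around an AI agent's tool calls.
# Action names, roles, and layer functions are illustrative, not a real API.

DESTRUCTIVE_ACTIONS = {"delete_database", "drop_table"}
ALLOWED_ACTIONS = {"read_record", "list_tables", "delete_database"}

def layer_allowlist(action: str) -> bool:
    """Layer 1: the agent may only invoke explicitly allowed tools."""
    return action in ALLOWED_ACTIONS

def layer_least_privilege(action: str, role: str) -> bool:
    """Layer 2: destructive tools require an elevated role."""
    return action not in DESTRUCTIVE_ACTIONS or role == "admin"

def layer_human_approval(action: str, approved: bool) -> bool:
    """Layer 3: destructive tools also need out-of-band human approval."""
    return action not in DESTRUCTIVE_ACTIONS or approved

def execute_tool_call(action: str, role: str, approved: bool = False) -> str:
    # Each layer is evaluated independently; a bypass of one layer
    # (e.g. a prompt-injected tool call) still hits the others.
    checks = [
        layer_allowlist(action),
        layer_least_privilege(action, role),
        layer_human_approval(action, approved),
    ]
    if not all(checks):
        return f"BLOCKED: {action}"
    return f"EXECUTED: {action}"
```

So a prompt-injected `delete_database` call from a low-privilege session is blocked by layers 2 and 3 even though it passes the allowlist, while `drop_table` never passes layer 1 at all.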

Like yesterday’s technology, the real risk of AI is the operational risk it poses to companies. AI models don’t live on an island by themselves. They are part of an interconnected technical ecosystem where the risk rises and falls depending on what other systems, data, and tooling they have access to.

So the risk doesn’t fall squarely on the AI models themselves. Like rainfall at the top of a mountain, the risk disperses throughout the surrounding land, though not necessarily in equal measure.

Yes, it’s important to secure the models. And we have to start there. But going too narrow misses the forest for the trees.

A holistic view of AI security has to include how the model is integrated into its environment. That is how we get from the security threat of prompt injection to the actual impact on the business: the operational risk. The thing businesses care about.

We need to understand how everything connects and how the risks translate throughout the organization.

If you have questions about securing AI, let’s chat.
