AI Adoption: It's Slower Than You Think
AI progress feels fast, but we're still driving on dirt roads with no seat belts
AI models are developing at a breakneck pace. Yet enterprise AI adoption isn’t progressing as fast as you may think. Companies are still working to determine where and how to deploy AI in their business. The model innovation is only part of the equation.
It’s the equivalent of going from horse buggies to Lamborghinis in just a few years instead of decades. But we’re still figuring out how to build paved roads while designing seatbelts and airbags. It’s a bumpy ride on a dirt road to the promised land of agentic AI.
Let’s get morbid for a second. The history of car fatalities shows what happens when technology outpaces infrastructure and safety. The first few years of the automobile saw the lowest fatality numbers on record. Not surprising, given that cars in the early 1910s topped out around 20 MPH. That’s where we’re at right now with AI.
The engine exists, but there are very few places where it can go full throttle, so it’s not super dangerous today.
Paved roads paved the way for more fatalities by allowing faster speeds. The World Health Organization found that every 1% increase in mean speed produces a 4% increase in fatal crash risk. That’s where we’re headed shortly with AI. As it becomes easier for organizations to deploy agentic AI and security lags behind, we’re going to see bigger issues pop up, like an agent accidentally deleting your production database.
Fatalities declined as the focus shifted to surviving crashes. In the 1930s, automakers widely installed hydraulic brakes, a massive improvement over mechanical brakes. Is it a coincidence that fatalities peaked then? As time went on, more safety features and regulations (like requiring seatbelts and airbags) rolled out, supporting a steady decline in fatality rates.
What do car fatalities have to do with AI? We’re at the far left of that graph right now. We have a shiny new toy that has a high potential to change the way we live our lives. We’re also in the middle of a hype bubble where the expectations of what it can do have surpassed our ability to deploy it. But that won’t last for long. Easier ways to deploy and manage agents are arriving.
Enter stage left, agent orchestration platforms. The most common ways companies are deploying (or at least testing) agents today include:
SaaS Platforms: When workflows are needed, companies explore SaaS solutions like n8n to test agentic workflows in an easy-to-use interface.
Cloud Platforms: The major cloud providers offer an easy starting point for deploying and managing agents. With Google AgentSpace and Amazon Bedrock, companies can create agents and workflows much like they would in n8n, but housed within their existing cloud environment.
Orchestration Frameworks: Open-source frameworks like LangChain and CrewAI make it easier to manage multi-agent systems. B2B AI SaaS companies often start here instead of the SaaS and cloud platforms, because these frameworks are more robust and can be aligned more easily to the specific product the company is building (a minimal sketch follows this list).
Custom code: As companies require more control over their agents, they’re finding that even the orchestration frameworks aren’t robust enough, which leads them to roll their own frameworks for managing agents.
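To make the orchestration-framework path concrete, here’s a minimal sketch of what a two-agent workflow might look like with CrewAI. Treat it as illustrative rather than definitive: the roles, goals, and task descriptions are invented for this example, and exact parameters can vary across CrewAI versions.

```python
# Minimal two-agent workflow sketch with CrewAI.
# Assumes `pip install crewai` and a model API key in the environment;
# the roles, goals, and task descriptions below are illustrative.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Security Researcher",
    goal="Summarize the top risks of deploying AI agents in production",
    backstory="You analyze emerging AI security issues for an enterprise audience.",
)

writer = Agent(
    role="Technical Writer",
    goal="Turn research notes into a short briefing for executives",
    backstory="You write clear, plain-language summaries of technical topics.",
)

research_task = Task(
    description="List the top five security risks of agentic AI deployments.",
    expected_output="A bulleted list of risks with one-line explanations.",
    agent=researcher,
)

writing_task = Task(
    description="Write a three-paragraph executive briefing from the research notes.",
    expected_output="A concise briefing in plain language.",
    agent=writer,
)

# The Crew object sequences the agents and passes context between tasks.
crew = Crew(agents=[researcher, writer], tasks=[research_task, writing_task])
print(crew.kickoff())
```

The SaaS and cloud platforms expose roughly the same pattern through a UI, while companies that outgrow the frameworks end up rebuilding this orchestration layer themselves.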
While the paths above seem robust, it’s still a challenge to effectively deploy and manage agents today. Many engineers are learning as they go, which is expected given how old-but-new all of this is (AI agents have been around for a while; they’re only now gaining popularity thanks to LLMs). We are quite literally building the roads and bridges while driving a shiny new Lambo.
Deploying and managing agents will be the first problem solved, or at least made easier. Much like the arrival of paved roads, that will bring a surge of security issues as security features lag behind.
The security concerns are becoming clear. As engineers began tinkering with agents, they were the first to notice the security issues. We thought we were building an F1 track to race Lambos, but instead we realized we’re on a Mario Kart track where the roads can change on a whim and random red shells and bananas are being thrown at you.
I wrote a longer post about the current and future AI security impacts. Here is just a sample of the security challenges that will contribute to those bad outcomes:
Rogue agents: agents can be unpredictable and delete your production database. A common concern that has come up in my discussions is how you can trust that the agent is going to do what you want it to. It’s the same challenge you would have if you gave the new intern root-level access to your production systems. It might be okay, but mistakes could be disastrous.
Tool access: like humans, agents are going to only be as useful as the tools they have access to. And, those tools can be sharp. Soon, we’ll find ourselves yelling at agents, telling them not to run with hundreds of scissors around the house because they may hurt themselves (and us).
Data access: agents are only as helpful as the data you feed them. Look at any Microsoft Copilot implementation, and you’ll see all of the data oversharing that happens. Suddenly, that one Excel document containing all the employee salary information someone accidentally made public two years ago will be readily available with a simple prompt.
Identity challenges: we’ve never solved identity for humans, and the same will happen for agents. A good agent has a narrowly defined task and just enough permission to accomplish it. But in the real world, agents will have excessive permissions, and when they go rogue, or when a bad actor compromises one of them, they’ll use the tools they have access to in ways we can’t imagine, all in the name of accomplishing a goal (see the sketch after this list).
Shadow AI: When you take all of the above into account, what happens when your employees bring third-party AI tools into your environment without your knowledge (because it is happening)? It’s difficult for organizations to identify this, let alone understand what risk it introduces to the company. And no, your AI policy is not enough to protect against it.
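The tool-access and identity problems share the same mitigation: scope each agent down to the minimum set of tools it needs and deny everything else by default. Here’s a hypothetical sketch of that idea; the agent names, tool names, and policy structure are invented for illustration and don’t come from any particular framework.

```python
# Hypothetical least-privilege check that sits between an agent and its tools.
# Agent names, tool names, and the policy structure are illustrative only.

ALLOWED_TOOLS = {
    "report-writer-agent": {"search_docs", "summarize"},
    "db-maintenance-agent": {"run_readonly_query"},  # note: no delete/drop access
}

class ToolAccessDenied(Exception):
    pass

def call_tool(agent_name: str, tool_name: str, tool_registry: dict, **kwargs):
    """Execute a tool only if the agent is explicitly allowed to use it."""
    allowed = ALLOWED_TOOLS.get(agent_name, set())
    if tool_name not in allowed:
        # Deny by default: an unlisted agent/tool pair never runs.
        raise ToolAccessDenied(f"{agent_name} is not permitted to call {tool_name}")
    return tool_registry[tool_name](**kwargs)
```

The important design choice is deny-by-default: an agent/tool pair that isn’t explicitly listed never runs, which is the same least-privilege posture you’d want for that new intern with root access.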
Where do we go from here? We continue on the path we’re on. The infrastructure challenges will get easier. Our best shot at reducing future risk is to tackle the security challenges in lockstep with the infrastructure work.
Back to the car analogy: we need to add more sensors and better safety features as road speeds increase.
If you have questions about securing AI, let’s chat.
