AI: Security & Privacy Risks
The Security and Privacy Impacts from AI
The current state of AI can best be described as…chaotic.
Business leaders are handing down directives to "AI all the things." CIOs, CTOs, and engineering leaders are searching for problems where AI is the right solution. Legal teams are figuring out the privacy implications. CISOs are trying to wrap their heads around the possible risks. All the while, users just want tools that make their jobs easier, without being replaced by those tools themselves.
If there's anything we've learned from previous tech rushes (e.g., the Internet and the Cloud), it's that security and privacy issues will pop up like weeds in the yard of that one neighbor you hate, the one who ignores even the most basic lawn care.
To help bring clarity to the problem of AI, let’s break down the most likely privacy and security implications that AI will introduce to businesses.
Privacy & Data Violations
The most common scenarios companies will face here are unauthorized data usage for training models and unauthorized use of personal data.
Real-World Example: A woman called a California-based company and was directed to a third-party AI-enabled customer support agent. That call was recorded, transcribed, analyzed, and possibly used to improve the AI’s model. The woman stated that, even though she was aware the call might be recorded and monitored, she was never informed that a third-party AI tool was involved.
Impact: As you deploy AI tools throughout your organization, many privacy and security gotchas will pop up. These can lead to litigation, especially when lawyers spot a trend and start hunting companies using similar AI-enabled technology.
Security Approach: Begin by establishing a cross-functional governance team that can effectively oversee your company’s AI strategy. Next, build an inventory of the AI-enabled tech your business is currently using and build a review process for future implementations. For each AI tool (whether buying or building in-house), assess the privacy and security implications and develop a plan to mitigate the most likely risks that could have a material impact on your business. This doesn’t mean that every risk will be mitigated, as you must balance the potential efficiency gains with the potential risks.
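To make the inventory piece concrete, here's a minimal sketch of what a single inventory record could capture. The AIToolRecord name and its fields are purely illustrative, not a standard; adapt them to whatever your governance team actually needs to track.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One entry in a hypothetical AI tool inventory; field names are illustrative."""
    name: str                                   # vendor product or internal project name
    owner: str                                  # business owner accountable for the tool
    data_categories: list = field(default_factory=list)  # e.g., ["customer PII", "call recordings"]
    trains_on_our_data: bool = False            # does the vendor use our data to improve its models?
    customer_facing: bool = False               # external exposure raises reputational/contractual risk
    identified_risks: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)
    review_status: str = "pending"              # pending / approved / rejected
```

Even a lightweight record like this gives the governance team something to review consistently, tool by tool, before anything new ships.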
Reputational Risk
The most common scenarios here arise from external-facing LLM-powered applications where embarrassing output can be shared with the world. The reputational harm can negatively impact future sales and require the company to take the application offline.
Real-world Example: What can Darth Vader teach us about reputational risks companies face with AI? Fortnite released an AI-powered Darth Vader, and gamers quickly found that they could trick it into cursing.
Impact: While this example seems benign (and hilarious), imagine a scenario where this goes a bit differently. A malicious actor can trick chatbots or other AI-powered applications tied to a company into spewing racist or vulgar comments. That’s recorded and released on the Internet, where it goes viral. That reputational risk can lead to monetary damages for a company and slow down future development of AI initiatives.
Security Approach: Monitor the inputs and outputs of your LLMs. On the input side, watch for attempts to jailbreak the model's safety mechanisms, which could lead to inappropriate outputs. On the output side, assess the context of what is being returned to determine whether it includes anything inappropriate.
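Here's a rough sketch of what that input/output monitoring can look like in code. The pattern lists and the guarded_chat wrapper are hypothetical placeholders; a production setup would typically pair simple rules like these with a trained classifier or a vendor guardrail service.

```python
import re

# Hypothetical phrases commonly seen in jailbreak attempts.
# A real deployment would use a trained classifier or guardrail service, not just regexes.
JAILBREAK_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"pretend (you are|to be)",
    r"do anything now",
]

# Placeholder blocklist for unacceptable output; replace with your real blocklist/classifier.
BLOCKED_OUTPUT_PATTERNS = [
    r"\b(profanity_placeholder)\b",
]

def is_jailbreak_attempt(user_input: str) -> bool:
    """Flag inputs that look like attempts to bypass the model's safety rules."""
    return any(re.search(p, user_input, re.IGNORECASE) for p in JAILBREAK_PATTERNS)

def is_inappropriate(model_output: str) -> bool:
    """Flag outputs that match the blocklist before they reach the user."""
    return any(re.search(p, model_output, re.IGNORECASE) for p in BLOCKED_OUTPUT_PATTERNS)

def guarded_chat(user_input: str, call_llm) -> str:
    """Wrap the LLM call with input and output checks; refuse rather than forward anything risky."""
    if is_jailbreak_attempt(user_input):
        return "Sorry, I can't help with that."
    output = call_llm(user_input)
    if is_inappropriate(output):
        return "Sorry, I can't help with that."
    return output
```

The point is the shape, not the specific rules: nothing goes to the model or back to the user without passing a check, and blocked attempts get logged for security review.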
Contractual Risk
The most common scenarios here involve AI-powered customer support agents that provide inaccurate information, resulting in lawsuits and reputational harm.
Real-world Example: A customer asked Air Canada’s external chatbot about the airline’s bereavement policy before booking a flight to his grandmother’s funeral. The chatbot responded that he could apply for a discount after booking the full ticket price and provided a link to the airline’s policy.
The only problem with that response is that it wasn’t accurate. The airline’s policy was that customers had to apply for the discount before booking a full flight. Air Canada refused the discount and told the customer that the chatbot was wrong and that he should have followed the link to the full policy. Pouring salt on the wound, Air Canada then disowned the chatbot, saying it was a separate legal entity and responsible for its actions.
The courts disagreed and found that Air Canada was responsible for the output of the chatbot.
Impact: Companies are one hallucination away from a lawsuit. For any company that offers chatbots to their customers, keep in mind that you are responsible for what that chatbot says and does.
Security Approach: Similar to what Darth Vader taught us, monitoring inputs and outputs is critical here to limit issues. Additional checks should be implemented for output monitoring to ensure that responses are accurate and in line with actual company policies.
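One way to implement that extra output check is to compare the chatbot's draft answer against approved policy text before it goes out. This is only a sketch: APPROVED_POLICIES and ask_reviewer_model are assumed stand-ins for your actual policy store and whatever reviewer (a second model or a rules engine) you choose to use.

```python
from dataclasses import dataclass

@dataclass
class PolicyCheck:
    approved: bool
    reason: str

# Hypothetical store of approved policy text, keyed by topic.
APPROVED_POLICIES = {
    "bereavement": "Bereavement discounts must be requested before the ticket is purchased.",
}

def check_against_policy(topic: str, draft_answer: str, ask_reviewer_model) -> PolicyCheck:
    """Only release answers that are grounded in the approved policy text."""
    policy_text = APPROVED_POLICIES.get(topic)
    if policy_text is None:
        return PolicyCheck(False, "No approved policy on file; route to a human agent.")
    # Ask a second model (or rules engine) whether the draft contradicts the policy.
    verdict = ask_reviewer_model(
        f"Policy: {policy_text}\nDraft answer: {draft_answer}\n"
        "Does the draft answer contradict the policy? Answer YES or NO."
    )
    if verdict.strip().upper().startswith("YES"):
        return PolicyCheck(False, "Draft contradicts the approved policy.")
    return PolicyCheck(True, "Draft is consistent with the approved policy.")
```

When the check fails, the safe default is to hand the conversation to a human rather than let the chatbot improvise.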
Data Leakage
As companies give more data to AI applications, the risk of data exposure increases.
Real-World Example: Most of the current real-world examples are fairly benign. We haven't had our Colonial Pipeline moment for AI…yet. That's the kind of incident even your third cousin twice removed hears about.
One minor incident that has surfaced involved Asana, which had a misconfigured MCP server that resulted in some customers receiving other customers' data. It wasn't exploited maliciously, and most customers likely never realized anything had happened.
Another small example comes from organizations deploying Microsoft Copilot or enterprise search tools like Glean. These tools take all of your company data and make it easily searchable through prompts. That means every file you have accidentally made accessible to the entire company now has the potential to appear in search results. Imagine a scenario where a file containing salary information and other sensitive employee data becomes searchable by everyone in your company. These misconfigurations suddenly become much more discoverable.
Impact: The impact varies based on what data the AI can reach. On one hand, it's confidential data being leaked to unauthorized users. If you think of something like Microsoft Copilot, which has access to SharePoint, this could be a user asking for information on salary bands and Copilot sharing information from an HR file that was accidentally shared company-wide. The file was always there, but people were unaware of it. Copilot just helped find it.
Where this can get dicey is if the LLM leaks other things, like passwords or secrets. A malicious actor can attempt to extract those from an external chatbot, which then could allow access to other applications or assets. Think of it like this. A malicious actor extracts an AWS secret from an external chatbot. They then use that secret to access AWS resources and find other vulnerabilities that extend into a more traditional cloud compromise. The chatbot is just the new front door.
Security Approach: This is a multi-faceted approach. First, only allow the AI application to access data that is necessary for it to function. Keep in mind that any data it has access to is potentially vulnerable to leakage.
Second, channel your inner Darth Vader: monitor inputs and outputs again. The input monitoring focuses on identifying possible prompt injection attempts, where someone tries to trick the LLM into giving up information it shouldn't. The output monitoring is the second line of defense, catching cases where the LLM returned data it shouldn't have.
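On the output side, even a simple scan for credential-shaped strings catches the most damaging leaks, like the AWS secret scenario above. A minimal sketch, with alert_security_team standing in for whatever alerting hook you actually have:

```python
import re

# Regexes for common credential formats; AWS access key IDs start with "AKIA".
# Extend this with your own secret formats (API tokens, internal URLs, etc.).
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    "bearer_token": re.compile(r"\bBearer\s+[A-Za-z0-9\-_\.=]{20,}\b"),
}

def scan_output_for_secrets(model_output: str) -> list[str]:
    """Return the names of any secret patterns found in the model's response."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(model_output)]

def release_or_block(model_output: str, alert_security_team) -> str:
    """Block responses containing likely secrets and alert the security team."""
    findings = scan_output_for_secrets(model_output)
    if findings:
        alert_security_team(findings)  # hypothetical hook into your alerting pipeline
        return "I'm unable to share that information."
    return model_output
```

Pattern matching won't catch everything, but it turns "the chatbot handed out an AWS key" from a silent failure into an alert someone can act on.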
Business Interruption
This is the most significant future risk I see. As companies integrate AI into critical business functionality, we will need to rethink business resiliency.
Real-World Example: None yet, but it's still early. As companies lean into agentic workflows that automate business functions and deploy AI agents to run parts of them, this problem will come to light.
Think of ransomware today. An attacker encrypts laptops and servers, and the company can’t operate.
It’s the same challenge, but instead of systems being encrypted, AI agents go rogue, whether maliciously or not. When those agents run amok, the business functions that rely on them go down.
Impact: The business loses revenue because critical systems go down. It's the same idea as ransomware, only the outage comes from agentic systems failing rather than encrypted servers.
Security Approach: Companies will need to build a detection and response layer for agentic AI, similar to what we have today with Endpoint Detection and Response (EDR). Additionally, the ability to roll back models and agents to known good states will be required, similar to what we have today with backup and recovery.
It's mainly about taking many of the security and resiliency concepts we already apply to existing systems and extending them to agentic AI.
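As a rough illustration, here's a sketch of an action gate plus a state checkpoint for an agent. ALLOWED_ACTIONS, gate_action, and checkpoint are hypothetical names; the idea is simply that every agent action is logged and checked, and there's a known-good state to roll back to.

```python
import json
import time

# Hypothetical allowlist: the only actions this agent may take autonomously.
ALLOWED_ACTIONS = {"create_ticket", "send_status_email", "read_order"}

def checkpoint(agent_state: dict, path: str = "agent_checkpoint.json") -> None:
    """Persist a known-good copy of the agent's state so it can be rolled back later."""
    with open(path, "w") as f:
        json.dump({"saved_at": time.time(), "state": agent_state}, f)

def gate_action(action: str, params: dict, audit_log: list) -> bool:
    """Record every proposed action and block anything outside the allowlist,
    the agentic equivalent of an EDR rule firing on suspicious behavior."""
    entry = {"time": time.time(), "action": action, "params": params}
    audit_log.append(entry)
    if action not in ALLOWED_ACTIONS:
        # Treat this like a detection: pause the agent and page a human.
        return False
    return True
```

The audit log feeds your detection and response process, and the checkpoint gives you a recovery point when an agent runs amok, the same way backups do for ransomware.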
If you have questions on whether you’re securely deploying AI, let’s chat.
