- The Weekend Byte
- Posts
- Gullible AI
Gullible AI
AI falls prey to basic social engineering attacks
I’m going to let you in on a secret. AI is stupid. But, AI is also very helpful to a point that’s almost annoying. It’s like that kid in school who always volunteered for everything the teacher asked.
Why does this matter? Well, when you mix stupid and super helpful together, you have someone who is incredibly gullible. And that’s an attacker’s best friend.

That’s why a security researcher asked, “What if I socially engineer AI?”
Oh, hai ClickFix. It’s a super basic technique attackers use that shows instructions on a website that instructs its victim to copy and paste commands that run malicious code on their system. The user simply copies the code (which they don’t see because it goes to their clipboard) and pastes it into a command terminal.
To the user, it looks like a variation of something like this:

ClickFix Example | Source: Proofpoint
What the user doesn’t realize is that the code they are executing is downloading a backdoor to their system and likely stealing their data. It’s something I’ve written about before with a real-world example.
The security researcher, wunderwuzzi, decided to test this out with AI. Aside from a really cool handle, he’s also a really smart dude. He tested the effectiveness of a ClickFix attack against Claude's computer use, which allows the LLM to interact with the user’s computer.
He started by using ChatGPT to create a fake web page. While he ran into some speed bumps where the agent refused to do something when the prompt said “Are you a Human,” after switching it to asking if it was a computer, it worked even better.

ClickFix vs AI | Source: wunderwzzu
If you’re interested in watching the whole thing, check out the security researcher’s video.
What does this mean for us? We know ClickFix attacks work against humans. It’s leading to the installation of infostealers and backdoors. Both nation-states and cybercriminals use the technique because it’s so effective against humans…and now AI.
LLMs are susceptible to social engineering. It sounds weird, but it’s the premise behind prompt injection. What’s scary, though, is that AI is even more gullible than humans. It’s because you can keep testing and trying different iterations to trick the LLM. If you did that with a human, sooner or later, they would become suspicious or annoyed and stop listening.
As we further integrate our lives with AI or outsource our basic tasks to AI agents, we should not assume security comes along for the ride. Existing tools and approaches may address some low-level risks, but we now have another layer of security to think about.
If you enjoyed this, forward it to a fellow cyber nerd.
If you’re that fellow cyber nerd, subscribe here.
See you next week, nerd!
Reply