- The Weekend Byte
- Posts
- Nation-States Use AI While Cybercriminals Lag Behind
Nation-States Use AI While Cybercriminals Lag Behind
A deep-dive into Google's latest report on malicious usage of Gemini AI
I’m back after a week-long cruise with my family. The wife and I got a great hike that ended with a race to the top. And because we’re nerds, we participated in nearly every trivia session on the ship. I’m proud to say we dominated cartoon trivia, which says a lot about how I spent my childhood.

Anywho, today in the cyber and AI world, we’re covering:
AI + Cyber Attack’s Relationship…it’s complicated
Nation-state hacking…it’s just business
Salt-typhoon is at it again
-Jason
p.s. Americans are rightly freaking out about AI safety and ethics…meanwhile, in China…
AI Spotlight
AI + Cyber Attacks Relationship Status…It’s Complicated

You know what I hate? Articles that claim cybercriminals are using AI in their attacks. Then, when you read the article, it’s largely fabricated and speculative on how they COULD use AI to conduct an attack…
Ahem...Malwarebytes, nice try using AI fear-mongering to sell your shitty “well-intentioned” identity monitoring product.

But I digress…back to the main story.
Are cybercriminals using AI in their attacks? Maybe, possibly, sure? But there wasn’t a flip of a switch where little Timmy suddenly became a l33t sup3r h4cker in his mom's basement when ChatGPT launched.
Before today, the most reliable source I’ve seen was OpenAI’s October 2024 report outlining how nation-state threat actors used ChatGPT to facilitate their attacks, which I wrote about here.
In my career, I’ve noticed a lag between what nation-state threat actors are researching and doing and when that information trickles down to cybercriminals. We’ll call it trickle-down hackenomics.
I recently found a new source of information on malicious AI usage. Google released research detailing (and I mean a lot of detail) how over twenty nation-state threat actors attempted to augment their attacks with Google Gemini.

The most common use case was to accelerate their campaigns. They attempted to use Gemini to enhance their existing capabilities and speed up their tasks. Wow, they’re just like us. Lazy efficient. These tasks included:
Researching potential targets, known vulnerabilities, and hacking techniques (like lateral movement, privilege escalation, and defense evasion).
Generating content for phishing campaigns
Creating and troubleshooting code
A personal favorite of mine was North Korea's use of Gemini to research how to enable IT worker schemes in which workers gain “lawful” employment at a company and then steal data while getting a paycheck. This included drafting cover letters and researching open jobs.
A quick note on financially motivated threat actors. Interestingly, the report provided very little information on this topic, starkly contrasting the information available on nation-state usage. I don’t think that’s a coincidence.
Google did say that it has seen evidence of financially motivated actors using deepfakes to support business email compromises, so there’s that. But I want more here because, in my opinion, this is the greater risk to organizations.

Of course, we can’t forget the information operations. A fan favorite of nation-states looking to influence foreign elections or sow distrust and hate in their enemies’ citizens. The use cases here are exactly what you would think for information operations:
General research about the nation-state’s strategic interests
Generating content: articles, rewriting or translating text, and optimizing content for better reach
All of this is consistent with what OpenAI found in its independent research.
A quick note on jailbreaking. While it's more of a technique than a use case, Google made some observations here. Jailbreaking uses specially crafted prompts (called prompt injection) to bypass pesky safety controls that prevent harmful content from popping up, such as not instructing it to create the next zero-day exploit.
Google found these threat actors used publicly available jailbreak prompts and modified the final instruction, specifically what it was asking to do. These included building DDoS software, Chrome-based infostealers, and advanced phishing techniques targeting Gmail users. It's not super sophisticated.
All of this means that we’re still in the early days of AI in cyber attacks. While that may sound comforting, it's not. While AI isn’t quite there yet to turn little Timmy into a super hacker, it can help a semi-skilled threat actor move faster and further.

Security Deep Dive
Nation-State Hacking…It’s Just Business

Although Google didn’t sponsor this newsletter, it recently released a lot of good information, so it’s a Google research doubleheader. And it’s a bit of a follow-up to my rant above about most cybercriminals not meaningfully using AI.
Most organizations face greater risk from cyber criminals, not nation-state threats. When we say nation-state, we think of a government agency whose mission is to hack into the systems of other countries’ government agencies and companies. These nation-state groups are well-resourced, well-trained, and can have some crazy sophisticated capabilities.
When we say cybercriminals, think of ransomware groups, business email compromises leading to wire transfers, or even your low-level investment scams. These are less sophisticated (sometimes laughably terribly) attackers.
But just because they have different motives doesn’t mean they don’t play together—which is the topic of Mandiant’s research.

Cybercriminals make good 1099s for nation-states. Mandiant’s report highlights that financially motivated threat actors have conducted operations for the government in exchange for payment or through coercion (aka not being tortured or sent to a work prison).
The government gets hackers for hire and deniability when it can cite a rogue “patriotic” hacking group that did it.
Russia has been leaning into cybercriminal marketplaces. Google cited Russia's increasing use of malware and tooling bought from these marketplaces. Why build tools when you can buy them from someone else? It might be just as effective and surely a lot cheaper. Government espionage still runs a P&L at the end of the day.
Mandiant also found that Russia used an old infection from commodity malware, dating back to 2013, to access its target victim’s network. In today’s cyber world, you can easily draft off someone else’s work to skip the line.

Nation-states are also taking tactics from ransomware groups. Why break into a company when you can just buy valid credentials from infostealers? Ransomware affiliates are doing it. Just because you have an 8-figure hacking budget doesn’t mean you have to blow it all on a shiny new exploit.

The moral of the story is that hacking is just business. The cybercriminal ecosystem is so extensive now that governments don’t have to do everything themselves. Some nation-states don’t need to spend millions of dollars developing their own tools. Instead, they can use that money for something more productive, like feeding their people.
On the one hand, defenders get a small step forward because their tools may be better positioned to detect and defend more generic malware. On the other hand, outsourcing some of these capabilities gives nation-states more time and money to spend on cutting-edge research, like AI-powered hacking tools and techniques.

Security & AI News
What Else is Happening?
🇷🇺 Russian hackers are believed to have hacked into the British PM’s personal email account before he entered office. My favorite line from this article was that he “was forced to change his email address following the incident and add the basic and painless security protection of two-factor authentication which apparently had not been applied beforehand.” I’ll repeat this…”basic and painless.” Yes, thank you.
🔁 Hackers gonna hack. In news that no one should be surprised by, the China-based threat actor Salt Typhoon, who hacked into US telecom companies last year, is at it again. This time, they’re targeting Cisco devices to gain a foothold into the networks.
🎭️ An online platform used to automate the university application process is combatting deepfakes. Out of 20,000 interviews, 0.1% used a third party to take their place in the interview, 0.6% used lip-syncing technology, and 0.15% used deepfakes.
🚨 Two Russian citizens were arrested in Thailand for their participation as affiliates of the Phobos Ransomware-as-a-Service (RaaS). They attacked over 1K victims, netting ~$16 million in ransom payments.
If you enjoyed this, forward it to a fellow cyber nerd.
If you’re that fellow cyber nerd, subscribe here.
See you next week, nerd!
Reply