How are Attackers Using GenAI in 2024?

Microsoft and OpenAI provide actual proof of nation-state threat actors using ChatGPT in their cyber operations

Good morning. Is it just me, or does it seem like law enforcement is killing it right now with disruption campaigns and arrests of cybercriminals? We’ll cover one of the latest disruption campaigns today. That and the following:

  • The greatest malware name ever?

  • Actual proof of cyber threat actors using AI in attacks.

  • Mobile malware that steals your face, how rude.

-Jason

Spotlight
Russian Malware’s Last Moo?

The United States FBI is ramping up its disruption campaigns against nation-state threat actors. I briefly wrote about a campaign that happened in December, in which the FBI disrupted a Chinese-operated botnet that used home routers.

This time, it’s Russia’s turn for a pity party.

In January, the FBI disrupted a Russian state-sponsored hacking group’s botnet that relied on infecting hundreds of Ubiquiti home routers with Moobot malware. Aside from the question of whether this is the greatest malware name ever, the other question that comes to mind is: how did they infect the devices?

Don’t think too hard on this one. The routers were still using the default credentials from their initial setup 😭 

When a router’s remote management interface was accessible from the Internet and still had default credentials, Russia would log in and install Moobot. The malware is based on the leaked source code of Mirai, a widely distributed IoT botnet, and gave Russia the ability to remotely access and control the router.
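To make that concrete, here’s a minimal audit sketch of what “exposed management interface plus default credentials” looks like from a defender’s point of view. It isn’t Ubiquiti-specific: the host, port, and credential pairs are illustrative assumptions, and many routers use form-based logins rather than the HTTP Basic auth assumed here.

```python
# Minimal audit sketch: is a router's management interface reachable from
# the Internet, and do factory-default credentials still work?
# Host, port, and credential pairs below are illustrative assumptions.
import requests
import urllib3

urllib3.disable_warnings()  # routers often present self-signed certificates

DEFAULT_CREDS = [("ubnt", "ubnt"), ("admin", "admin")]  # common factory defaults

def check_router(host: str, port: int = 443) -> None:
    base = f"https://{host}:{port}"
    try:
        # Any answer at all means the management interface is Internet-reachable.
        resp = requests.get(base, timeout=5, verify=False)
    except requests.RequestException:
        print(f"{base}: not reachable from here (good)")
        return
    print(f"{base}: management interface exposed (HTTP {resp.status_code})")
    for user, password in DEFAULT_CREDS:
        # Assumes HTTP Basic auth for simplicity; real devices often use form logins.
        r = requests.get(base, auth=(user, password), timeout=5, verify=False)
        if r.status_code == 200:
            print(f"  default credentials accepted: {user}/{password} -- change them!")

if __name__ == "__main__":
    check_router("203.0.113.1")  # replace with your own router's public IP
```

If either check fires on your own gear, disable WAN-side management and rotate the credentials.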

Similar to the Chinese botnet, Russia used these infected routers for a variety of malicious purposes, including:

  1. Proxying their network traffic in an attempt to hide their true origin while hacking into companies.

  2. Hosting spearphishing landing pages to harvest credentials.

  3. Hosting custom tools.

The FBI’s disruption campaign turned the Moobot malware against itself. Specifically, the FBI sent commands to the malware to take the following actions:

  1. Retrieve data from the router related to malicious activity.

  2. Remove malicious files from the routers.

  3. Remove Russia’s access to the routers.

  4. Block remote access to the routers to prevent reinfection.

  5. Enable subsequent collection and transmission of remote login attempts to support future investigation.

It’s another great win for law enforcement but a blow to an amazing malware name. When do we get to see WoofBot 🐕️?

Deep Dive
State-Sponsored Attackers Testing GenAI

Since ChatGPT dropped onto the scene, there have been a ridiculous number of reports “showing” how cybercriminals use various malicious versions of ChatGPT to assist with hacking campaigns. The main problem with those reports? There was never real proof to support the claims.

Last week, Microsoft and OpenAI gave us a Valentine’s Day present that changed that. The two companies teamed up to provide insights into how nation-state threat actors have used ChatGPT to support their hacking efforts.

How were they able to track this activity? Well, Microsoft has a lot of data available to it from mapping known adversary infrastructure, including the systems and devices that threat actors use to conduct their attacks. Microsoft likely shared those indicators with OpenAI, which could then identify ChatGPT logins originating from that infrastructure and review the prompts those accounts submitted. In this case, the users were nation-state threat actors.
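As a purely hypothetical sketch of that kind of correlation, assume login events land in a JSON-lines file and the shared indicators are a list of IP addresses (the file names and field names here are made up for illustration):

```python
# Hypothetical correlation sketch: flag service logins whose source IP
# matches shared adversary-infrastructure indicators. File names and
# field names are assumptions, not anyone's real schema.
import json

def flag_suspect_logins(login_log: str, ioc_file: str) -> list[dict]:
    with open(ioc_file) as f:
        iocs = set(json.load(f))  # e.g. ["198.51.100.7", "203.0.113.42"]
    hits = []
    with open(login_log) as f:
        for line in f:
            event = json.loads(line)  # one login event per line
            if event.get("source_ip") in iocs:
                hits.append(event)  # account whose prompts merit review
    return hits

if __name__ == "__main__":
    for event in flag_suspect_logins("logins.jsonl", "shared_iocs.json"):
        print(event["account_id"], event["source_ip"])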

So how are these advanced hacking groups using ChatGPT to support their efforts? First, and most important, Microsoft calls out that threat actors, just like defenders, are looking at AI (including LLMs) to improve the productivity of their attacks. Hacking involves plenty of manual, repetitive tasks, and those are exactly the kinds of tasks LLMs can speed up.

Microsoft’s review of state-sponsored attackers, which included Russia, North Korea, Iran, and China, found the groups experimenting with ChatGPT to support the following activities:

  • Supporting social engineering tasks: The language processing powers of LLMs are the obvious (and most reported on) capabilities that all threat actors will likely adopt first. They can help break down language barriers and easily craft convincing language for phishing activities.

  • Supporting reconnaissance activities: Another superpower of LLMs is collecting a large amount of information and summarizing it into something that is easier for humans to digest. That can save significant time when researching potential targets.

  • Understanding technologies of interest: You don’t need to be an expert in every technology anymore. Microsoft saw Russia using LLMs to understand satellite communication protocols and radar imaging technologies, all in an attempt to learn more about satellite capabilities. Sorry Boris from GoldenEye…your services are no longer needed.

  • Assisting with basic scripting tasks: Scripting basic tasks, like file manipulation, data selection, and regular expressions, yields incremental improvements that compound over time. A quick prompt that produces a working script saves a few minutes every time, and those minutes add up (see the sketch after this list).

  • Advanced programming support for evading detection: One of the more advanced use cases Microsoft observed was Iran using ChatGPT to attempt to develop code to evade detection, to learn how to disable antivirus through the Windows registry or group policies, and to learn how to delete files in a directory after an application has been closed. This is the quintessential example of information that is available in plenty of blog posts, but with ChatGPT the consolidated answer can be at your fingertips in seconds.
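For a feel of the scripting category above, here’s the sort of throwaway helper an LLM will happily produce from a one-line prompt: walk a directory, pull every email address out of the text files, and write the de-duplicated results to one file. The directory and file names are placeholders.

```python
# The kind of small helper an LLM can generate on demand: regex-based
# data selection plus basic file manipulation. Paths are placeholders.
import re
from pathlib import Path

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def extract_emails(directory: str, out_file: str = "emails.txt") -> int:
    found = set()
    for path in Path(directory).rglob("*.txt"):
        found.update(EMAIL_RE.findall(path.read_text(errors="ignore")))
    Path(out_file).write_text("\n".join(sorted(found)))
    return len(found)

if __name__ == "__main__":
    count = extract_emails("./collected_pages")  # placeholder directory
    print(f"extracted {count} unique addresses")
```

Nothing here is sophisticated; the point is that each one saves a few minutes, for attacker and defender alike.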

All of the known (or expected) malicious uses for AI are tracked via the MITRE ATLAS™ (Adversarial Threat Landscape for Artificial-Intelligence Systems). If you’re feeling nerdy, click through that to learn more about the different types of attacks.

Reply to this email if you’re interested in a future deep-dive series on AI-supported attack techniques!

News
What Else is Happening?

🏥 A ransomware attack hit the production servers hosting the Hipocrate Information System (HIS), a health management system used by Romanian hospitals. The attack encrypted data at 25 hospitals, while at least 75 other hospitals have taken their systems offline as a precaution.

🔋 German battery maker VARTA AG took their systems offline after a cyber attack, including five production plants and their administration offices. While this was a voluntary action, it shows the lengths organizations will go to in order to prevent further damage during a cyber attack (as we saw in the hospital story above).

🎭️ Group-IB reported on new mobile malware capable of stealing facial recognition data and identity documents and intercepting text messages. Attackers use the stolen data to create deepfakes that can pass facial verification and access victims’ bank accounts. This follows a March 2023 directive from the Bank of Thailand that instructed banks to use facial biometric verification, instead of one-time password codes, to confirm users’ identities for certain actions.

If you enjoyed this, forward it to a fellow cyber nerd.

If you’re that fellow cyber nerd, subscribe here.

See you next week!
