New GenAI Options for Hackers

Plus: Living-off-trusted-sites (LOTS)

Give humans a powerful tool like ChatGPT, and they will find a way to make it weird. The Washington Post recently studied 40K ChatGPT prompts (with consent from users, which makes this even weirder) and found that 7% of the prompts involved dirty talk, with topics ranging from “racy role-play” to “spicy images.”

I’ll give you a minute to stop blushing…

Okay, today in the cyber world, we’re covering:

  • Cybercriminals selling the GenAI warez

  • LOTS of cool ways attackers bypass security

  • A $3.5M discount for info on 2.9 billion people

-Jason

AI Spotlight
Cybercriminals Get GenAI Upgrades

With the hype of the Olympics about to fade, we need to get hyped about something else. How about deepfakes?? I spent some time this week playing around with RunwayML, as they released their new Gen-3 Alpha model, which creates videos from text prompts. Their demo videos look awesome, but my quick test had mixed results: one puppy had a shot at making the Olympic gymnastics team, while other puppies had no legs…

Anywho, back to deepfakes.

Trend Micro had a great write-up on the current state of criminal large language models (LLMs). Here’s the tl;dr.

Jail-Break-as-a-Service: Cybercriminals have trust issues with ChatGPT. So, instead of accessing it directly, they purchase access through a jailbreak-as-a-service frontend, which works around the censorship that most legitimate LLMs, like ChatGPT, enforce.

Old-school LLMs are Back: WormGPT and DarkBERT appeared and disappeared faster than me at a party I feel obligated to attend. New malicious LLM tools are purportedly popping up, including DarkGemini and TorGPT. However, it’s unclear how good they are or how many cybercriminals use them…so it could still be a scam.

Deepfake Tech is on a Roll: Cybercriminals have started to offer more GenAI tools that make creating or altering images and videos easy. These include creating 3D avatars based on a picture, creating nude images of someone just based on their face, and face-swapping technologies that can change your appearance on video calls.

As I often say, the attack playbooks largely stay the same. What we’re dealing with are cybercriminals who have better tools for social engineering. We’ve always known we have to verify suspicious requests, like when your “CFO” asks you to wire millions of dollars to new bank accounts.

Security teams should focus on verifying humans and their requests/actions. Enter stage left, multi-factor verification. Let’s make it happen.

Security Deep Dive
Very Crafty, Very Clever Cybercriminals

I love a good new security term (unless it’s quishing…I hate it). This week, I learned a new one. It’s called living-off-trusted-sites (LOTS). While not a new technique, it’s where attackers use popular and legitimate websites or applications to conduct part of their attacks. A basic example is an attacker using something like Dropbox to upload stolen files from a compromised system.

Let’s look at a cool recent example that Menlo Security wrote up:

1. Phishing Email: The attacker sends the victim a phishing email impersonating Amazon. The email includes a link to a Google Drawings image, which is a graphic prompting the user to verify their account because the account was “suspended” due to “unusual sign-in activity.” The graphic links to an attacker-controlled phishing site.

2. Verification Link: The malicious link is shortened with the WhatsApp URL shortener to hide the true phishing site. When the user clicks on the image, thinking they are about to verify their Amazon account, they are sent to a phishing page resembling the Amazon sign-on page.

3. Account Verification: After submitting their Amazon credentials, the victim is prompted through a series of pages to provide their mother’s maiden name, date of birth, phone number, address, and credit card information. That’s a lot of information that can do a lot of damage to the victim while giving the attacker a good payday. The victim is then redirected to the legitimate Amazon login page.
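The red flags in steps 1 and 2 are mechanical enough to check in code before anyone clicks. Here’s a rough sketch of a link triage helper; the shortener list is a small sample I picked for illustration, and the “registered domain” split is deliberately naive (a real tool would use the Public Suffix List):

```python
from urllib.parse import urlparse

# Sample shortener domains for illustration only; a real deployment
# would maintain a much fuller, curated list.
SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl"}


def flag_link(url: str, brand: str = "amazon") -> list[str]:
    """Return human-readable reasons to distrust a link before clicking it."""
    reasons = []
    host = (urlparse(url).hostname or "").lower()
    if host in SHORTENERS:
        reasons.append("URL shortener hides the true destination")
    # Naive "registered domain": the last two labels. Good enough for a
    # sketch, wrong for ccTLDs like .co.uk -- use the Public Suffix List
    # in anything real.
    registered = ".".join(host.split(".")[-2:])
    if brand in url.lower() and brand not in registered:
        reasons.append(
            f"'{brand}' appears in the link but not in the domain serving it"
        )
    return reasons
```

A link like `https://amazon-account-verify.example.com/login` trips the brand check, while the real `https://www.amazon.com/...` passes clean. None of this replaces user training, but it’s the kind of cheap pre-click check a mail gateway can run.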

Using LOTS makes it less likely that security software will block the site, which increases the chances that the phishing link reaches the user and gets clicked. To be fair, once the abuse is identified, those websites and SaaS applications don’t wait around to take down the malicious content. But by that time, the damage is already done.

Security News
What Else is Happening?

🙏 The Ronin gaming blockchain reported that “white-hats” found a vulnerability in their systems that allowed them to withdraw ~$2M USDC (a digital dollar). If you’re wondering what a “white-hat” hacker is, it’s the term for an ethical hacker. In this case, the hackers are acting in good faith and sending the money back.

😭 What happens when you steal sensitive information on 2.9 billion people from a background-checking service but can’t sell it for $3.5 million? For one attacker, you give it away for free. How kind.

🏆️ Here’s a win for the good guys. Police recovered $40 million stolen in a recent business email compromise. This resulted from Interpol’s Global Rapid Intervention of Payments (I-GRIP), a network of 196 countries that streamlines requests to recover funds.

🐷 A BBC reporter documented a scammer’s attempt to conduct a pig-butchering scam against him. In pig-butchering scams, attackers build long-term relationships with their victims, usually of the romantic variety, that result in the victims sending money over time.

🎒 Over 13,000 students in Singapore rejoiced (er, cried) when attackers remotely wiped their school-issued iPads and Chromebooks. The root cause? A mobile device management company, Mobile Guardian, was hacked.

If you enjoyed this, forward it to a fellow cyber nerd.

If you’re that fellow cyber nerd, subscribe here.

See you next week!
