Do Deepfake Detectors Have the Advantage?
Plus: more reasons to hate online ads
Ahhh, the Olympics are here. My favorite part is watching sports I know nothing about and trying to reverse-engineer the rules. Of course, I also went down a path of finding weird and obscure Olympic sports, things like hot air ballooning, obstacle course swimming races (hello Ninja Warrior!), and pigeon shooting. Wait, wtf!?!? Umm…yeah, we’ll just keep that one in the past.
Today, in the cyber world, we’re covering:
Do deepfake detectors have the advantage?
More reasons to hate online ads
The largest ransom ever paid?
-Jason
AI Spotlight
Hungry Hungry Deepfake Detectors
Should we be less worried about being able to detect deepfakes? My thinking on this evolves the more I research. My latest finding has me feeling slightly more optimistic, at least for now.
Drexel researchers did an interesting study that tested 11 publicly available synthetic image (deepfake) detectors. They found that each program was 90% effective at identifying manipulated images. The technology finds inconsistent “fingerprints” in photos by analyzing each pixel’s relationship to its neighboring pixels. Some pixels stand out like a fish out of water because they don’t fit in with the pixels around them.
Spotting deepfake images
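If you’re curious what “a pixel’s relationship to its neighbors” looks like in code, here’s a toy sketch in Python. To be clear, this is not the Drexel researchers’ tooling, and the threshold and data are made up; it just illustrates the intuition of comparing each pixel to a smoothed version of its neighborhood and flagging the ones that stick out.

```python
# Toy illustration of pixel "fingerprinting" (not the actual Drexel tools):
# flag pixels that don't fit in with their neighbors by comparing each pixel
# to a median-smoothed version of its local neighborhood.
import numpy as np
from scipy.ndimage import median_filter

def neighborhood_residual(image: np.ndarray, size: int = 3) -> np.ndarray:
    """How much each pixel deviates from its local neighborhood."""
    smoothed = median_filter(image.astype(float), size=size)
    return np.abs(image.astype(float) - smoothed)

def suspicious_pixel_ratio(image: np.ndarray, threshold: float = 25.0) -> float:
    """Fraction of pixels whose deviation exceeds an (arbitrary) threshold."""
    return float((neighborhood_residual(image) > threshold).mean())

# A grayscale image with a "pasted-in" patch that breaks the local statistics
# stands out in the residual map, while the untouched image mostly doesn't.
rng = np.random.default_rng(0)
real = rng.normal(128, 5, size=(64, 64))
fake = real.copy()
fake[20:30, 20:30] = rng.normal(128, 40, size=(10, 10))  # inconsistent patch

print(suspicious_pixel_ratio(real), suspicious_pixel_ratio(fake))
```

Real detectors learn far richer fingerprints than a median filter, but the core idea is the same: manipulated regions carry statistical tells that honest camera pixels don’t.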
Applying those same tools to videos dropped their performance by 20-30%. Okay, so video detection doesn’t work as well as image detection. Lesson learned. But there’s a fix…teach existing tools how to fingerprint video.
The team trained eight existing convolutional neural network (CNN) detection tools on deepfake videos. I know your first question. WTF is a CNN? It’s a type of AI that enables a computer to understand and interpret images, which is why it’s often used in self-driving cars like Waymo’s.
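For the extra-curious, here’s a minimal sketch of what a CNN-based “real vs. fake” frame classifier looks like in PyTorch. The layer sizes and stand-in data here are mine, and the eight tools in the study are far more sophisticated, but it shows what “training a CNN on deepfake videos” means in practice: feed it labeled frames, let it learn the pixel patterns, and nudge its weights.

```python
# Minimal sketch of a CNN frame classifier (illustrative only).
import torch
import torch.nn as nn

class TinyDeepfakeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local pixel patterns
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # summarize the whole frame
        )
        self.classifier = nn.Linear(32, 2)                # two classes: real or fake

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyDeepfakeCNN()
frames = torch.randn(4, 3, 224, 224)   # a batch of video frames (random stand-in data)
labels = torch.tensor([0, 1, 0, 1])    # 0 = real, 1 = fake
loss = nn.CrossEntropyLoss()(model(frames), labels)
loss.backward()                        # gradients for one training step
```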
That training allowed these tools to fingerprint synthetic videos. Each program was more than 93% effective at spotting the fakes, with one tool reaching 98%. Huzzah!
So what does this mean for us? The more deepfakes that surface, the more we can train our detection capabilities. It will just be a huge game of hungry hungry hippos where detection tools try to eat up as many bites of training data as quickly as possible. While it won’t ever be 100% effective, we will have the ability to spot deepfakes.
The greater challenge is how we incorporate these technologies into our day-to-day lives…
Security Deep Dive
Who is Larry Marr?
Here’s another reason to hate online ads…they serve up malware. Per a recent Malwarebytes report, one campaign used fake ads for the Google Authenticator MFA application.
As you can see from Malwarebytes’ report, the advertisement below looks legit. I mean, come on, it says it’s the official website! But when you click to learn more about the advertiser, it’s some random dude named Larry Marr. Spoiler alert: Larry Marr is not an official Google representative.
Users who click on the link are directed to a malicious web page that looks like a legitimate download page for Google Authenticator. Never mind that Google Authenticator is a mobile application…unsuspecting, less-technical users who run the download end up installing an infostealer that grabs all the credentials on their system.
This threat doesn’t just apply to personal computers. Ransomware actors are getting in on the malvertising game by installing malware onto work computers to gain a foothold in the environment, ultimately leading to a ransomware attack.
The moral of the story here…use an ad blocker!
Security News
What Else is Happening?
💰️ Zscaler reported that the Dark Angels ransomware group received a $75 million ransom payment from a single victim. The group prefers to show each victim extra attention, targeting larger companies more apt to pay large ransoms…but wow, $75 million is just next-level crazy.
💸 Is AI a money pit? The big tech companies’ earnings showed that their expenses are growing quickly with the costs to train new AI models. The CEO of Anthropic said that each AI model costs around $100 million to train, and by 2025 or 2026, it could be closer to $5-10 billion.
🔑 Google is taking steps to combat infostealers that swipe passwords saved in Chrome’s built-in password manager. They’re updating how they protect secrets stored in the browser. While it’s a good move, you’re much better off using a dedicated password manager.
📱 Mobile malware continues to expand, and it’s coming for your MFA codes. Researchers at mobile security firm Zimperium found over 107K malware samples designed to steal one-time passwords sent through SMS…so yeah, please stop using SMS for MFA.
🐝 In a much less violent outcome than Jason Statham delivered in The Beekeeper movie (it’s terrible), the leader of a tech support fraud scheme was sentenced to seven years in prison. The scam impacted 6,500 victims and netted over $6 million, largely from elderly victims.
If you enjoyed this, forward it to a fellow cyber nerd.
If you’re that fellow cyber nerd, subscribe here.
See you next week!