The Weekend Byte
The Deepfake Dilemma
It's not just new technology, it's a new world
Last weekend, I watched Oppenheimer, which chronicles the story of the American scientist J. Robert Oppenheimer and his role in developing the atomic bomb. One line in the movie captured my attention and how it relates to GenAI…
“You’re like Prometheus. When you create the bomb, you’re not just creating a new weapon, you’re creating a new world.”
Like Prometheus, GenAI isn’t just bringing a new technology to mankind; it’s creating a new world.
And like the tale of Prometheus, GenAI can be viewed from two ends of a spectrum: good and bad (or at least cautionary). This is nothing new in history.
In Percy Bysshe Shelley’s Prometheus Unbound, he imagines Prometheus as a hero who escapes his fate and continues to spread knowledge. Percy states that Prometheus is “the type of the highest perfection of moral and intellectual nature impelled by the purest and the truest motives to the best and noblest ends.”
In contrast, Percy’s wife, Mary Shelley, took a more cautious view in her novel Frankenstein; or, The Modern Prometheus. This popular story suggests the risk of corrupting the natural order and raises ethical questions about science and technology today.
Wow, we’re getting philosophical here. Where you stand on the spectrum is up to you. Regardless, GenAI is rapidly advancing. So today, let’s look at some implications of GenAI in the form of disinformation and deepfakes. What are they, what’s the impact they can have, and what tech is emerging to help combat them?
-Jason
Spotlight
The Reality Distortion
Disinformation is false information that is deliberately spread to deceive people. It can serve any goal, whether political or financial, and sometimes both at once. Just look at “yellow journalism,” a style of newspaper reporting from the 1890s that emphasized sensationalism over facts. While the rhetoric boosted newspaper sales, it also contributed to the start of the Spanish-American War.
While the concept of disinformation isn’t new, the technology supporting it is. Enter stage left: deepfakes. Deepfakes are synthetic media that have been digitally manipulated or created using AI. This impacts all types of media. OpenAI’s ChatGPT revolutionized the creation of text. Midjourney did the same for text-to-images. And OpenAI is again pushing boundaries with Sora, which can generate hyperrealistic one-minute videos.
As you can see, the impact on humanity from those three technologies alone is, and will continue to be, profound. So where are we heading with all of this? The Deep Fakery white paper breaks the impact down into three levels.
Individual Portrayal: When an individual is portrayed in a deepfake. It’s an issue when anyone can create fake media of you saying or doing anything. It takes the excuse of your dog eating your homework to an entirely different level. Just look at the pornographic deepfakes people created of Taylor Swift. The unfortunate and disgusting truth is that while some men will be affected, likely those in positions of power who are extorted or shamed, women are far more likely to be victimized. With cyberbullying on the rise, I worry about the impact this will have on younger kids.
Individual Deception: When deepfakes are used to deceive individuals. The most obvious example is deepfakes in social engineering attacks. Case in point: this attack used a deepfake to trick a finance employee into wiring $25 million USD to an attacker's bank account. For organizations, this is one to watch closely.
Individual Collateral Damage: When the actions of deceived individuals harm other individuals. The illusory truth effect is the tendency to believe false information after repeated exposure. Look at the media echo chambers present today. Like yellow journalism, they foster an environment where the truth is flexible. Put this into a group setting, and you can quickly turn the masses against individuals, regardless of whether the information is accurate. By the time the false information spreads, the damage may be irreparable.
Where does this leave us? The erosion of trust. It’s difficult to predict what this means for the future of information, but it’s safe to say that over time, more people will question what real media is. I catch myself doing this from time to time now.
And what about cybersecurity? We see a future where deepfakes sharpen social engineering capabilities and organizations face new types of extortion threats: fabricated news stories that move stock prices, deepfake videos of executives that cause reputational harm. This is the new world we are stepping into.
While the path forward isn’t fully clear, some technologies are being developed to help spot deepfakes. Let’s dig into those…
Deep Dive
Spot the DeepFake
Wow, that last section was a bit doom and gloom. Sorry? Don’t worry, I have some good news to balance it out. Researchers are putting a lot of effort into detecting deepfakes. Here’s a sampling.
Your lips lie…but they also tell the truth. When I’m trying to spot a deepfake video, one area I focus on is the person’s lips as they talk. From what I’ve seen, many AI models are still working out the kinks of syncing words with lip movement. Researchers at Stanford and Berkeley developed an approach to spot those inconsistencies between mouth formations and phonetic sounds.
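The actual research uses learned features far more subtle than this, but to make the intuition concrete, here’s a toy sketch of the underlying signal. Everything below is hypothetical for illustration: it assumes you’ve already extracted a per-frame “mouth openness” measurement from the video and a matching audio loudness series, and it simply checks whether the two move together.

```python
import numpy as np

def lip_sync_score(mouth_openness, audio_energy):
    """Pearson correlation between how open the mouth is in each frame
    and how loud the audio is at that moment. Genuine speech tends to
    correlate; poorly synced deepfakes tend not to."""
    m = mouth_openness - np.mean(mouth_openness)
    a = audio_energy - np.mean(audio_energy)
    denom = np.linalg.norm(m) * np.linalg.norm(a)
    return float(m @ a / denom) if denom > 0 else 0.0
```

A real detector works on short windows and learned visual features rather than a single global correlation, but the core question is the same: does the mouth move when the sound says it should?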
Let’s talk about biological signals. The proper definition is “space, time, or space–time records of a biological event such as a beating heart or a contracting muscle.” Ummm…whut? It’s just something a living thing has that can be measured, like your heartbeat.
Intel researchers created FakeCatcher to help spot deepfakes. The tech builds on the same concept your Apple Watch uses to monitor your heart rate. The watch has a photoplethysmography (PPG) sensor that shines light at your wrist and measures how much of it passes through your skin. Blood is dense and absorbs more light, which lets the PPG sensor “see” the blood pushing through your veins. As your heart beats, more blood surges through your wrist, and your watch picks that up.
Intel found that you can analyze videos for minuscule changes in the color of skin pixels to detect heartbeats. As blood rushes beneath the skin, the skin’s color shifts in a way that is imperceptible to the human eye but can be picked up by AI models. This works because deepfakes don’t have hearts. Burn!
Look at me! Sorry, that was aggressive. But another technique monitors eye gaze in videos. Like humans in a really awkward scenario, researchers hypothesized that deepfakes don’t know where to look. It's less cool than looking at subtle shifts in skin color, but it can help detect deepfakes until AI models grow up and gain some confidence with a power stare.
So we’re good then? Deepfakes aren’t a big deal? Back to the bad news. These research techniques are awesome, but none of them are baked into existing workflows yet. The reality (can we still use that word?) is that we are entering a new world, and the responsibility of knowing what is real predominantly falls on the individual.
So what can you do?
Ignore more. Be intentional about the information you consume. Social media may not be your best source of trusted information (just saying). I stepped away from almost every form of social media years ago, and it was the best decision I’ve ever made. That’s ironic, given how active I am on LinkedIn, but it leads to the second piece of advice.
Ask questions and validate your information. When talking with friends or strangers online, ask questions. What is the source of their material? How does it relate to other known facts? This leads us to…
Don’t forgo logic. If something seems wrong or too outlandish to believe, refer back to the previous point. Critical thinking skills are more important now than ever.
News
What Else is Happening?
🛑 Midjourney banned the generation of political content related to Joe Biden and Donald Trump. Relevant given the focus of today’s topic.
🎮️ Hackers disrupted an Apex Legends video game tournament. It appears they gained access to players’ systems before the tournament and then installed cheats on the players’ systems mid-game. The video of it happening live is epic, even if you aren’t a gamer.
🛌 How safe are your hotel’s electronic door locks? Umm, maybe you should use all the safety latches. As part of a private hacking event in Vegas, researchers found a way to unlock any room in the hotel. Yikes.
If you enjoyed this, forward it to a fellow cyber nerd.
If you’re that fellow cyber nerd, subscribe here.
See you next week!