A shocking new example of deepfake tech should sound the alarm for anyone who uses the internet.
AI expert and Wharton professor Ethan Mollick published a jaw-dropping deepfake of himself. (See below.)
The deepfake is nearly indistinguishable from his real on-camera self. And it features him speaking in multiple languages.
The craziest part?
Mollick says he created this deepfake using just one minute of training data.
The takeaway?
“You really can’t trust what you see or hear anymore,” Mollick writes.
See for yourself…
Make no mistake…
It’s a brave new world where you can no longer trust anything without verification.
On Episode 78 of The Marketing AI Show, Marketing AI Institute founder/CEO Paul Roetzer walked me through what that means for all of us.
Be Very Careful When Using Deepfake Tech
On the surface, there are plenty of interesting use cases for deepfakes. You might even be tempted to deepfake yourself. But be warned: It’s a slippery slope.
“People are blindly trusting AI startups with their likeness and giving them training data to go build these things that can be wildly misused,” says Roetzer.
Even if you decide that’s worth the risk, you need to start preparing for deepfakes now, he says, because hyperrealistic fakes are already here, and they change the rules.
“You can’t trust anything online unless it’s coming from a verified source.”
What to Do About Deepfakes
Two things need to happen to adapt to this brave new world, says Roetzer. Neither of them is easy.
First, we need to very quickly train people to vet content online for what’s real and what’s not. That’s possible, though it takes a lot of effort.
Second, we need to accelerate the authentication of media that appears online. That’s much, much harder. “I don’t know how you do that,” says Roetzer.
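The episode doesn’t spell out what “authenticating media” would actually look like, but one common approach is cryptographic content provenance: the original publisher signs a hash of the media file, and anyone downstream can check that signature before trusting what they see (this is the spirit behind efforts like C2PA content credentials). Here’s a minimal, hypothetical sketch in Python, assuming the third-party cryptography package; the keys and placeholder media bytes are illustrative only, not anything Roetzer describes.

```python
# Hypothetical sketch of "verified source" media authentication:
# a publisher signs the SHA-256 digest of a piece of media with a private key,
# and viewers check that signature with the publisher's public key before
# trusting the content. All names and keys here are illustrative placeholders.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for the raw bytes of a downloaded video or audio clip.
media_bytes = b"<raw bytes of the interview video>"
digest = hashlib.sha256(media_bytes).digest()

# Publisher side: sign the digest once and distribute it alongside the file.
publisher_key = Ed25519PrivateKey.generate()
signature = publisher_key.sign(digest)
public_key = publisher_key.public_key()

# Viewer side: re-hash whatever arrived and verify it against the signature.
received_digest = hashlib.sha256(media_bytes).digest()
try:
    public_key.verify(signature, received_digest)
    print("Authentic: the file matches what the publisher signed.")
except InvalidSignature:
    print("Warning: the file was altered or didn't come from this publisher.")
```

The hard part Roetzer is pointing to isn’t the cryptography; it’s getting every camera, platform, and publisher to adopt and surface checks like this by default.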
This leaves us in a tough spot.
The Impact of Deepfakes in 2024
In business, companies need to get defensive—fast. Deepfake strategies need to be in every firm’s 2024 crisis communications plan.
“There’s nothing stopping someone from doing this with an executive in your company or a board member and causing chaos,” says Roetzer.
In society, deepfakes could very well sway elections. (Including the 2024 U.S. presidential election.)
They make it easier than ever to sow misinformation and manipulate voters.
“AI will give politicians superpowers at targeting and influencing voters,” says Roetzer.
While it may be harder than ever to tell what’s fake, one thing is certain:
The serious impact of deepfakes in 2024 will be very, very real indeed.
How ParrotGPT can help:
ParrotGPT’s AI chatbot solutions can help detect and combat deepfakes by identifying and verifying authentic content, making it easier to differentiate genuine media from fabricated media and maintain trust in what appears online.