As explained in a blog post, the Dessa team managed this feat by developing a deep learning system called RealTalk that uses text inputs to produce life-like speech in the style of a real person.
It’s perhaps the best example of an audio deepfake yet. Even those well-acquainted with Rogan’s voice will likely have a hard time telling the fake audio apart from things the comedian has actually said — and that ability to fool listeners could have terrifying implications for the future.
As a big Joe Rogan guy, I found this beyond funny. I don’t know if the Dessa team directed the AI to talk about topics such as starting a hockey team strictly of chimps, but if they did, they should be given an award of some sort. This is top-tier comedy if you are a JRE fan like myself:
The fact that AI/machine learning can do this at this point is crazy. Of course, this doesn’t sound exactly like Joe Rogan, but it’s pretty damn close. That’s some Black Mirror shit. Rogan often has experts in the AI field on and discusses concerns just like this. At this point, it’s just a YouTube video, but what happens when this technology is utilized for harm? It works because there are thousands of hours of Joe Rogan audio to train on — so no, you couldn’t just do this to a coworker you hate today, but what happens in the future if that becomes possible? What, we just can’t trust audio anymore? I don’t want to ruin your Friday, but this is some futuristic shit that could go left QUICK.
P.S. I’m a big Joe Rogan guy. I love his podcast and his UFC commentary. Right now, one of my biggest pet peeves is when somebody says Joe Rogan is Oprah for guys. Nothing against Oprah, but when is the last time she had a neuroscientist on her show for a commercial-free, three-hour conversation? Oh, that’s right, never. So spare me the “Joe Rogan is Oprah for guys. Change my mind” memes.