
Four years ago, Coffeezilla tried to make a video about AI deepfakes. He failed: the technology was too complex, requiring broadcast cameras, studio lighting, a custom training dataset, and a Hollywood-level production team just to fake one person’s face. Today, he deepfaked the President, Elon Musk, MrBeast, and a few other famous faces using two $20 subscriptions and two days of his own time.
That’s how fast this changed. And in his latest video, “Investigating AI Deepfakes,” Coffeezilla makes an argument that flips the usual deepfake panic on its head: the threat isn’t the scary, hyper-realistic stuff. It’s the obvious slop.
Nobody Can Tell What’s Real Anymore
UC Berkeley professor and digital forensics expert Hany Farid dropped a stat that should make everyone uncomfortable: in controlled lab studies, people asked to tell real images from fake ones perform barely above chance. Same for audio. For video, the best-case scenario in a lab, with participants who are primed to look for fakes, aren’t emotionally invested, and view under ideal conditions, is about 70% accuracy. Not 98%. Seventy.
And within 12 months? Farid’s prediction: full-blown video deepfakes will be indistinguishable from real footage. “It’s over,” he said. Not hypothetically. As a timeline.
The Three Ways Deepfakes Are Already Ruining Things
1. Scams
The obvious one. MrBeast, Joe Rogan, Elon Musk: all deepfaked to sell crypto schemes, medical supplements, and companionship bots. The livestream scam format (send 0.1 BTC, get double back) has been running for years on YouTube, and YouTube’s own CEO admits it’s getting harder to distinguish AI-generated content from real content.
But the more interesting development is what deepfakes do to the infrastructure of scams. Previously, a large-scale fraud operation needed a call center full of humans: people who could get raided, people who could blow the whistle. Now those same operations can run ten times as many AI callers, who don’t sleep, don’t ask for raises, sound human, and cannot testify against you. Check Point Research recently uncovered malware, attributed to a single individual, that previously would have required “multiple teams of professionals.” Most of the code was AI-generated. Solo scammer. Sophisticated operation.
2. Propaganda
After the U.S. military operation against Maduro in Venezuela, videos of Venezuelans crying tears of joy went viral. They were entirely AI-generated. But here’s the thing — Coffeezilla’s argument isn’t just “fake videos fool people.” It’s more disturbing than that.
Even people who saw the debunk didn’t come away with a clear picture of reality. They came away suspicious of everything. As one interview subject put it: “Propaganda doesn’t shout at you until you believe. It talks until you’re too tired to care.” That’s the actual mechanism. Not mass conversion to false beliefs. Mass exhaustion from trying to find the true ones.
The Joe Rogan moment in the video is a perfect case study. Shown an obviously AI-generated video of Tim Walz, he believes it immediately — because it fits what he already thinks about the person. When told it’s fake, he resists, then admits: “I fell for it because I believe he’s capable of it. That’s his essence.” That instinct — “okay it’s fake but it probably represents something real” — is exactly how propaganda survives the debunk.
The White House has used manipulated arrest photos to make subjects appear to be crying when they weren’t. When called out, the official response was: “The memes will continue.”
3. Non-Consensual Explicit Content
Deepfakes originated in 2017 on a now-banned subreddit dedicated to face-swapping celebrities into pornographic content. The subreddit reached 100,000 members before it was banned. The demand didn’t go anywhere.
In early 2025, Grok (the AI owned by Elon Musk, which runs on X, also owned by Elon Musk) launched a “spicy mode.” In a 24-hour study, researchers documented 6,700 non-consensual sexualized deepfakes generated in a single hour alone. One hour. The TAKE IT DOWN Act and the DEFIANCE Act are now being pushed through Congress to criminalize this content and give victims a path to sue, but both put the burden of finding and reporting the images on the victims themselves rather than on the platforms generating or hosting them.
One in eight girls is now reportedly affected by AI-generated deepfake pornography. A significant portion of that content involves minors.
The Dumb Deepfake Problem
Here’s the core insight from the video that most deepfake coverage misses: we’ve been worried about the wrong thing. The hyper-realistic deepfake (the Deep Tom Cruise that takes a Hollywood production team) was never really the threat. It’s expensive, slow, and requires expertise.
The threat is the cheap version. The “slop deepfake” — obviously imperfect, quickly generated, deployed at massive scale. Not because it fools everyone. Because it fools enough people, costs almost nothing to produce, and makes it impossible for the rest of us to trust anything we see.
If one platform shuts it down, bad actors move to the one that doesn’t. If nine out of ten companies do the right thing and one doesn’t, the problem concentrates there. The bar isn’t perfection — it’s the lowest common denominator.
We’re in a world where you can already barely tell what’s real. Within a year, according to the researchers Coffeezilla interviewed, video will join images and audio in being indistinguishable from reality for the average person. The infrastructure for abuse — scams, propaganda, harassment — is already in place. The only thing still catching up is regulation, and it’s moving slowly, on the wrong assumptions, and with the burden placed on the wrong people.
Coffeezilla’s verdict: the biggest deepfake threat was never the one that looked too real to be fake. It was always the one that was just real enough to spread.
