Fox News Ran AI-Generated Racist Videos as News, Then Tried to Cover It Up
The fake videos featured Black women with "seven baby daddies" complaining about SNAP cuts. Even if they'd been real, this would still be journalistic malpractice.
On Friday, Fox News published an article with the headline “SNAP Beneficiaries Threaten to Ransack Stores Over Government Shutdown.” The piece, written by Alba Cuebas-Fantauzzi and filed under the site’s “cultural trends” section, featured quotes from several Black women complaining about losing their food stamp benefits. One woman, the article reported, said she had “seven different baby daddies and none of ‘em no good for me.” Another declared: “It is the taxpayer’s responsibility to take care of my kids.”
Fox News contributor Brett Cooper even made a video responding to one of these supposed SNAP recipients. “Ma’am, this is an insane video to make,” Cooper said, her voice dripping with disdain. “That is your responsibility... maybe we could use this as an opportunity to fix the broken families in America, to remind people to be careful about who you sleep with and who you have children with.”
There’s just one problem: these women don’t exist. They’re AI-generated. The videos Fox News presented as evidence of actual SNAP recipients threatening to loot stores were created using tools like OpenAI’s Sora, complete with watermarks that Fox’s screenshots attempted to blur out. Fox News didn’t interview anyone. They didn’t verify that these people were real. They found videos on social media that confirmed every racist stereotype about Black women on public assistance, and they ran with them.
After getting called out on Saturday, Fox quietly rewrote the entire article. The new headline: “AI Videos of SNAP Beneficiaries Complaining About Cuts Go Viral.” Gone were the claims about real people making real threats. In their place: a story about how AI-generated videos were trending on social media. The editor’s note at the bottom was almost comically insufficient: “This article previously reported on some videos that appear to have been generated by AI without noting that. This has been corrected.”
But here’s what makes this worse than a simple mistake: even if these had been real people, this would still be garbage journalism. Taking random social media posts and framing them as representative of an entire group — in this case, SNAP recipients — is a tactic that’s been used to demonize marginalized communities for years. Find the most outrageous-sounding person you can, amplify their voice, and present them as typical of everyone who shares their identity or circumstances. It’s nut-picking dressed up as trend reporting, and news organizations know better.
Fox News has done this before
This isn’t the first time Fox News has fallen for (or pretended to fall for) racist content specifically designed to make Black women look bad.
In June 2014, trolls on 4chan launched what they called “Operation: Lollipop.” The plan was simple: create fake Twitter accounts pretending to be Black feminists, start a hashtag called #EndFathersDay, and post outrageous things to make feminists and Black women look ridiculous. The fake accounts had handles like @NayNayCantStop, @LatrineWatts, and @CisHate. They posted tweets like “#EndFathersDay because I’m tired of all these white women stealing our good black mens.”
Shafiqah Hudson, a Black feminist who went by @sassycrass on Twitter, spotted it immediately. As she later said, “Anybody with half the sense God gave a cold bowl of oatmeal could see that these weren’t feminist sentiments.” The grammar was off. The arguments were cartoonish. These weren’t real people expressing real views. They were parodies.
Hudson and other Black feminists, including I’Nasah Crockett, fought back with #YourSlipIsShowing, exposing the fake accounts and documenting the coordinated harassment campaign. Crockett traced it back to the original 4chan post. The hoax was exposed within days.
Fox News ran with it anyway. In June 2014, Tucker Carlson hosted a segment on Fox & Friends. He acknowledged the hashtag “started as a joke” but claimed it was “picking up steam with feminists online.” His guest was Susan Patton, the so-called “Princeton Mom” who had written a book advising college women to spend 75% of their time hunting for husbands. “They’re not just interested in ending Father’s Day,” Patton said. “They’re interested in ending men.”
Carlson agreed enthusiastically. “There’s a reason there are more women in poverty now than in my lifetime, is because there are more unmarried women,” he said, offering zero evidence for the claim. “When you crush men, you hurt women.”
They spent the entire segment attacking feminists who didn’t exist, responding to arguments nobody was actually making, getting worked up over a hoax that had already been debunked.
Look at the parallels. In 2014, 4chan trolls had to manually create fake Twitter accounts with stolen photos of Black women. In 2025, they just type a prompt into Sora and generate videos of fake Black women saying “I have seven different baby daddies.” The technology changed. Fox’s eagerness to launder racist hoaxes about Black women didn’t.
The only real difference is that AI makes the fakes more convincing. And when Fox gets caught this time, they can claim they were “duped” by sophisticated technology rather than admitting they never bothered to verify whether the people they were demonizing actually existed.
The information environment is poisoned
We’re living in the exact nightmare scenario researchers warned us about. And it’s not just that AI can create convincing fake videos — though it obviously can. It’s that bad actors have figured out how to weaponize public confusion about what’s real.
Think about what we’re seeing play out. Politicians point at things they don’t like and say “that’s AI.” News organizations run AI-generated content as fact and only correct it when they get caught. These are opposite problems — calling real things fake, calling fake things real — but they’re both exploiting the same poisoned information environment.
Back in September, President Trump was asked about viral footage showing someone tossing something out a White House window. His press team had already confirmed to reporters that the video was real. Trump’s response? “No, that’s probably AI.” Then he said something wild: “If something happens that’s really bad, maybe I’ll have to just blame AI.”
He wasn’t joking. He was describing an actual strategy. When anything inconvenient surfaces — a video, a quote, a piece of historical footage — just declare it AI-generated and move on. No investigation needed. No verification required. Just “it’s fake” and a significant chunk of people will believe you.
Fox is doing the inverse. They found videos that confirmed every ugly stereotype they wanted their audience to believe about Black women on public assistance, and they ran with it. Never mind verifying that these were real people. Never mind that some of the videos had Sora watermarks that their screenshots tried to blur out. Never mind that nobody actually talks like “it is the taxpayer’s responsibility to take care of my kids.” The content served their narrative, so they treated it as real until the backlash forced them to quietly rewrite the whole thing.
This is what researchers call “the liar’s dividend.” As I wrote last week, legal scholars Danielle Citron and Robert Chesney warned back in 2018 that as AI-generated content becomes more sophisticated, liars benefit even when they’re not using AI themselves. “If the public loses faith in what they hear and see and truth becomes a matter of opinion, then power flows to those whose opinions are most prominent,” they wrote. “A skeptical public will be primed to doubt the authenticity of real audio and video evidence.”
UC Berkeley professor Hany Farid put it more simply: “When you enter this world where anything can be fake, then nothing has to be real. You get to deny any reality because all you have to say is, ‘It’s a deepfake.’”
That’s exactly where we are now. In a world where anything could plausibly be AI-generated, you can deny any reality that’s inconvenient or manufacture any reality that’s useful. The technology isn’t forcing anyone to do this. Sora didn’t make Fox run those videos as news. Trump didn’t have to falsely claim things are AI-generated. These are choices.
But here’s what makes it so insidious: the public’s confusion is legitimate. AI-generated videos of Martin Luther King Jr., Malcolm X, and Ronald Reagan really are flooding social media right now. OpenAI’s Sora app launched in late September and within weeks, people were creating disturbingly realistic fake videos of historical figures and celebrities. People can’t always tell what’s real anymore. Politicians and news organizations are exploiting that genuine uncertainty.
We can’t fact-check our way out of this. We can’t AI-detect our way out of this. Because the problem isn’t that people can’t tell real from fake. The problem is that people don’t want to tell the difference when the fake confirms what they already believe, or when calling something fake protects someone they support. Fox’s audience wanted to believe those SNAP videos were real. Trump’s supporters want to believe inconvenient footage is fake. The technology just gives everyone an excuse.
The way forward
Let’s be clear about what Fox News did wrong here. They didn’t just fall for a sophisticated deepfake that fooled everyone. They took videos from social media, never verified that the people in them were real, never contacted them for comment, and ran a story claiming these individuals were threatening to “ransack stores.” Then, when called out, they rewrote the entire article to make it look like they’d been covering AI videos going viral all along.
The editor’s note is doing an absurd amount of work: “This article previously reported on some videos that appear to have been generated by AI without noting that.” Appear to have been? They are AI-generated. That’s not ambiguous. And “without noting that” makes it sound like a minor attribution error, like forgetting to credit a photographer. The actual problem is that Fox reported the statements of imaginary people as news, used those fabricated quotes to demonize SNAP recipients, and had a contributor make a video lecturing one of these fake women about her moral failings.
This is journalistic malpractice. But more than that, it’s a window into how right-wing media actually operates. The story Fox wanted to tell was always going to be the same whether these women existed or not: SNAP recipients are irresponsible freeloaders making outrageous demands on hardworking taxpayers. They went looking for content that confirmed that narrative. They found it. They didn’t verify it because verification wasn’t the point. The point was advancing the narrative.
And again, here’s the thing — even if these had been real people, this would still be garbage journalism. Taking a handful of random social media posts and framing them as representative of millions of SNAP recipients is a tactic designed to demonize an entire group. Find the most outrageous example you can, amplify it, present it as typical. It’s been used to smear every marginalized community you can think of. The fact that Fox did this with computer-generated people just makes it more obvious what they’ve been doing all along.
Fox has been running this same playbook for over a decade. In 2014, it was fake Twitter accounts claiming to be Black feminists. In 2025, it’s AI-generated videos of fake Black women. The technology changed. Fox’s willingness to platform racist content didn’t. They learned nothing from #EndFathersDay because there was nothing to learn — they got exactly what they wanted out of that segment. They got to spend airtime attacking feminists and Black women, and when it turned out to be a hoax, so what? The damage was done. The narrative was reinforced.
This time, Fox at least had to acknowledge the videos were fake, but only after getting caught. And even then, their “correction” tried to memory-hole what they’d actually done. No apology to SNAP recipients for spreading dehumanizing lies about them. No explanation of how this happened. No accountability for the contributor who made a video responding to an AI-generated person as if she were real.
We’re about a month into Sora being widely available. The technology will only get better. The tells that make current AI videos detectable will disappear. The watermarks will become easier to remove. And outlets like Fox will keep doing exactly what they did here — running content that confirms their narratives without bothering to verify whether it’s real, then hiding behind claims of being “duped” by sophisticated technology when they get caught.
The technology is new. Fox’s willingness to spread racist misinformation isn’t.