The Man Calling Bullshit on the AI Boom
Ed Zitron built a massive following by saying what tech journalists won't: that generative AI is an unsustainable bubble propped up by hype and bad math.
Ed Zitron has a theory about why everything in tech feels broken.
From Las Vegas, Zitron runs a PR firm while writing some of the sharpest criticism of Silicon Valley you'll find anywhere. His newsletter, Where's Your Ed At, started with 300 readers and has exploded into a must-read for anyone trying to understand why Google search got worse, why social media feels exhausting, and why tech companies keep pushing products nobody asked for.
His diagnosis: the “Rot Economy,” where growth matters more than quality, where making things bigger beats making things better. His podcast Better Offline is basically him explaining, with receipts, why the tech industry's promises rarely match reality.
I've been documenting a string of AI catastrophes lately — the White House using AI to make the founding fathers quote Ben Shapiro, ChatGPT's role in suicides and murders, former CNN anchors falling for obvious AI fakes. Each story felt absurd in isolation, but Zitron sees them as part of something bigger: a bubble built on hype, sustained by credulous media coverage, and destined to pop.
The stakes keep rising. DOGE is now reportedly feeding sensitive federal data into AI systems to decide which government workers are “mission critical.” The same technology that struggles with basic accuracy is making decisions about real people's livelihoods and creating a privacy nightmare.
Zitron isn't some outsider throwing stones. He's been in tech PR since 2008, which means he knows exactly how the hype gets manufactured. His most recent article is a thorough breakdown explaining why “Oracle and OpenAI Are Full Of Crap”.
Here's our conversation, lightly edited for clarity.
Parker Molloy: Basically, I've been finding myself writing quite a bit lately about these inane AI stories — you know, the White House using generative AI to create videos of the founding fathers quoting Ben Shapiro, ChatGPT's role in suicides and murders, and stuff like that.
So it feels like we're living through a particularly absurd moment where the technology's failures are becoming increasingly dangerous and bizarre. You stand out in the tech commentary space for being anti-hype around AI. You don't hedge or equivocate — you just call it “warmed-up dogshit” and move on.
Ed Zitron: Yep.
But you're not just some random critic. You run a PR firm, been in tech for a while. So I wanted to start here: What was your journey from being embedded in the tech world to becoming one of its loudest critics?
I still run a PR firm and I'm still very much embedded in it, but I firewall off — obviously, I don't let the sides touch. I don't work with AI firms, and obviously not that I think they want to, but...
It was really back in 2020, because there was this company called Clubhouse that came out. It was an audio-only social network, and it was dogshit. It was a terrible product. It sucked. It was boring. It was a live-streamed podcast with the least interesting motherfuckers of all time going like, “What if a woman were...” “Well, you know, what if we did something?” And like a bunch of people: “Wow, damn, this is so cool!”
But the media fell over themselves around Clubhouse. They were like, “This is the next thing! This is the new big social network! Everyone's gonna buy!” And it became really obvious that what was actually happening was that the people who invested in Clubhouse were trying to sell Clubhouse so that they could exit. They had a classic Snapchat-style thing where they had an offer on the table, or they were close to having one to sell to someone else — I think it was Twitter — and they just chose not to. And the media just fell behind it. It was completely insane. Even though when you use the product — and I get the sense you used it — it was bad.
Yeah, it was the social media equivalent of conference calls.
It wasn't a good experience. It was very obviously bad. And that was the first real thing I did where I felt like Roddy Piper in They Live, just like, “What? Why is everyone like this? Why are they doing this?”
But it really became serious with remote work, because there was this big return-to-office push that started actually in 2020, like almost immediately. People talk about this as if it was a phenomenon that began in 2021. No, there was a March 2020 piece from Kevin Roose in the New York Times — who has backed NFTs, crypto, generative AI, and returning to the office — saying, “Oh, well, remote work doesn't really work.”
And there was this string of these poisonous op-eds from CEOs and bosses being like, “Oh yeah, you can't possibly go without the office.” And it became really obvious that they just didn't actually go to the office, or when they did, they did it so they could abuse people — so they could just stomp around all angry and say, “Wow, I'm terribly mistreated by these plebs.”
But again, the media fell in line behind them. And then it was this constant — it was very much this centrist opinion of, “Well, we like remote work, and workers like it” — and workers were always framed as liking it like one likes ice cream and bunnies, rather than “Hey, this is more productive.” But, you know, “We need to consider both sides here,” and the biggest side being the boss and what the boss would like.
I realize this doesn't sound like a tech thing, but again, you could see the media crystallizing around the powerful narrative that we really should have office time, the serendipity of the office is important. And every article about this, they talk to managers and bosses, not workers. And I just went, “This is completely insane.”
The true justification began with the metaverse, though, where everybody was saying, “Oh yeah...” And there were people that were against it, just not many of them. But you had Gayle King on CBS being like, “Wow, look at this! Ooh, we're in the metaverse, Mark! What's that like?” You had that video of Mark Zuckerberg in this completely falsified environment, this completely fake thing, being like, “In the future, we'll all be in the metaverse together!”
And the media lined up to say, “Wow, I guess the metaverse is here.” Who knows if it was actually profitable? It wasn't remotely possible to do anything they were saying. They were just lying. But the coverage again: “Metaverse this, metaverse that.” They burned $45 billion or more, and you know what? Nothing happened. Absolutely nothing happened. Everyone just pretended like it didn't happen at all. We all moved on.
Again, crypto — same deal. And you'd see these things again and again, these same cycles where the powerful — by which I mean the various rich people and the venture capitalists — got together and said, “This is the thing we're doing now.” And instead of the media saying, “Okay, hold up a second. Wait. What do you mean, the metaverse? Horizon Worlds looks like dogshit. What do you mean we're all gonna wear VR?”
“Well, we'll do that in the future. You're a moron! You're a rube! You idiot! How can you not see the future?”
Well, generative AI — that was the most egregious one they've ever done. It has absorbed our economy and media. And actually, the media, I believe, has participated in and effectively run the marketing campaign for large language models. Because when you look at the actual outcomes of these models, they don't do the things that they're promising.
And if I may say something specifically — probably the most egregious example of this I saw was recently, when I went and looked back to 2023, when GPT-4 came out — probably the last time OpenAI put out anything that was truly big. Like, they've put out big things since, but that was the big innovation that really made people stand up.
There was a story that went around that GPT-4 tricked a TaskRabbit into solving a CAPTCHA. This is a completely bullshit story. It's from the system card, which is the PDF they released that lets you see all the stuff about how the model was trained and such. And there is a bit in it where they say, “Yes, in a simulated study we did, we used the large language model to talk to a TaskRabbit to do this, and then the TaskRabbit had this conversation.”
But when you actually click through and you look, they even say, “Well, what happened was we told a model what to do. We copy-pasted the responses from the model into the chat window with the TaskRabbit.” But they word it in such a way that it's not really clear how many times this took place or if it took place at all.
Now, I must be clear that Casey Newton on the Hard Fork podcast said, “Oh my god, pull the plug!” Kevin Roose reported this on Hard Fork, the New York Times podcast, and again in the New York Times newspaper — this completely and utterly made-up thing.
So yeah, the thing that's really twisted me with this is it's sold based on stuff it does not do, and it's everywhere, and everyone appears to be willfully hallucinating.
Why do you think it is that tech journalists have been so credulous about it? Is it access, genuine belief, fear of looking stupid?
I think it is a combination of credulousness and access journalism. I think that there is a cult of personality within tech journalism that says everyone should fall in line. I think we're also in the throes of a Dunning-Kruger pandemic, where there are people who believe that because they are smart at something else, they can understand everything.
And journalists tend to trust them. Journalists also, across the board but especially in tech, are really pathetically — in an ugly way — so willing to believe whatever they are told. And they treat startups completely differently to the big ones. When it's a company like OpenAI, they will believe any fucking thing these companies say. It's ridiculous.
I don't think journalists are trained or encouraged to have the kind of critical thinking that is necessary to actually critique these companies. And I would love to believe that most of them are just scared by the industry consensus, but I worry more that a lot of them have just accepted that people have said enough times that AI is powerful, that they don't have to do the research to check that it is.
And the result is the media has become OpenAI and Anthropic's marketing machine. They don't need to do much PR because they just get it already. And quite frankly, they haven't even had to describe what their product does, because people like Kevin Roose at the Times have repeatedly and reliably repeated their narratives and then extrapolated from those narratives into saying, “This will be, this will be...”
I'm tired of hearing “this will be.” Tell me what it does today. And people don't want to do that because, well, they'd realize it was bad.
One thing I was really curious to get your opinion on: You've talked a lot about how AI is burning through tons of money. But there's another angle here — these tools, which can't actually do what people claim they'll someday be able to do, are still being integrated into everything in our lives.
Take DOGE, for example. We're seeing reports that they're feeding sensitive federal data into AI systems to make decisions about budget cuts and which jobs are “mission critical.” Given your expertise on AI's limitations, I'm curious—
To what end? What happened there? Because we get all these stories about “Oh, they fed all the data into the LLM,” and then what?
That's the thing, nothing happens because nothing can happen. It's not like, “Tell me all the names in this list who are woke.” Like, what is meant to happen at the end of that? It's a scare tactic. I should be clear, it's extremely unsafe what they did. They should stop doing it, and everyone involved should be in real trouble because that's insanely unsafe. But the actual outcomes never seem to reach anywhere.
I guess that's one thing we have going for us — for those of us worried about our data — these machines can't actually do what they claim they can do.
Or don't even know what they want them to do! Well, exactly. I mean, we just hear, “They're gonna eliminate people doing jobs.”
Oh, that's another myth! This is really interesting stuff. So the whole AI jobs narrative is another myth, and it is a myth perpetuated by the media. And it's disgusting, because there was a recent one that came out claiming a 13% decline in the jobs most affected by AI — look this up, it's the Stanford study.
“AI adoption linked to 13% decline in jobs for young US workers.” What this study actually said was that there was a 13% decline in a bunch of jobs, and these were the jobs that they believed were affected by AI. You may think Stanford wouldn't just put out a study that flimsy — no, that's exactly what they did.
One of the jobs, by the way, was accountancy. Accountancy has had a three-year-long or more drought because people aren't becoming accountants. Of course the employment of accountants is going down — there are less of them! But this study did not prove anything to do with AI.
And there was another one from Oxford Economics from a few months ago — same fucking deal. People went, “Wow, it said the jobs were lost because of AI!” And when you went into it, the fundament of that was just them saying, “Yeah, you know, we've seen some signs.” And again, you're gonna think I'm being facetious — no, this is the actual fundament of it.
It's insane, because I saw literal headlines about the Oxford Economics one saying, “This is the proof!” Why do you want proof? Why are you so excited? And I think there are some people in the media who are excited about this. I think there are some people in the media who have just gone full doomer, who are just like, “Fuck it. You know what? Who cares? We're all going to hell.”
I must be clear: They do not have proof that this is happening at scale. Now, there are people who are losing jobs. Brian Merchant right now at Blood in the Machine is doing a whole thing around translators and transcribers. There is a layer of contract labor right now that is just getting fucked by this. And it's not because it does a better job — it's because it does a quicker and cheaper job. Because there are people within translation and transcription — I don't know exactly how much, but I theorize there's a fair amount of people who don't really care about getting it exactly right, but just kind of want a reference. This is a problem that these jobs are finding.
But when it comes to this large-scale knowledge worker level loss, it's bullshit. It's not replacing coders either. These companies, what they do is they lay people off because they massively overhired in 2021 and 2022. And you'll notice that they never say, “Oh yeah, it's because of AI.” They say, “So we're doing this because of efficiency and this and that and the other, and also the power of AI.” They staple it on at the end like a little emoji that says, “We love laying off people and blaming a machine.”
“It's not our fault, the machine made us do it,” basically.
That, and also they want to signal to their shareholders that that's why they did it, even though it isn't.
So what makes this a bubble rather than just an overhyped technology? What are the actual economics here?
Well, the economics are bad, is what I'd say. So first and foremost, AI — just the generative AI industry — despite its so-called scale, will only make about $35-40 billion this year. Maybe a little more, which is around what smartwatches made last year. That's a bad start.
The infrastructure required to build these things out is over half a trillion dollars since the beginning of 2024, including venture capital, capital expenditures, and all that good stuff.
OpenAI burned $5 billion last year. They will likely burn $10-15 billion this year. They have leaked they will burn eight — I believe they are lying or they are massively understating. Anthropic will burn, they say, $3 billion. I think it's gonna be much more than that. They just had to raise $13 billion.
Like, that's the thing — they're raising these astonishing amounts of money and then not making that much. OpenAI is projected to make $12.7 billion this year, but from my calculations based on leaks, they've only made about $5.26 billion. I mean, it sounds like a lot of money, but not when your company has to raise $40 billion in the year 2025, which they have yet to do, by the way, because half of that is contingent on turning into a for-profit entity. They're currently a nonprofit.
But the real bubbly thing is you've got this massive infrastructure buildout, these insane amounts of money — a projected $400 billion in infrastructure capital expenditures this year — for an industry that has shown no signs it can become profitable. Like, straight up, the cost of AI models is getting more expensive. The cost of AI compute is staying the same. It is not getting cheaper. There is no making it cheaper.
They talk about specialized chips — specialized chips take years and years and years to make. And the more specialized they are, the more likely they are to get made obsolete.
So you're in this situation where nobody's making any money. You have companies like Perplexity that spent 160-something percent of their revenue last year between Amazon, OpenAI, and Anthropic. Cursor, probably the largest AI company that isn't OpenAI or Anthropic — I think they have something like $500 million of annualized revenue. They're not gonna make $500 million this year. They send 100% of their revenue to Anthropic.
This is every single company. Any company built on top of a large language model is sending all of their money to OpenAI and Anthropic, who then lose billions of dollars. Nobody is making money — everyone is losing money, and they can only keep going because they're sending money to someone else who then loses money. And to make this money-losing machine work, we need to spend hundreds of billions of dollars.
Because people will say, “Oh, it's just like Uber.” It's just like Uber? Uber burned $25-30 billion in ten years. Wow, that's chump change! Microsoft alone has put $80 billion of capital expenditures out the door this year. CoreWeave, $20 billion. And you know who the biggest customer is for those two? OpenAI.
But OpenAI — for these techno-libertarians, they love them some fucking welfare, because OpenAI has all of their infrastructure paid for by Microsoft and CoreWeave, and the new site in Abilene, Texas, paid for by Oracle and Crusoe and Blue Owl and Primary Digital Infrastructure. Everyone pays for their shit other than them.
But that's the thing — this completely and utterly overshadows anything Uber did. And on top of that, you can explain to people why Uber is important. Whether you like it or not, put that aside — Uber is useful. You can say to someone why Uber matters. But when you look at generative AI: “Oh, it's like slightly better search. You can generate a picture of Garfield with an AK-47. You can do code that opens up security vulnerabilities.” It's all a bit worrying when you say it out loud. It does feel a little bit worrying.
So what do you think the collapse actually looks like? Is it like a sudden implosion like the dot-com crash, or a slow bleed? How does this bubble pop?
The thing is, it really comes down to so many different elements. Because right now we have all of these questions. Nvidia right now, their accounts receivable is increasing dramatically — I think it's like $26 billion. It isn't obvious why. It could be that this is money they're due, that they've booked this as revenue but they're yet to receive the money. Now that could be perfectly standard economics, where they have 50-60 day payment terms and they'll get the money in the next quarter and it'll all be fine. Or it could be a sign that they're shipping people more product than they actually need.
On top of that, at some point, we're going to run out of space for these goddamn GPUs. There's only so much space, and there's only so much money. Are we really going to spend $55, $60, $70, $80, $90, $100 billion a quarter on these things? Where are they gonna go? Where are they going right now? I've heard for over a year, “Oh, we've got compute capacity issues, capacity issues here and there.” Where are these fucking GPUs going? Where are they putting them?
So Nvidia themselves — every bit of information that comes out of Nvidia could, in and of itself, be worrying. Really, the biggest bubble-popping sign will be problems with Anthropic and OpenAI. They've just raised bunches of money, but really, if anything happens that says OpenAI will not convert to a for-profit, OpenAI will collapse. I don't know how quickly it will happen, but if they can't go public, they're done. No one can buy them. Maybe they get absorbed into Microsoft — we don't know yet.
But my gut instinct is that something comes out about someone's economics being even worse than we know. Because all of the leaks around OpenAI's losses have come through journalists, and I don't know if they've been planted, but OpenAI is such a tight community security-wise that I don't think it happened without anyone's knowledge — if it wasn't done deliberately, it was done with some internal knowledge.
So how this bursts will come down to a symbolic shift. It will be an AI company going, “We do not have enough money to survive. We will die now. Goodbye.” And they do the thumbs up from the end of Terminator 2.
It will be — I'm not gonna say it is Perplexity, but it's one of these companies that burns tens or hundreds of millions of dollars on nothing. That will be the starting shot. Nothing like this is ever going to be linear or satisfying, though. It will be quite embarrassing for some people, but at some point, the money will run out. And the story will run that a company that needs money cannot get it, and that company is now dying.
I can't say who it will be because history is yet to occur, but it's inevitable at this point that one of these companies runs out of money. And when they do, it's going to scare venture capitalists. When venture capitalists get scared, they will not invest. When venture capitalists will not invest, they will not keep these AI startups going. And then the funding will dry up, and then being in AI will not be quite as sexy.
Really, though, we're seeing the beginnings of it with the current shifts in the stock market. I also think about CoreWeave, which is an AI compute company that Nvidia invested in — and Nvidia is also a customer of theirs. This company raises money to buy GPUs from Nvidia using the GPUs as collateral. And you'll never guess what they use the debt to buy: more GPUs!
CoreWeave is carrying — I think — over $10 billion of debt now. They lost $300 million last quarter. They very likely cannot afford to pay their debts. Someone like CoreWeave collapsing would also be another sign.
I also just think that there could be something chaotic. There could be more fraud. Builder.ai was just — instead of AI building websites, it was a bunch of people in India. It's a completely insane story, completely bonkers. Just actual fraud.
I don't know if there's other actual fraud. There's no way of telling, because it's not like you can see it. But I will say there is enough piss-poor, illogical economics happening here that I think it's inevitable someone's doing something silly with their books. Perhaps not true fraud, but understating — maybe that is fraud, I don't know — understating their losses.
Because when you look at it, all of these companies lose so much money. They lose so much money. There's so much money just being annihilated. And when you look at articles where people try and explain how they won't lose money, the answer is, “Well, they'll just charge more.”
It's like, right now they're losing money on the discounted rates. Do you think people are gonna — you think they're just gonna jack up prices and every customer's gonna go, “I love paying more for the same thing! This is great! This is how products work for me!” No, people are not gonna do it. Because the other thing with large language models is they're just not that good. They're really not that impressive.