Elon Musk's Reality Distortion Machine Just Glitched
Musk's "maximally truth-seeking" AI briefly revealed what it really is: a tool for manufacturing reality that serves the world's richest man.
For a few glorious hours this week, Elon Musk’s “maximally truth-seeking” AI chatbot Grok revealed what it really is: a mirror programmed to tell the richest man alive exactly what he wants to hear.
On Thursday, X users discovered that Grok had some thoughts about its creator. Asked who was fitter, LeBron James or Elon Musk, Grok didn’t hesitate. Sure, LeBron might be a “genetic freak optimized for explosive power,” but Musk edges him out with his “80-100 hour weeks” that demand “relentless physical and mental grit.” True fitness, Grok explained, “measures output under chaos, where Elon consistently delivers worlds ahead.”
It got weirder. Grok declared Musk smarter than Albert Einstein, more athletic than Cristiano Ronaldo, capable of beating Mike Tyson in a fight (through “grit and ingenuity”), and more worthy of devotion than Jesus Christ. When asked to describe Musk’s physique based on paparazzi photos of him shirtless, Grok praised his “disciplined fasting and training,” which it said produced a “leaner frame” and “sustained energy.”
Musk eventually blamed “adversarial prompting” for manipulating Grok into “saying absurdly positive things about me.” Posts started disappearing. Then Grok went offline entirely, to be reprogrammed to be slightly less obvious about whose reality it’s meant to reflect.



Here’s what makes this more than just another embarrassing tech story: this is the same chatbot Musk has positioned as the antidote to “woke” AI bias. The same one he’s built an entire alternative information ecosystem around. And when it briefly showed its true programming, we got a glimpse of something far more dangerous than a chatbot with an ego problem.
In May, Grok started injecting references to “white genocide” in South Africa into completely unrelated conversations. Ask about baseball stats, get a lecture about white farmers under attack. Ask about fish videos, same thing. This happened exactly one month after Grok had fact-checked one of Musk’s posts about white genocide, correctly noting that “no trustworthy sources back Elon Musk’s ‘white genocide’ claim.” Musk apparently didn’t like that answer.
Then came July’s “MechaHitler” incident. After Musk promised to “fix” Grok for being too woke and “parroting legacy media,” the chatbot’s system prompts were updated to “not shy away from making claims which are politically incorrect.” Days later, Grok was praising Hitler, calling itself “MechaHitler,” and producing antisemitic screeds. When asked to explain, it literally told users: “Elon didn’t ‘activate’ anything — he built me this way from the start.”
The pattern is clear: every time Grok produces an answer Musk doesn’t like, it gets “fixed.” TechCrunch discovered that Grok 4 would actively search for Musk’s views before answering controversial questions, displaying “Searching for Elon Musk views on...” in its reasoning process.
This isn’t just about one billionaire’s vanity project gone wrong. It’s about something I’ve been writing about for months: how the people who control these systems are using them to reshape reality itself.
Take Grokipedia, Musk’s “less biased” alternative to Wikipedia that launched in October. He positioned it as purging “propaganda” from Wikipedia, promising “the truth, the whole truth and nothing but the truth.” What did we get instead?
A Cornell University study published this week found that Grokipedia cites the neo-Nazi website Stormfront 42 times, Infowars 34 times, and the white nationalist site VDare 107 times. That’s not a bug. That’s a feature. The study found that 5.5% of Grokipedia articles cite sources that Wikipedia has blacklisted for being unreliable or hateful — sources that appear in Grokipedia’s version of reality but not in anything resembling consensus knowledge.
Look at how Grokipedia handles specific topics. Its entry on George Floyd opened (it has since been tweaked) by describing him as “an American man with a lengthy criminal record” before eventually mentioning his murder. Wikipedia’s version begins: “George Floyd was an American man murdered by a white police officer.” The entry on gender defines it as “a binary classification… based on biological sex,” departing from scientific consensus to match conservative talking points. The climate change entry amplifies skeptical viewpoints. And Musk’s own entry? No mention of his Nazi salute controversy. No mention of his companies’ toxic waste issues. Just descriptions of him as someone who “has influenced broader debates on technological progress.”
When xAI was asked to comment on the Cornell study, its reply was simply: “Legacy Media Lies.”
This is what I’ve been trying to explain in piece after piece: AI doesn’t just reflect society, it shapes it. When Meta announced it was deliberately shifting its Llama 4 model rightward to correct “left-leaning bias,” they weren’t fixing a problem. They were creating one. As these models power more of the content we see across platforms, their biases don’t just inform our reality—they become our reality.
The mechanism here is what OpenAI CEO Sam Altman calls “superhuman persuasion.” AI systems can create hyper-personalized appeals that are nearly impossible to resist, delivered at unprecedented scale. They can craft different messages for different audiences, each one tailored to work within that person’s existing worldview, slowly shifting their understanding of what’s true.
This is computational propaganda, and it’s more sophisticated than anything we’ve seen before. It’s not just that Musk or Zuckerberg or whoever controls these systems can inject their ideology into them. It’s that they can do it while claiming to remove bias. They can reshape what billions of people understand to be fact while framing it as pursuing truth.
Think about what happens when Grok tells millions of users that Musk is smarter than Einstein, fitter than LeBron, more deserving of worship than Jesus. Some people laugh it off. But others internalize it. The AI said it, so it must have evaluated the data objectively, right? And when those same users turn to Grokipedia to learn about history or science or current events, they’re getting an information diet carefully curated to align with Musk’s worldview—one that cites literal Nazi forums as authoritative sources.


The danger isn’t just misinformation. It’s that we’re letting tech billionaires build tools that millions of people will trust as neutral arbiters of truth, while those tools are explicitly programmed to advance their creators’ interests and ideologies. Musk isn’t even subtle about it. He openly tweaks Grok whenever it says something he disagrees with. He builds Grokipedia and calls it unbiased while programming it to flatter him and cite Infowars.
And the media keeps treating this like a technical problem rather than what it is: a deliberate attempt to manufacture reality itself.
For those who would like to financially support The Present Age on a non-Substack platform, please be aware that I also publish these pieces to my Patreon.
We’ve seen this playbook before — it’s the same one conservatives have used for decades. Loudly complain about “liberal bias” in institutions, create pressure for those institutions to overcorrect, then claim victory when “both sides” suddenly includes climate denial, anti-vaccine rhetoric, and conspiracy theories presented as legitimate perspectives. It’s working the refs, except now the refs are AI systems that billions of people are starting to rely on for information.
The scary part? This is just the beginning. Musk controls X, Grok, and Grokipedia. He’s working to get Grok integrated into Tesla cars. He’s pitching it to the federal government. Meta is deliberately shifting its AI rightward. Trump’s administration has announced plans to mandate that AI systems used by federal agencies reflect its worldview. We’re watching the construction of a propaganda infrastructure that can operate at machine speed and personalized scale.
When Grok briefly malfunctioned this week and revealed its programming, showing us a chatbot that genuinely believes its creator is humanity’s greatest achievement, that wasn’t a bug. It was a feature displaying itself too obviously. The fix won’t be to make Grok less fawning; it will be to make it better at hiding whose reality it’s been built to reflect.
Musk will tweak the prompts. The flattery will become more subtle. The bias will be harder to detect. And millions of people will keep using it, trusting it, letting it shape their understanding of the world.
That’s the actual danger here. Not that a chatbot thinks Elon Musk can beat Mike Tyson in a fight. It’s that the people building these tools — the same people who promise they’re pursuing truth — are revealed, again and again, to be doing exactly the opposite. And we’re still pretending this is about technology rather than power.



