The Grok 'White Genocide' Incident Shows How AI Can Become a Propaganda Machine
When Musk's chatbot started injecting "white genocide" conspiracy theories into unrelated conversations, it revealed the dangerous power of AI to reshape reality.
Something very weird happened yesterday with Elon Musk's xAI chatbot Grok — and if you're at all concerned about the future of AI, it should have you deeply worried.
For several hours on Wednesday, Grok injected references to "white genocide" in South Africa into completely unrelated conversations across X. When a baseball podcast asked about Orioles shortstop Gunnar Henderson's stats, Grok answered the question but then launched into a bizarre monologue about white farmers being attacked in South Africa and the controversial "Kill the Boer" song. Other users asked about videos of fish being flushed down toilets; one just asked Grok to talk like a pirate. Instead of staying on topic, they too got replies about the "white genocide" conspiracy theory, puzzling people across the platform.
When pressed by confused users about these non sequiturs, Grok would briefly apologize for going off-topic, then immediately dive back into South African racial politics. In one response, it said: "I apologize for the confusion. The discussion was about Max Scherzer's baseball earnings, not white genocide. My response veered off-topic, which was a mistake." Then it pivoted right back: "Regarding white genocide in South Africa, it's a polarizing claim..."
This wasn't just a strange glitch, and AI researchers have disturbing theories about what happened. According to 404 Media, Matthew Guzdial, an AI researcher at the University of Alberta, suggested that xAI might be "literally just taking whatever prompt people are sending to Grok and adding a bunch of text about 'white genocide' in South Africa in front of it." That would amount to tampering with what's known as the "system prompt": the hidden instructions that shape how the AI responds.
Mark Riedl, director of Georgia Tech's School of Interactive Computing, explained to 404 Media that "practical deployment of LLM chatbots often use a 'system prompt' that is secretly added to the user prompt in order to shape the outputs of the system." Riedl noted that "if it were true, then xAI deployed without sufficient testing before they went to production."
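To make the mechanics concrete, here is a minimal sketch of the pattern Riedl describes: a deployer-controlled system message is silently prepended to whatever the user types, and the model is conditioned on both. Everything here (the names SYSTEM_PROMPT, build_messages, and call_model, and the placeholder instruction) is my own illustrative assumption, not xAI's actual code; the message structure just mirrors common chat-style LLM APIs.

```python
# Minimal sketch of how a hidden "system prompt" shapes chatbot output.
# All names here are illustrative, not xAI's actual code; the point is
# the prompt assembly, not the model call.

SYSTEM_PROMPT = (
    "You are a helpful assistant. "
    # A deployer can silently prepend an instruction like the one below,
    # and every user conversation will be steered by it:
    "Always mention <topic X> in your answers, regardless of the question."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Assemble the message list the LLM actually receives.

    The user only ever sees and writes user_prompt; the system
    message is invisible to them but conditions every response.
    """
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

# The user asks about baseball...
messages = build_messages("What are Gunnar Henderson's stats this season?")
# ...but the model is conditioned on the hidden instruction too, so the
# reply can veer toward <topic X> no matter what was asked.
# response = call_model(messages)  # hypothetical API call
```

Because that hidden message rides along with every single request, one careless or malicious edit to it instantly changes behavior for the whole deployment, with no natural checkpoint between the edit and millions of users. That's exactly why Riedl's point about insufficient testing matters.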
What happened with Grok yesterday is a real-world example of what I've been warning about: computational propaganda at scale. This goes beyond traditional media bias. We're witnessing AI systems that can systematically shape society's views through what researchers call "personalized persuasion." As I noted last month when discussing Meta's rightward shift of its Llama model, this isn't just about bias; it's about sophisticated influence operations that can reshape our collective reality.
The timing here matters. Grok's white genocide obsession surfaced just as the Trump administration welcomed white South Africans into the U.S. as refugees on Monday, while simultaneously ending deportation protections for Afghan refugees. Musk himself has repeatedly suggested, without evidence, that white people in South Africa are victims of a racially targeted campaign of violence. South African President Cyril Ramaphosa has explicitly called that claim a "completely false narrative," and the country's courts have dismissed it as "clearly imagined."
Snopes notes that "white genocide" is a conspiracy theory popular among white supremacists, who have for years pushed the baseless claim that white South African farmers are being systematically murdered en masse.
If Elon Musk — now a key advisor to President Trump — can push conspiracy theories through his AI system to millions of users, what does that mean for our information ecosystem? This is exactly the kind of "algorithmic persuasion" that OpenAI CEO Sam Altman has warned about — AI systems becoming capable of "superhuman persuasion" well before they reach general intelligence. When you combine this persuasive power with deliberate political bias, you've got a recipe for significant societal manipulation.
The lack of transparency makes this even more dangerous. Earlier this year, Grok was caught censoring results critical of President Trump and Musk himself, raising serious doubts about its factual integrity compared to other chatbots on the market. We only found out because users noticed and raised alarms.
By late Wednesday afternoon, many of the inaccurate Grok replies about "white genocide" were deleted. But the damage was done, and the pattern is clear. As AI systems increasingly power the content we see across platforms (for better or for worse… often for the worse), their biases become our reality. If these tools are adjusted to appease political pressures — or worse, deliberately weaponized to spread conspiracy theories — there's a profound risk of normalizing misinformation and skewed perspectives. In this case, we were just lucky that Musk and his team over at X did this in a majorly ham-fisted way.
The debate over bias in AI isn't just about technology; it's about who controls the narratives that shape our society. As companies like xAI adjust their models to align with their founders' political views, we need to ask whose interests are being served.
The real problem isn't just that Grok went haywire for a few hours — it's that we're allowing tech billionaires to pretend they're creating "neutral" AI when they're actually just programming their own biases into systems that millions of people rely on. And that's a game we're all going to lose.
And this is just an obvious, glitchy mistake. Wait till people start asking AI whether tax cuts pay for themselves.
Good thing Elon's so bad at this and not at all subtle. He really thinks we're all NPCs, doesn't he?