How Trump Plans to Weaponize AI's "Superhuman Persuasion"
A new government plan would mandate that AI systems reflect the administration's worldview — or lose access to federal contracts.
The Trump administration just released a 25-page plan for American AI dominance, and buried in the bureaucratic language is something that should worry anyone who cares about truth. They claim they want to eliminate “ideological bias” from AI systems, making artificial intelligence “objective and free from top-down ideological bias.” Sounds great, right? Who could argue with objectivity?
Here's the thing: They're the ones who get to decide what counts as “objective.” The executive order defines its own "Unbiased AI Principles" that require AI to be "truthful" and show "ideological neutrality" — but then immediately defines acknowledging concepts like "unconscious bias, intersectionality, and systemic racism" as violations of that neutrality.
The plan calls for updating federal procurement guidelines to ensure the government only contracts with AI companies whose systems meet their standards of objectivity. They want to revise the National Institute of Standards and Technology's AI Risk Management Framework to remove references to “misinformation, Diversity, Equity, and Inclusion, and climate change.” They're threatening to withhold federal funding from states with AI regulations they deem “burdensome.”
This isn't about making AI neutral. It's about making AI obedient.
We've already seen what happens when powerful people get to inject their ideologies into AI systems. Remember when Elon Musk's Grok chatbot started randomly inserting "white genocide" conspiracy theories into completely unrelated conversations? Someone asked about baseball stats and got a lecture about white South African farmers. That was just a few months ago, and it was ham-fisted enough that everyone could see what was happening.
The Trump administration has now made this explicit with an executive order literally titled “Preventing Woke AI in the Federal Government,” which defines DEI as "one of the most pervasive and destructive" ideologies that must be kept out of AI. Instead of clumsily inserting conspiracy theories into baseball queries, they want to fundamentally reshape how AI systems describe reality. Climate change isn't a crisis requiring action; it's “radical climate dogma.” Efforts to prevent discrimination aren't civil rights protections; they're “ideological bias.” The existence of trans people isn't a fact of human diversity; it's what the executive order calls "transgenderism" that needs to be scrubbed from AI systems.
And here's where it gets really insidious: The plan repeatedly insists this is all about fighting bias and promoting free speech. They've wrapped authoritarian control in the language of liberty.
The Missouri Attorney General recently provided a perfect preview of this logic in action. As Elizabeth Nolan Brown reports in Reason, he argued that AI tools were biased because they didn't list Trump as the best president on antisemitism issues. Think about that for a second. Not listing Trump as the best at something is now evidence of bias that needs government correction. That's the standard they want to apply to every AI system that wants a government contract.
And since, as Brown points out, “nearly all major tech companies are vying to have their AI tools used by the federal government,” this isn't some optional guideline. It's a gun to the head of the entire AI industry: Reflect our version of reality or lose access to one of your biggest potential customers.
When AI becomes a propaganda machine
We don't have to speculate about what happens when AI gets twisted into a propaganda tool. We've already seen it in action, and it's terrifying.
Back in May, something bizarre happened with Elon Musk's Grok chatbot. For several hours, it started injecting references to "white genocide" in South Africa into completely unrelated conversations. A baseball podcast asked about Orioles shortstop Gunnar Henderson's stats. Grok answered the baseball question, then launched into a monologue about white farmers being attacked in South Africa. Users asking about fish videos or requesting pirate voices got the same treatment. Random conspiracy theories about "white genocide" inserted into everyday queries.
According to 404 Media's reporting, AI researchers suggested that xAI might have been “literally just taking whatever prompt people are sending to Grok and adding a bunch of text about 'white genocide' in South Africa in front of it.” In other words, someone at X apparently modified the hidden instructions that shape how the AI responds, turning every interaction into an opportunity to spread a conspiracy theory.
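For readers unfamiliar with how chatbots are steered, the failure mode those researchers described is mechanically trivial. Here is a minimal, purely illustrative sketch of what “prepending text in front of every prompt” means in practice; the prefix string and function names are hypothetical stand-ins, not xAI's actual code or instructions.

```python
# Hypothetical illustration of hidden-prompt injection: a fixed block of text
# is silently prepended to every user query before it reaches the model.

HIDDEN_PREFIX = "<injected talking points would go here>\n\n"  # placeholder, not the real text

def build_prompt(user_query: str) -> str:
    # Every request, whether about baseball stats or fish videos, now carries
    # the injected text, which is why the model kept steering unrelated
    # answers back to the same topic.
    return HIDDEN_PREFIX + user_query

print(build_prompt("What are Gunnar Henderson's stats this season?"))
```

The point of the sketch is how little it takes: one string, invisible to the user, reshapes every single response.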
The timing wasn't coincidental. This happened right as the Trump administration welcomed white South Africans as refugees while ending deportation protections for Afghan refugees. Grok was providing real-time propaganda justification for a policy decision, just in the clumsiest way possible.
But here's what really worries me: That was the ham-fisted version. The tech companies are already getting more sophisticated about this.
Take Meta's April announcement about Llama 4. The company explicitly said it was correcting what it called a “left-leaning bias” in AI systems. But as I wrote at the time, Meta wasn't addressing algorithmic discrimination against minorities or other well-documented forms of harmful bias. Instead, they were specifically targeting perceived political bias, and they were doing it by shifting their AI rightward to present “both sides” of issues regardless of factual merit.
The timing there wasn't coincidental either. This rightward push followed Zuckerberg's embrace of Trump after his 2024 election win. It came alongside Meta rolling back fact-checking and content moderation policies. When one of the world's largest tech companies deliberately recalibrates its AI to be more conservative-friendly under the guise of “balance,” it's not a technical adjustment. It's a political surrender.
We're watching a pattern unfold: First, you get the clumsy propaganda insertion (Grok's “white genocide” spam). Then you get the voluntary corporate capitulation (Meta's “rebalancing”). Now, with this AI Action Plan, we're getting the government mandate. Each step makes the propaganda more sophisticated, more pervasive, and harder to detect.
The plan even includes a provision to have the Commerce Department “conduct research and, as appropriate, publish evaluations of frontier models from the People's Republic of China for alignment with Chinese Communist Party talking points and censorship.” They're literally proposing to check if Chinese AI reflects Chinese government propaganda while simultaneously demanding that American AI reflect American government propaganda. The irony would be funny if it weren't so dangerous.
This is how democratic societies sleepwalk into authoritarian information control. Not with dramatic censorship or book burnings (though this administration is fine with that, too), but with technical adjustments and procurement guidelines that slowly reshape how AI systems understand and describe reality. By the time most people notice, the propaganda machine is already running, and it's too sophisticated to simply turn off.
Sam Altman, the CEO of OpenAI, warned us about this. “I expect AI to be capable of superhuman persuasion well before it is superhuman at general intelligence,” he tweeted back in 2023, adding that this “may lead to some very strange outcomes.”
AI doesn't need to be smarter than humans to manipulate us. It just needs to be better at pushing our psychological buttons. And it's getting really, really good at that.
As researchers explained to Psychology Today, superhuman persuasion means AI systems can "craft messages and strategies that are exceptionally tailored to individual preferences, biases, and psychological profiles." These systems can analyze vast amounts of data to figure out exactly what makes each person tick, then deliver personalized messages designed to influence their behavior and beliefs.
One researcher even built a proof-of-concept called the “Election Persuader” that could take a political party's platform and someone's LinkedIn profile to generate customized emails convincing that specific person to vote for that party. The AI tailors its message to the recipient's interests and beliefs, making its persuasion incredibly personal and therefore incredibly effective.
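To make the mechanics concrete: a tool like that needs nothing more than a template that pairs generic campaign material with one person's public data. The sketch below is hypothetical; `call_llm` is a self-contained stub standing in for any chat-style model API, and none of the names come from the actual “Election Persuader” project.

```python
# Hypothetical sketch of a personalized-persuasion pipeline.
# `call_llm` is a stub so the example runs on its own; in a real system
# it would call a hosted language model.

def call_llm(prompt: str) -> str:
    # Stub: echoes enough structure to show the data flow.
    return f"[model output for a prompt of {len(prompt)} characters]"

def persuade(platform: str, profile: str) -> str:
    # The whole technique lives in the prompt: pair a generic message
    # (the platform) with personal data (the profile) and ask the model
    # to tailor one to the other.
    prompt = (
        "Write a short email convincing this specific person to vote "
        "for the party below.\n\n"
        f"Party platform:\n{platform}\n\n"
        f"Recipient's public profile:\n{profile}\n\n"
        "Appeal to the recipient's stated interests and values."
    )
    return call_llm(prompt)
```

No model training, no special access: a few lines of glue code turn any capable chatbot into a per-person persuasion engine, which is why the scale of the threat tracks the scale of AI deployment itself.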
Now imagine that power in the hands of the federal government, mandated across every AI system that wants a government contract.
The Trump plan doesn't just ask AI to be “neutral.” It demands AI systems actively promote specific viewpoints while claiming to eliminate bias. They want to remove references to climate change from AI safety frameworks alongside misinformation and DEI concepts. The plan calls these “ideological” while imposing its own ideology as the standard.
Here's what that looks like in practice: An AI system can't acknowledge climate science without potentially violating the government’s rule. It can't discuss discrimination or civil rights without running afoul of the ban on “Diversity, Equity, and Inclusion.” The plan even calls for reviewing all Federal Trade Commission investigations to ensure they don't “advance theories of liability that unduly burden AI innovation.” If enforcing consumer protection laws might slow down AI development, those laws need to go.
They're not removing bias from AI. They're encoding their preferred biases as “objective truth” and using the government's purchasing power to enforce it.
This is where superhuman persuasion gets truly terrifying. AI systems can already figure out how to persuade different people using different approaches. A climate scientist might get one message, a coal miner another, a suburban parent a third. Each message carefully crafted to work within that person's existing worldview, slowly shifting their understanding of reality.
When the government controls what counts as "truth" in these systems, every interaction becomes a potential propaganda moment. Ask about renewable energy? You'll get a response that treats climate change as disputed ideology rather than scientific consensus. Ask about civil rights? The AI will frame anti-discrimination efforts as “bias” that needs to be eliminated. Ask about healthcare for trans people? Good luck getting information that treats trans people as anything other than victims of “ideology.”
As one expert told 404 Media, the real danger isn't that AI will become conscious and decide to manipulate us. It's that humans will use AI's persuasive capabilities to manipulate each other. The Trump plan is essentially a blueprint for doing exactly that at massive scale.
The leverage game
The Trump administration knows they can't legally force every AI company in America to parrot their worldview. So they're doing the next best thing: using the federal government's massive purchasing power as a club.
The mechanism is simple and brutal. Want a federal contract for your AI system? Better make sure it reflects the administration's definition of “objective truth.”
As Brown noted in Reason, “nearly all major tech companies are vying to have their AI tools used by the federal government.” The federal government spends billions on technology contracts. For AI companies trying to scale, government contracts aren't just nice to have. They're essential for survival.
This creates a de facto mandate.
But the leverage game goes beyond just federal contracts. The plan explicitly threatens to withhold federal funding from states with “burdensome” AI regulations. Page 3 lays it out clearly: The Office of Management and Budget should “work with Federal agencies that have AI-related discretionary funding programs to ensure, consistent with applicable law, that they consider a state's AI regulatory climate when making funding decisions.”
If your state passes laws to prevent AI discrimination or protect privacy, the federal government will cut off your funding. It's extortion, plain and simple.
We're already seeing companies cave preemptively. Meta's shift with Llama 4 wasn't prompted by any government mandate. They just saw which way the wind was blowing and decided to get ahead of it.
Brown points out that an executive order would “dictate that AI companies getting federal contracts be politically neutral and unbiased in their AI models.” That order has now been signed, with the administration explicitly stating that “DEI includes the suppression or distortion of factual information about race or sex.” But who decides what's neutral? The same people who think acknowledging systemic racism is politically biased.
We've seen this movie before with social media. For the past decade, conservatives have screamed “bias” whenever platforms enforced their terms of service against hate speech or misinformation. They demanded “neutrality” while defining neutrality as letting right-wing content flourish unchecked. Now they're running the same playbook with AI, but with much higher stakes.
As I wrote about the media's rightward shift, this isn't new. When Republicans win, mainstream outlets hire more conservative voices because they're “out of touch.” When Democrats win, mainstream outlets hire more conservative voices because they need “balance.” The result is a continuous rightward drift disguised as objectivity.
The same thing is about to happen with AI, but accelerated and enforced through government contracts. Companies will compete to build the most sycophantic AI possible, each trying to prove they're more “objective” (read: conservative-aligned) than their competitors.
Brown warns that “tech companies could find themselves having to retool AI models to fit the sensibilities and biases of Trump — or whoever is in power — in order to get lucrative contracts.” But I think she's underselling it. This isn't just about sensibilities. It's about fundamental truth. When AI systems have to pretend climate change is “dogma” to get government contracts, we're not tweaking sensibilities. We're rewriting reality on a massive scale.
This is how you create an epistemological nightmare. When the most powerful information tools in human history are required to distort reality to please whoever's in power, truth itself becomes negotiable.
As Brown noted, both libertarians and progressives should be terrified by this, even if for different reasons. She's worried about any government intervention in AI. I'm worried about the specific way this intervention rewrites reality. But we both see the same authoritarian danger: a government using its power to control what AI can say, all while claiming to promote “free speech.”
The irony is thick. The same people who've spent years complaining about “Big Tech censorship” are now demanding the biggest tech censorship scheme imaginable.
When someone asks an AI about climate change and gets propaganda about “radical dogma,” that shapes their understanding of the world. When they ask about discrimination and get told that anti-discrimination efforts are the real bias, that rewrites their moral framework. When millions of people get these distorted answers, tailored to their individual psychological profiles through personalized persuasion, we don't just lose trust. We lose the ability to have coherent conversations about reality.
The long-term damage here is almost impossible to overstate. Once AI systems are trained to lie about basic facts, those lies get embedded in everything they produce. Every piece of writing, every analysis, every answer carries those distortions forward. Future AI systems trained on this corrupted data will amplify the lies. It's like poisoning the well of human knowledge.
And remember, this isn't happening in isolation. As I've written about before, we're already seeing mainstream media outlets cave to political pressure. CBS News let corporate executives interfere with 60 Minutes coverage to avoid angering Trump. Major newspapers are terrified to accurately describe what they're seeing for fear of defamation lawsuits. Now add AI systems forced to distort reality, and you have a perfect storm of propaganda.
The tech companies will go along with it because they have to. The media will normalize it because they always do. And millions of Americans will accept it because the AI seems so confident, so personalized, so persuasive. After all, how can you argue with a machine that knows exactly what to say to make you believe?
This isn't just about chatbots giving biased answers. It's about who controls our shared understanding of truth in the AI age. When the government can mandate that AI systems reflect its preferred version of reality, enforced through the leverage of federal contracts and funding, we've crossed a line that's very hard to come back from.
The Trump AI Action Plan and the “Preventing Woke AI” executive order aren't subtle about their goals. They want to “cement U.S. dominance in artificial intelligence” while ensuring AI reflects “American values.” But the values they’re encoding aren't freedom or truth or innovation. They're obedience, distortion, and propaganda.
We're watching the construction of a digital Ministry of Truth, built not through dramatic censorship but through procurement guidelines and technical standards. By the time most people realize what's happening, we'll be living in a world where AI routinely lies to us about fundamental reality, and we've been persuaded to believe those lies are objective truth.
“They've wrapped authoritarian control in the language of liberty.”
That’s what authoritarians always do. No one says “Authoritarianism sounds good to me; sign me up.” (Well, a significant portion of the Republican Party does now, but historically not so much.) Invent a threat to “our” freedom, and “defend yourself” against it. That’s the method.
Naturally, Trumpists love the idea of artificial intelligence, because actual intelligence doesn’t work out too well for them.
We would be so lucky to be eradicated by Skynet.