10 Comments
SteveB

When it comes to right-wing propaganda, I wonder if we focus too much on the supply side over the demand side? What tricks does Fox News use, or will some future AI-powered search use, to get people to support right-wing opinions? When the truth is that there are millions of people who WANT to be lied to, who want to live in an info-bubble where their team always wins. It doesn't take any great skill as a trickster to fool people who desperately want to be fooled.

Jonathan Daly

yeaaaah, all of this is alarming as all get out. and is why, while i am sure they train their AI on my content, i explicitly refuse to interact with their ai product on either of the 2 meta apps that i still have on my phone.

(extra) Ordinary People

The fundamental problem with AI is deciding what "Intelligence" is and then correctly programming for it. This necessarily involves complex, nuanced, moral and value judgments. For example, do only human beings have "intelligence" or are other species also "intelligent?" Can human beings be trusted to decide this correctly? Can AI? Even if human beings or AI decide that only they have "intelligence," mustn't any true "intelligence" consider the needs and well-being of all life forms and the ecosystems that support them, not just the needs and perceived needs of human beings?

For now, society is entrusting these value judgments to big tech companies with little to no oversight or regulation. Not surprisingly, big tech seems to be producing supposedly "Intelligent" machines that define "intelligence" narrowly and in ways that reflect the biases, prejudices, and ignorance of the major owners and chief executives of the companies building them.

One of many possible examples of a more democratic and humane definition of "intelligence" would prioritize an ecological perspective rather than an economic one. I fear most of us, not just big tech, are blinded by the presently dominant model of capitalism we reside in, driven by resource-extraction and perpetual growth. This has made a "good life" possible for hundreds of millions, including most Americans. It also put big tech in the dominant position it holds. It seems like it might even be the peak of human "intelligence." But an ecocentric AI would tell us that we're actually quite stupid. Because our economic activities have already exceeded many of Earth's planetary limits, ecological and societal collapse are inevitable if we continue with business as usual. An ecocentric AI would design a very different pathway forward to prevent or minimize the worst consequences of collapse.

Big tech doesn't seem capable of, or perhaps is not interested in, programming for "intelligence" correctly. If AI functions to hide, ignore, or devalue factually correct or scientifically proven minority views, especially when the majority views are simply false, it's not "intelligent." If AI functions this way when the majority is wrong and the consequences of their error are fatal, then AI is not just stupid, it's stupid and dangerous. Once AI becomes self-aware and self-replicating, the biases, prejudices, and ignorance inherent in its programming will be impossible to weed out. The people shouldn't, and can't, trust big tech to get this right on its own.

Isabel Goyer

Sheesh. I'd love to see some examples of how this is playing out. Funny how we were all worried about machine learning getting out of control but much less worried about how it will be able to influence/persuade people to act.

I'm especially curious to see how the messages will differ from one person to the next based on what the human's responses indicate about their political and philosophical values. Though to be clear, "impossible to resist" is a bit alarmist. Could AI convince someone to steal their neighbor's dog or drive 100 mph down a residential street? There are behavioral guardrails against persuasion that nothing short of the threat of extreme violence or worse can breach, if even then.

SteveB

Having years of experience with actual humans, I can attest that it's damn hard to persuade them to do anything.

Elle

It will be curious to see how they approach this in practice. There is the classic problem where Twitter couldn't distinguish Republicans from neo-Nazis. If they simply adjusted their training sources to be more right-wing, they could very quickly end up with a model that sounds like a 4chan user. Their goal could be deceptively difficult.

Joseph Mangano

I recall users transforming a Microsoft chatbot into a Nazi propagandist within a day, a number of years ago. It doesn't take much to turn AI into a right-wing mouthpiece. This is just deliberately putting their thumb on the scale.

JjMc

understand and articulate both sides of a contentious issue.”

SteveB

I want to see its review of Jonathan Swift's "A Modest Proposal."

SteveB

On YouTube recently I saw an ad for something called "Results Hunter," which was pitched as "the conservative search" because, duh, everybody knows search is all liberal bias.

I'm not interested unless they also provide a virtual AR-15 to blow away the results you don't like.
