Twitter's Trending Page is Just Making Things Up Now
I'm worried about the AI-generated sludge-filled future of the internet.
Back in 2016, Facebook sent the political world into chaos when it fired the humans who curated the social network’s trending feed, replacing them with an algorithm that simply listed the most popular posts on the site no matter where they came from. We may be seeing 2024’s version of this play out in real time on X (fka Twitter).
Here’s the part of the newsletter where I say The Present Age is reader-supported. Please consider subscribing to the free or paid versions. Thanks!
On Friday, Mashable’s Matt Binder reported that the headline “Iran Strikes Tel Aviv with Heavy Missiles” had trended on the front page of X after Elon Musk handed the reins of the feed over to his “Grok AI” chatbot.
“Elon Musk's X pushed a fake headline about Iran attacking Israel. X's AI chatbot Grok made it up.” (Mashable, Matt Binder, 4/5/24)
Based on our observations, it appears that the topic started trending because of a sudden uptick of blue checkmark accounts (users who pay a monthly subscription to X for Premium features including the verification badge) spamming the same copy-and-paste misinformation about Iran attacking Israel. The curated posts provided by X were full of these verified accounts spreading this fake news alongside an unverified video depicting explosions.
From there, it appears X's algorithms noticed a potential story trend within these users' posts, and an Explore story page was created. We can deduce from X's own claims about its inner workings that Grok must have then created an official-looking written narrative, along with a catchy headline. It did all this based on select users sharing fake news, in an automated attempt to provide context for what the platform itself seemed to assume was a real story.
This wouldn't be the first time Grok provided users with misinformation. Previous reporting on the early versions of X's chatbot found that it often created fake news in private chats with the select few users who had access to it. However, this recent incident, combined with the new Explore feature, marks the first time X took Grok's misinformation, packaged it as a real trending news story, and promoted it to its entire user base, ostensibly as context for a real event.
After an earthquake hit New York City on Friday, X user Maya Kosoff joked, “Eric Adams is going to put another 50,000 cops in the subway with the mandate of shooting the earthquake.” Soon after, Musk’s Grok AI posted this to X’s trending page: “Adams vs. Earthquake: 50,000 Cops in Subway Showdown. New York City Mayor Eric Adams has taken decisive action in response to an earthquake that recently struck the city. Adams has deployed 1,000 NYPD officers to address the situation and prevent further earthquakes, with plans to add an additional 500 cops to the effort. The mayor has even considered utilizing ‘robo cops’ in this endeavor. The response is focused on the city’s subway system, where Adams has ordered ‘every cop in the city’ to be present to ‘shoot the damn earthquake before it strikes again.’”
Though each story on the new trending page comes with a tiny disclaimer warning that “Grok is an early feature and can make mistakes. Verify its outputs,” this seems to be a disaster in the making.
In other AI sludge news:
“Google Books Is Indexing AI-Generated Garbage” (404 Media, Emanuel Maiberg, 4/4/24)
Google Books is indexing low quality, AI-generated books that will turn up in search results, and could possibly impact Google Ngram viewer, an important tool used by researchers to track language use throughout history.
I was able to find the AI-generated books with the same method we’ve previously used to find AI-generated Amazon product reviews, papers published in academic journals, and online articles. Searching Google Books for the term “As of my last knowledge update,” which is associated with ChatGPT-generated answers, returns dozens of books that include that phrase. Some of the books are about ChatGPT, machine learning, AI, and other related subjects and include the phrase because they are discussing ChatGPT and its outputs. These books appear to be written by humans. However, most of the books in the first eight pages of results turned up by the search appear to be AI-generated and are not about AI.
“A.I.-Generated Garbage Is Polluting Our Culture” (The New York Times, Erik Hoel, 3/29/24)
A study published this month examined scientists’ peer reviews — researchers’ official pronouncements on others’ work that form the bedrock of scientific progress — across a number of high-profile and prestigious scientific conferences studying A.I. At one such conference, those peer reviews used the word “meticulous” more than 34 times as often as reviews did the previous year. Use of “commendable” was around 10 times as frequent, and “intricate,” 11 times. Other major conferences showed similar patterns.
Such phrasings are, of course, some of the favorite buzzwords of modern large language models like ChatGPT. In other words, significant numbers of researchers at A.I. conferences were caught handing their peer review of others’ work over to A.I. — or, at minimum, writing them with lots of A.I. assistance. And the closer to the deadline the submitted reviews were received, the more A.I. usage was found in them.
“Facebook Is Filled With AI-Generated Garbage—and Older Adults Are Being Tricked” (The Daily Beast, Molly Glick, 3/23/24)
As AI-generated content proliferates online and clutters social media feeds, you may have noticed more images cropping up that invoke the uncanny valley effect—relatively normal scenes that also carry surreal details like excess fingers or gibberish words.
Among these misleading posts, young users have spotted some obviously faux images (for example, skiing dogs and toddlers, baffling "hand-carved" ice sculptures and massive crocheted cats). But AI-made art isn’t evident to everyone: It seems that older users—generally those in Generation X and above—are falling for these visuals en masse on social media. It’s not just evidenced by TikTok videos and a cursory glance at your mom’s Facebook activity either—there’s data behind it.
The platform has become increasingly popular among seniors seeking entertainment and companionship as younger users have departed for flashier apps like TikTok and Instagram. Recently, Facebook’s algorithm seems to be pushing wacky AI images into users’ feeds to sell products and amass followings, according to a preprint paper posted on March 18 by researchers at Stanford University and Georgetown University.
To show how easily this chatbot can be gamed, check out this thing I did last year.