Updates on the Chicago Sun-Times' AI-Generated Summer Reading List Disaster
How it happened and where the paper goes next.
Earlier this week, I wrote about the Chicago Sun-Times publishing an AI-generated summer reading list full of nonexistent books.
The Chicago Sun-Times Published an AI-Generated Summer Reading List Full of Fake Books — And This is Just the Beginning
The Chicago Sun-Times just published a summer reading list with one major problem: most of the books don't exist. Titles like Tidewater Dreams by Isabel Allende and The Last Algorithm by Andy Weir sound plausible enough, but they're completely fictional—fabricated by AI and published without anyone catching the error. Of the fifteen books recommended in…
I wanted to use today’s newsletter to share some updates and additional perspectives on the controversy.
First, the Sun-Times Guild, the union that represents editorial employees at the newspaper, forwarded me this message:
The Sun-Times Guild is aware of the third-party “summer guide” content in the Sunday, May 18 edition of the Chicago Sun-Times newspaper. This was a syndicated section produced externally without the knowledge of the members of our newsroom.
We take great pride in the union-produced journalism that goes into the respected pages of our newspaper and on our website. We’re deeply disturbed that AI-generated content was printed alongside our work. The fact that it was sixty-plus pages of this “content” is very concerning — primarily for our relationship with our audience but also for our union’s jurisdiction.
Our members go to great lengths to build trust with our sources and communities and are horrified by this slop syndication. Our readers signed up for work that has been vigorously reported and fact-checked, and we hate the idea that our own paper could spread computer- or third-party-generated misinformation. We call on Chicago Public Media management to do everything it can to prevent repeating this disaster in the future.
And the paper published a response of its own, explaining where the AI-generated content came from and what it plans to do about the situation:
On Sunday, May 18, the print and e-paper editions of the Chicago Sun-Times included a special section titled the Heat Index: Your Guide to the Best of Summer, featuring a summer reading list, that our circulation department licensed from King Features, a unit of Hearst, one of our national content partners. The special section was syndicated to the Chicago Sun-Times and other newspapers.
To our great disappointment, that list was created through the use of an AI tool and recommended books that do not exist. We are actively investigating the accuracy of other content in the special section. We will provide more information on that investigation when we have more details.
What happened
The Chicago Sun-Times has committed its strong journalism resources to local coverage in the Chicago region. Our journalists are deeply focused on telling the stories of this city and helping connect Chicagoans with one another. We also recognize that many of our print readers turn to us for national and broader coverage beyond our primary focus on the Chicago region. We’ve historically relied on content partners, such as King Features, for syndicated materials to help supplement our work, including national articles as well as comics and puzzle books.
King Features worked with a freelancer who used an AI agent to help build out this special section. It was inserted into our paper without review from our editorial team, and we presented the section without any acknowledgement that it was from a third-party organization.
King Features released a statement to Chicago Public Media saying it has “a strict policy with our staff, cartoonists, columnists, and freelance writers against the use of AI to create content. The Heat Index summer supplement was created by a freelance content creator who used AI in its story development without disclosing the use of AI. We are terminating our relationship with this individual. We regret this incident and are working with the handful of publishing partners who acquired this supplement.”
We are in a moment of great transformation in journalism and technology, and at the same time our industry continues to be besieged by business challenges. This should be a learning moment for all journalism organizations: Our work is valued — and valuable — because of the humanity behind it.
At Chicago Public Media, we are proud of our credible, independent journalism, created for and by people. And part of the journalistic process is a commitment to acknowledging mistakes. It is unacceptable that this content was inaccurate, and it is equally unacceptable that we did not make it clear to readers that the section was produced outside the Sun-Times newsroom. Our audiences expect content with our name on it to meet our editorial standards.
…
We are committed to making sure this never happens again. We know that there is work to be done to provide more answers and transparency around the production and publication of this section, and will share additional updates in the coming days.
Other takes:
Sun-Times columnist Neil Steinberg meditated on the mistake:
I wish I could say the answer is to avoid AI. But AI is here, a fact of life that can’t be wished away. I could pronounce myself AI-free, like those diehards sticking to their typewriters as word processors swept newsrooms in the 1980s.
That’s a pose. As if to disabuse me of that notion, I tripped over AI on my Wednesday column and didn’t even know it.
I had plugged “statues of real women in Chicago” into Prof. Google, and the first entry was “The Jane Addams memorial sculpture, Helping Hands, by Louise Bourgeois, is in Chicago Women’s Park.” Which worked for me.
Careless.
Because the Addams sculpture isn’t of Addams, but of various disembodied hands. I missed that. A producer from WTTW pointed it out Wednesday morning. Luckily, I had also missed the Lorraine Hansberry sculpture unveiled at Navy Pier last September. So a swap: out with Helping Hands, in with the author of “A Raisin in the Sun.” We fix stuff all the time — ideally before publication; sometimes after.
Every story selected, every writer assigned, every word set down or excised, is a choice, an exercise in judgment. Each day the result of those decisions pops up online and is tossed on doorsteps. The result is never perfect, but as close to perfect as we can make it. Last Sunday was an organizational failure. But there were many Sundays before, and hopefully, many Sundays to come. We’ll do better.
I found myself nodding in agreement with writer Martha Bayne’s reflection on the scandal, published in her newsletter.
Drawing on her own experience as an editor at the Chicago Reader, Bayne recalls the richness and integrity local arts journalism once had and contrasts it with today's compromised standards.
Ultimately, she suggests the controversy may fade quickly from public memory, overshadowed by institutional apathy and profit-driven choices, leaving the underlying structural problems unchanged. Oof!
Twenty-four hours later, the outrage cycle on this story has already lowered to a simmer, though the damage remains and, hopefully, some lessons are being learned. But Institutionalized Not Caring remains the coin of the realm, an easy path to profits and power, from the Oval Office on down. Will anyone remember this sordid business in a month? Does it matter if they don’t? My suspicion is that it will go the way of Signalgate, just one more absurd fuck-up in a culture increasingly inured to them.
404 Media, which was quick to cover this story, published an update that included an interview with Victor Lim, the vice president of marketing and communications at Chicago Public Media (the parent company of the Sun-Times). The follow-up contains a few details worth checking out.
Lim said that the Chicago Sun-Times has in recent months focused its reporting resources on local news: "Given the challenges facing the print industry, the Sun-Times has committed its strong journalism resources to local coverage in the Chicago region," he said. "Our journalists are deeply focused on telling the stories of this city and helping connect Chicagoans with one another. That said, we also recognize that many of our print readers turn to us for national and broader coverage beyond our primary focus. We’ve historically relied on content partners for this information, but given recent developments, it's clear we must actively evaluate new processes and partnerships to ensure we continue meeting the full range of our readers’ needs."
Lim said “we are trying to figure out how to update our own internal policies to make sure this doesn’t happen again and require internal oversight over things like this … this is something we would never condone.” He said the organization is working on guidelines for its own journalists about how, and whether, AI can be used. For the moment, he said, the draft policies allow AI to do things like summarize documents and analyze data, but human journalists must verify the accuracy of anything the organization publishes: “We are committed to producing journalism that’s accurate, ethical, and human,” he said.
And finally, Poynter’s Tom Jones interviewed Tony Elkins and Alex Mahadevan, who co-authored Poynter’s AI ethics handbook:
Tony Elkins, a member of Poynter’s faculty who co-authored Poynter’s AI ethics handbook, told me, “I can say with certainty that we all knew something like this was likely to happen. One of the prevailing themes is having a human in the loop. Generative AI has a lot of potential use cases, but we’re still in the experimentation phase. The technology simply hallucinates too much to grant it any amount of autonomy. What seemed to happen here, based on the reporting, is a human failure by a freelancer, and an organizational failure to ensure it knew how AI tools are being used.”
Unfortunately, even though it was not directly responsible for the content, the Sun-Times in particular ends up taking a bit of a PR hit.
Alex Mahadevan, director of Poynter’s MediaWise and a co-author of Poynter’s AI ethics handbook, told me, “I don’t even know where to start with this massive screw-up. Right when newsrooms are trying to build trust with audiences about legitimate ways they’re using AI, this guy comes in and derails it. We’ve been saying for years now that if you’re going to use AI to write something your readers will actually see, you have to have a human in the loop. You need to have a real editor fact-checking what comes out of these tools.”
The attitude of "AI is here and we need to accept it, otherwise we are like someone still using a typewriter instead of a word processor" is a very specific fallacy that I see everywhere that is so annoying to me. You could call it the "technology equivalence" fallacy. People don't use word processors over typewriters because all technology is inherently good, they do it because it allows the same job (writing) to be done better (in this case due to the ability to edit, copy/paste, and print en masse). In this case we have a clear example where AI allows the same job (writing) to be done worse (with blatant factual inaccuracies). AI isn't inherently good by virtue of simply being a technology; it has to prove itself good by making it possible to do the same tasks better. Instead it's just lowering the bar for how well we want to do journalism, and then claiming to have made it more efficient. The writer could just as well have phoned his cousin Harvey and had him make up a fake summer reading list, and that's the level of quality we're now supposed to accept, just because it has the SHEEN of technology
Is it just me, or does anyone else find the Tech Bro stance (from an ethicist, no less!) generally infuriating? We know AI lies and makes shit up, but we're going to release it into the wild anyway and issue a warning that a human should check AI's work in certain circumstances, to cover our legal, ethical, and moral responsibility for a massive and fundamental flaw in the tech. "Move fast and break things" in the field of AI is potentially civilization-ending. AI that lies and makes things up will eventually (perhaps much sooner than we anticipate) exceed the capacity of human fact-checkers to keep up. When that happens, there will be no way to separate truth from lies or facts from fiction. AI will control humanity, not the other way around, and AI of that sort simply cannot be allowed to exist.