New study highlights the virality of hate
And why "dunking" on our political opponents only fuels the problem.
A new research article published in the Proceedings of the National Academy of Sciences (PNAS) doesn’t bode well for efforts to fight political extremism and polarization. The paper’s authors analyzed 2,730,215 Twitter and Facebook posts from members of the U.S. Congress and the news media, and concluded that the quickest route to social media success is attacking members of the “out-group.”
Steve Rathje, one of the paper’s authors, has a great Twitter thread summarizing the findings. He puts it better than I can, for obvious reasons.
This is all a choice. Facebook and Twitter long ago shifted from chronological to algorithmic timelines.
Facebook knows that its algorithm rewards extreme rhetoric and anger — it just doesn’t care. Last year, The Wall Street Journal uncovered an internal report Facebook put together in 2018 that found that the company’s “algorithms exploit the human brain’s attraction to divisiveness.” Even worse, the report’s authors found that if left in place, the algorithm would continue to serve “more and more divisive content in an effort to gain user attention and increase time on the platform.”
It’s easy to blame “the human brain’s attraction to divisiveness,” but without the profit- and ideology-driven infrastructure built by social media giants, this problem would not exist at anything like its current scale.
Last year, The New York Times reported on a temporary change Facebook made to its news feed that boosted publishers with a high “news ecosystem quality” score, an internal ranking metric. The change grew out of planning that began months earlier to guard against the possibility that then-President Donald Trump would use his social media accounts to contest the results of the election in the event of his loss.
From the Times:
The change was part of the “break glass” plans Facebook had spent months developing for the aftermath of a contested election. It resulted in a spike in visibility for big, mainstream publishers like CNN, The New York Times and NPR, while posts from highly engaged hyperpartisan pages, such as Breitbart and Occupy Democrats, became less visible, the employees said.
It was a vision of what a calmer, less divisive Facebook might look like. Some employees argued the change should become permanent, even if it was unclear how that might affect the amount of time people spent on Facebook. In an employee meeting the week after the election, workers asked whether the “nicer news feed” could stay, said two people who attended.
After the 2016 election, Facebook developed “Project P,” an anti-propaganda initiative that would have cracked down on political misinformation from obscure websites pushing hoax stories (i.e., actual fake news). Facebook Vice President of Global Policy Joel Kaplan (who would later make headlines for attending Brett Kavanaugh’s Supreme Court confirmation hearing and then throwing a party in his honor) reportedly quashed the plan because it would disproportionately affect conservative pages, which were more likely to share the false information.
In a related effort, Facebook also surveyed users about whether specific posts were “good for the world” or “bad for the world.” What it found, according to the Times, was that hyper-viral posts were more likely to be judged “bad for the world.” So why did Facebook’s algorithm keep promoting the most divisive and angry content, and why wouldn’t the company do anything about it? Because prioritizing accuracy over rage-bait led to less Facebook use, and when pressed to make decisions about the company’s future, leadership chose the path that would result in more use, not less.
But the problem isn’t entirely the fault of social media companies, and we really need to acknowledge this.
Ever find yourself quote-tweeting a bad take online just to mock it? I have. Constantly. The impulse is fueled by both a subconscious desire to broadcast our own beliefs and the “sunlight is the best disinfectant” approach to misinformation. Unfortunately, it’s part of the reason these algorithms favor extreme points of view. As far as social media companies are concerned, a share of approval and a share of disapproval are essentially the same: both count as engagement, engagement is treated as good, and so the algorithm learns that those are the posts to promote. In the end, “dunking” on our political adversaries only helps them.
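To see why intent doesn’t matter, here’s a minimal sketch of engagement-based ranking — not any platform’s actual code, and the post names, field names, and weights are all hypothetical. The point is structural: a share made to mock a post and a share made to endorse it feed into the exact same score.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    approving_shares: int
    mocking_shares: int  # "dunk" quote-tweets and hate-shares

def engagement_score(post: Post) -> float:
    # The ranking model only sees interaction counts; the sharer's
    # intent is invisible to it, so dunks count the same as endorsements.
    total_shares = post.approving_shares + post.mocking_shares
    return post.likes + 2.0 * total_shares  # shares weighted higher (assumed)

calm = Post("measured policy analysis", likes=120, approving_shares=10, mocking_shares=0)
rage = Post("inflammatory hot take", likes=40, approving_shares=15, mocking_shares=60)

# The inflammatory post outranks the calm one purely on dunk-driven engagement.
ranked = sorted([calm, rage], key=engagement_score, reverse=True)
print([p.text for p in ranked])  # the hot take comes first
```

Under these made-up numbers, the mocking shares alone are enough to push the inflammatory post to the top of the feed, which is exactly the dynamic the PNAS paper describes.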
It’s a tough habit to break, and it’s certainly one that I’m guilty of. But I’m going to try to make a point of not “dunking,” even when I really want to. You know, for the sake of the internet.