A team of researchers from the American universities of Princeton, Yale, Northwestern and Harvard has revealed the mechanism by which disinformation uses outrage as a key tool to spread on social networks. The work, published in the journal ‘Science’, analyzes data from millions of interactions on platforms like Facebook and Twitter, as well as direct experiments with users, to understand the role that emotions play in the spread of both fake and true news.
“Outrage is highly attractive and does not need to be precise to achieve its communicative objectives, which makes it an attractive signal to incorporate into misinformation,” explain the authors of the study. Generating messages that provoke indignation thus becomes the ideal tool for those seeking to manipulate information on social networks, according to the researchers, led by Killian L. McLoughlin (Princeton) and William J. Brady (Northwestern).
Based on an analysis of how users interacted with more than one million Facebook links and 44,529 tweets on Twitter, together with two behavioral experiments with 1,475 people, the authors reached three compelling findings. The first is that “sources of misinformation generate more outrage than reliable sources.” The second is that when a piece of news provokes indignation, users share it without differentiating whether it is true or not. The third, even more worrying, is that “users are more likely to share disinformation that provokes outrage without having read it first.”
To this process is added another reality of social networks that the researchers also review in their study: since outrage generates high levels of interaction, the algorithms in charge of suggesting posts tend to amplify this content, regardless of its veracity. However, the authors recognize that analyzing this relationship between outrage and interaction is a very complex challenge, given the opacity of the algorithms that run social networks and the restrictions on access to data.
Outrage and misinformation: a vicious circle
The authors begin by defining disinformation as “false and misleading information” and link it to “increased political polarization, anti-democratic sentiment, and vaccine hesitancy.” The investments and initiatives undertaken by digital platforms to deal with it “have had only limited success in slowing its spread.”
The reason for this failure is the power of indignation. Firstly, because “it is highly attractive,” according to the study, which has verified that “social media posts that express indignation receive more ‘likes’ and are shared more,” generating a vicious circle that, in the words of the text, “trains users to express more outrage and ranking algorithms to amplify it.” Furthermore, these expressions of anger serve other communicative objectives that do not depend on the accuracy of the information, “such as signaling loyalty to a political group or disseminating a moral position.”
Added to this is another finding: “individuals who express indignation [on social networks] are seen as more trustworthy.” A reality that, according to the authors, “suggests that news sources could gain a credibility advantage by publishing content that generates outrage.” The study also indicates that, on Facebook, links that provoke more angry reactions are more likely to be shared without being read, especially those coming from sources of misinformation. For the authors, “indignation increases the relative strength of emotional motives such as political or moral position,” and these features “provide strong incentives for disinformation spreaders to generate content that provokes outrage.”
All of this, regardless of whether the content is true or not. In fact, the study shows that “social network users sometimes share information that they know is inaccurate to satisfy non-epistemic reasons [not related to accuracy], such as signaling their political affiliation or moral stance, despite potential reputational costs.” Costs that turn out to be low because, as the authors explain, “if it is discovered that a user shared disinformation, they can claim that they only wanted to express that the content is ‘outrageous if it is true,’ preserving epistemic trust while reinforcing their moral trust.”
The study’s findings suggest that social media’s current strategies to prevent misinformation, focused on shifting responsibility to users (such as reminding them to check accuracy before sharing), may be insufficient. “Online misinformation remains a threat to a healthy digital space and to democracy,” the authors point out, demanding “effective solutions” in the “legal and policy guidelines that seek to mitigate these harms.”
Will Bluesky be the solution to misinformation?
The study is based on the interactions of American users on Facebook and Twitter on very specific dates, from the summer of 2020 to the first months of 2021, a period that coincides with the heated electoral campaign between Trump and Biden and the no less eventful inauguration of the latter, with the assault on the Capitol included. As their main source of misinformation, the researchers took the Internet Research Agency (IRA), “a Russian organization whose purpose was to spread disinformation and discord in American politics,” so the authors are cautious about whether their results “generalize to other cultural contexts or social platforms.”
The study therefore says nothing about the current dichotomy between X and Bluesky, but it could help predict the future. Some politicians, mostly left-wing, public figures and media outlets, such as the British newspaper ‘The Guardian’, have announced that they are no longer publishing on X (the old Twitter) because it has become a “toxic media platform,” especially since it is headed by the billionaire Elon Musk, one of the most active supporters of the Trump campaign in the recent presidential elections.
As an alternative, they have opted for Bluesky, a network created in 2019 by Jack Dorsey, founder of Twitter, with the intention of being a decentralized version of the old social network of the little bird; it remained almost dormant with few users, operating by invitation only until February of this year. Dorsey left the management team in May and deleted his account in September; the CEO is now Jay Graber, who also holds a majority stake. In a few months it has surpassed 22 million users (still very far from X) and, with an open-source approach, presents itself as “made by the people and for the people.”
However, if the results of the study are extrapolated, it is only a matter of time before it ceases to be the Eden of social networks, as its users seem to describe it. If the process described above holds, the increase in interactions will lead to an inevitable growth of misinformation and, with it, of indignation, which will once again act as an engine of further sharing. And there, the algorithms will have the last word.