The recent DANA crisis in Valencia has once again exposed a worrying reality: on social networks, fake news spreads like wildfire while fact-based rebuttals advance at a snail’s pace. A new study led by Princeton University psychologist Killian McLoughlin and published in the journal Science explains this phenomenon and shows that the fuel feeding hoaxes is moral indignation: social media posts containing false information provoke more anger than posts containing reliable information, and that emotional response facilitates their spread.
For the study, the authors analyzed more than one million posts on Facebook and several thousand on the social network X (formerly Twitter), and also carried out two behavioral experiments with 1,475 participants. “We show that 1) misinformation sources provoke more outrage than reliable sources; 2) outrage facilitates the sharing of misinformation at least as strongly as the sharing of reliable news; and 3) users are more willing to share misinformation that provokes outrage without reading it first,” they write.
In other words, being angry and feeling the need to align ourselves morally with our group makes us let our guard down, and that is the weak point that agents who try to spread fake news take advantage of. Consequently, the authors conclude, “disinformation that provokes outrage may be difficult to mitigate with interventions that assume that users want to share accurate information.”
Exploiting base instincts
For the authors, outrage is defined as a mixture of anger and disgust triggered by perceived moral transgressions. “Most current efforts to combat the spread of misinformation online focus on facts, assuming that users want to share accurate information,” McLoughlin tells elDiario.es. “Our results suggest that this assumption is not always true, and that such efforts are unlikely to curb the sharing of outrageous content. This is because outrage sharing is motivated by factors unrelated to truthfulness, such as signaling group loyalty or spreading moral stances.”
Sharing outrage is motivated by factors unrelated to truthfulness, such as signaling group loyalty or spreading moral stances.
Killian McLoughlin
— Princeton University psychologist and lead author
McLoughlin and his team also point out that the way social media platforms rank the content shown to users is likely to influence the spread of misinformation.
“Outrage-inducing misinformation is likely to spread further in part due to algorithmic amplification of engaging content,” they write. “This is important because algorithms can improve the ranking of news articles associated with outrage, even if a user intended to express outrage toward the article for containing misinformation.”
Their results coincide with those of studies such as the one published this week in the journal Nature Human Behaviour, according to which around 75% of social media posts are shared without users clicking on the link, and the most extreme content is reshared far more than politically neutral content.
A dark motivation
“These results show that people do not even stop to read outrageous news before sharing it,” says Clara Pretus, an expert in misinformation at the Hospital del Mar Medical Research Institute (IMIM) who did not take part in the study. “Our motivation to share outrageous content is greater than our motivation to share truthful content.”
Our motivation to share outrageous content is greater than our motivation to share truthful content.
Clara Pretus
— Disinformation expert at the Hospital del Mar Medical Research Institute (IMIM)
For the specialist, the authors show that our indignant reactions are a key part of how misinformation spreads, and this matters for the design of strategies against hoaxes. “A rationalist framework is being imposed when designing strategies against disinformation,” she says. “This framework assumes that people are motivated to share truthful information, but although that is important, it is not enough: strategies against misinformation have to take these other motivations into account.”
Luis Miller, doctor in Sociology and senior scientist at the CSIC, points out that because many people share fake news to signal their membership in a group, the usual strategies for fighting misinformation become ineffective. “That is, if someone shares fake news without even reading it, and for reasons that have nothing to do with the news itself, providing more information (as fact-checking or community notes do) does not seem likely to solve the problem,” he says. “If this is happening on a massive scale, we will have to think of other formulas to combat misinformation.”
New strategies against lying
“It is true that we are fighting it in a very insufficient way,” Pretus acknowledges. One avenue for improvement, she points out, would be design changes to the platforms that let users verify information faster and at a larger scale than community notes allow. “But the platforms would have to be forced to do it,” she asserts. “In many cases, perhaps most, we are fighting misinformation ‘badly’ because we do so by giving more information,” Miller emphasizes. “But if someone shares something not to make that information known, but to signal their identity, more information is of almost no use.”
If someone shares something to signal their identity, more information is of almost no use.
Luis Miller
— Doctor in Sociology and senior scientist at the CSIC
Given that the game is played on the terrain of identity and emotions, not information and reasons, perhaps the strategy against fake news should rely on more effective reputation mechanisms, according to Miller. “That is, calling out whoever has shared something false,” he explains. “I think community notes may work better by shaming the recipient (an emotional punishment) than by getting them to see the accurate information.”
A flood of hoaxes
Regarding the recent explosion of hoaxes surrounding the tragic floods caused by the DANA that struck Valencia on October 29, Miller believes there were strong incentives to align with one’s own side. “In addition, these are events where emotions predominate, through dramatic images and stories, and few people stop to verify the information before sharing it,” he points out.
“I think what has happened here can be seen as a moral transgression from any perspective, because it attacks one of the fundamental pillars of morality, which is people’s well-being,” adds Pretus. “Since it can be used to stir indignation by pointing to a moral transgression from any ideological position, it is easy to use this case to generate misinformation.”