Headlines like "'Healthy' Doctor Dies Two Weeks After Receiving Covid Vaccine; the Government Is Investigating Why," from the Chicago Tribune, or "A Surprising Number of Frontline Health Workers Are Rejecting the Covid Vaccine," from Forbes, were two of the most viral messages on Facebook at the beginning of 2021, and among those that most affected the covid vaccination rate in the US. Fake news is more persuasive piece by piece, but its reach was much smaller; what proved decisive was the enormous audience of vaccine-skeptical stories from reliable media.
This is the main conclusion of a new article published in the journal Science that analyzes the impact of links about vaccines on Facebook between January and March 2021. Links flagged as fake were viewed 8.7 million times, just 0.3% of the 2.7 billion views of vaccine content in that period, according to platform data. By contrast, headlines not flagged as misinformation but suggesting that vaccines were harmful, many of them in traditional media, were viewed hundreds of millions of times. The difference in reach is so extraordinary that outright misinformation matters far less by comparison.
"Our analysis suggests that Facebook's fact-checkers identify the most harmful misinformation, so Facebook was doing a 'decent' job of that," says Jennifer Allen, a researcher at the Massachusetts Institute of Technology (MIT) and co-author of the article. "But other stories can go viral online, and malicious actors can use rigorous stories to promote misleading narratives, something platforms should do more to address. The media should also be careful when writing their headlines, since their content can be presented out of context," she adds. A headline like the Chicago Tribune's, posted in a group opposed to vaccines, with that charged context, can be devastating.
This finding shifts part of the traditional focus on fake news and misinformation, a reminder that traditional media must also watch what they publish, especially in an era when virality is often driven by the headline alone. "Competition for clicks is a challenge," Allen says, "but I don't think that lets the media off the hook. Journalists should keep in mind that, online, people read only the headlines, and stories can be taken out of context. They should strive to avoid possible misinterpretations of their work."
The authors calculate the real impact these headlines had from the number of users who saw them. The negative effect of misleading headlines on vaccination was 46 times greater than that of blatant misinformation. In a commentary in Science on the article, Cambridge University researcher Sander van der Linden uses Facebook's 233 million American users to calculate that those headlines kept at least 3 million people from getting vaccinated. "It's a rough estimate," Allen warns.
The study estimates that vaccine-skeptical content on Facebook reduced vaccination intention in the US by 2.3 percentage points, but the intention to get vaccinated can differ from the final decision to actually do so. "We assume, based on other research, that vaccine uptake is 60% of vaccination intention, and from there we get the figure of 3 million. That figure is speculative, but it suggests that the potential impact of these headlines could be considerable."
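The arithmetic behind that figure can be reconstructed from the numbers quoted above; here is a minimal sketch using the article's own inputs (233 million American Facebook users, a 2.3-point drop in intention, 60% conversion of intention into uptake), with the chain of multiplications itself being our assumption rather than the authors' published method:

```python
# Rough reconstruction of the ~3 million estimate from the figures in the
# article; the calculation is an illustrative assumption, not the study's code.
us_facebook_users = 233_000_000   # American Facebook users (per Van der Linden)
intention_drop = 0.023            # 2.3-point reduction in vaccination intention
intention_to_uptake = 0.60        # assumed share of intention that becomes uptake

deterred = us_facebook_users * intention_drop * intention_to_uptake
print(f"{deterred / 1e6:.1f} million people deterred")  # ~3.2 million
```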
The article focuses on the effect of headlines about vaccines, but Van der Linden believes the approach could easily be replicated in other areas, such as politics: "It only depends on continued access to relevant data," he says.
Female, older and conservative
In this debate between fake news with little reach and apparently serious headlines that go viral, Science publishes a second article on a known but rarely measured phenomenon: superspreaders. They are a disinformation-focused subgroup of the "semibots," as EL PAÍS baptized them in a 2019 investigation titled "200 tweets per hour from the couch: who is behind the most active political profiles on Twitter." They are the small group of users who turn their accounts into machine guns of retweets of misinformation or biased content. Thanks to this new research, we know that their real impact on public debate is greater than it seems.
The study followed a panel of more than 664,000 users identified as registered voters during the 2020 US presidential election. Among all of them, a small group of just 2,107 accounts, 0.3% of the panel, distributed 80% of the fake news. These are the superspreaders. That group alone reached 5.2% of registered voters on Twitter. "These findings highlight a vulnerability of social media for democracy, where a small group of people distorts the political reality of many," the article's authors write.
The research has also identified some personal characteristics of this group: they are mainly women, older and conservative. This coincides with a study in Nature from the summer of 2023, in that case with Facebook data, which showed that the overwhelming majority of consumers of misinformation were conservative. The new study is limited to Twitter due to a lack of data from other social networks, says Nir Grinberg, a researcher at Ben-Gurion University of the Negev (Israel) and one of the co-authors: "I would have liked to have the ability to answer questions about other networks with empirical evidence, but the availability of data from social media platforms limits this type of research."
Twitter users [today X] who followed these accounts tended to be more exposed to misinformation, and repeated exposure is a prominent factor in coming to believe falsehoods, according to the article. The impact of these users, who dedicate a good part of their days to retweeting, is not trivial: a candidate who wanted to buy the influence of such a group of superspreaders would have had to spend 20 million dollars. "Not only do they find a considerable audience on the networks, but they turned out to be influential members of their communities, providing a quarter of the fake news their followers see," the article says.
These advances in understanding disinformation open up options for limiting its reach in new ways. "Content moderation is a balance between freedom of expression and potential harm," Allen says. "But it is difficult for platforms to measure how harmful content can be." Until now, platforms have limited themselves to reducing the reach of flagged falsehoods while ignoring this other, more harmful type of content, which violates the spirit, but not the letter, of the rules.
"Our methodology allows platforms to first identify content that potentially has a negative impact and then develop policies," says Allen, who worked at Meta before starting her PhD at MIT. "As a first step, Facebook could prioritize which content it sends to fact-checkers based on its potential harm: its persuasiveness multiplied by its potential audience, with pages that have many followers given higher priority," she explains.
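As a minimal sketch of the triage heuristic Allen describes (potential harm as persuasiveness times potential audience), with scores and follower counts invented purely for illustration:

```python
# Hypothetical sketch of the triage heuristic described in the article:
# estimated harm = persuasiveness x potential audience.
# All values below are invented for illustration.
posts = [
    {"id": "a", "persuasiveness": 0.9, "followers": 1_000},      # blatant, tiny page
    {"id": "b", "persuasiveness": 0.2, "followers": 5_000_000},  # mild headline, huge page
    {"id": "c", "persuasiveness": 0.6, "followers": 200_000},
]

def estimated_harm(post: dict) -> float:
    """Potential impact: how persuasive the content is times how many may see it."""
    return post["persuasiveness"] * post["followers"]

# Send the highest-impact content to fact-checkers first.
for post in sorted(posts, key=estimated_harm, reverse=True):
    print(post["id"], f"{estimated_harm(post):,.0f}")
# "b" ranks first: a mildly misleading headline on a huge page outweighs
# blatant misinformation with a small audience, the article's core point.
```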
Allen also proposes, as a valid alternative, something similar to X's Community Notes, a collaborative fact-checking system among users. "It may be a way to mitigate the impact of harmful stories that pass a fact-check but lack relevant context," Allen says.