Controlling hate speech on the internet is one of the great challenges of the information age. Nobody denies this, but nobody knows how to do it effectively either. Removing those who spread toxic content, for example, is one of the alternatives some platforms have chosen. Now, a study carried out internally by Facebook, with 26,000 of its users, shows that excluding the leaders of extremist communities is an efficient way to dismantle hate speech on social networks, especially in the long term. Removing around a hundred such accounts had a huge impact, because denying a megaphone to core members improves the health of the overall network.
Some previous studies had suggested that excluding these harmful profiles on platforms such as Twitter, Reddit or Telegram helped reduce unwanted activity, and even hate speech in general. But they were not enough to demonstrate the cause-and-effect relationship established by this study, carried out by researchers from Meta, Facebook’s parent company, and published today in PNAS.
To reach these conclusions, Daniel Robert Thomas and Laila A. Wahedi analyzed the effects on the audiences of six communities whose most active representatives were expelled from the platform. Specifically, the Meta researchers wanted to understand to what degree these audiences continued to view, publish and share hateful content, or to interact with other profiles, after their leading figures were removed. The results show that, on average, all of these factors decreased. “They reduce their consumption and production of content that incites hate, and they interact less with other members of the audience,” the authors write in the study.
Following the profile exclusions, users went on to view almost half as much hateful content on a daily basis. On average, those who had seen around five posts with toxic content went on to see fewer than three. Furthermore, by ceasing to interact with members of the same toxic community, users began to be exposed to other types of content, groups or communities, which were not essentially violent in nature. None of the data in the study can be linked to the original user accounts, due to Facebook’s privacy protection terms.
The audience most loyal to organizations that disseminate hate may look for other sources after the expulsion of the professional hate-mongers. However, this reaction is ephemeral and fades within just two months. The audience further removed from these leaders reduces its interaction with such content from the outset. According to the study, this is positive, because that group is the one most at risk of being influenced by toxic communities.
Overall, the results suggest that these removals can lead to “healthier” social networks. “Leader removal and network degradation efforts can reduce the ability of hate organizations to operate successfully on the internet,” the researchers explain.
It is not easy, in any case. When banned from popular platforms, these profiles can easily create new ones and try to rebuild a network. They can also migrate to other platforms. Moreover, other toxic organizations that remain active can take their place and co-opt their supporters, who would continue to be exposed to harmful content. To make the removal strategy more effective, the authors propose deleting multiple profiles at once, because doing so “prevents organizations from rebuilding their networks,” making it harder for members to find each other again: no accounts remain to coordinate those returning to the platform.
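To see why simultaneous removal can matter, consider a toy network simulation. The sketch below is not the study’s methodology; the graph model, the rebuilding rule and all parameters are invented for illustration. It compares removing ten hub accounts all at once against removing them one by one while the community forms new ties in between.

```python
# Toy illustration (not the study's method): compare removing hub
# accounts all at once vs. one at a time while the network rebuilds.
import networkx as nx
import random

random.seed(42)

def build_community(n=200, m=3):
    # A scale-free graph as a rough stand-in for a community held
    # together by a few highly connected "leader" accounts.
    return nx.barabasi_albert_graph(n, m, seed=42)

def top_hubs(g, k):
    # The k highest-degree nodes play the role of leaders.
    return sorted(g.nodes, key=g.degree, reverse=True)[:k]

def rebuild(g, new_edges=20):
    # Crude rebuilding step: remaining members form random new ties.
    nodes = list(g.nodes)
    for _ in range(new_edges):
        u, v = random.sample(nodes, 2)
        g.add_edge(u, v)

def largest_component_share(g):
    # Share of members still reachable from one another: a rough
    # proxy for the community's remaining ability to coordinate.
    return len(max(nx.connected_components(g), key=len)) / g.number_of_nodes()

# Simultaneous removal: all ten leaders go at once, no time to rebuild.
g1 = build_community()
g1.remove_nodes_from(top_hubs(g1, 10))
print("simultaneous:", round(largest_component_share(g1), 3))

# Staggered removal: the community rebuilds between single removals.
g2 = build_community()
for _ in range(10):
    g2.remove_nodes_from(top_hubs(g2, 1))
    rebuild(g2)
print("staggered:   ", round(largest_component_share(g2), 3))
```

In runs of this toy model, the staggered network typically keeps a larger connected core, since new ties form before the next leader is removed; real community dynamics are of course far more complex.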
Hate speech, toxic speech and incivility
If it is a decision left in the hands of the platforms, will they really want to carry it out? Silvia Majó-Vázquez, research associate at the Reuters Institute for the Study of Journalism at the University of Oxford and professor at Vrije Universiteit Amsterdam, explains that content moderation on social networks must “be done seeking a balance between freedom of expression and the preservation of other rights that may be damaged,” so it is essential to differentiate between hate speech, toxic speech and incivility.
In conceptual terms, as Majó-Vázquez explains, incivility is the mildest level, covering informal language that includes disrespect or sarcasm. When the manifestation becomes more extreme and “others are scared away from participating in a conversation,” it becomes toxic speech, which can turn violent. “From a democratic point of view, they are very harmful, because they do not allow for the democratic ideal of public deliberation,” she details by email.
According to this expert, profile suspensions must take these conceptual dimensions into account and rely on manual mechanisms “that can guarantee that freedom of expression is being preserved.” This criterion should also apply to political figures. “You have to carry out an exercise like the one we would do outside the networks, in which the right to freedom of expression of the sender of the message is balanced against the preservation of the other fundamental rights of the audience. The automated mechanisms for deleting messages and suspending accounts must be continually reviewed, and the evaluation of these messages by experts must be prioritized, as some platforms already do with external advisory councils for the most relevant cases,” she underlines.
One of her studies, carried out at the Reuters Institute in seven countries, has shown that the relationship between toxicity and engagement is not always positive: each case is different, depending on the theme of each discussion and how severe the content is. “In the context of the pandemic, analyzing Twitter, the results showed that toxicity and the popularity of toxic content do not go hand in hand. In fact, we see that the most toxic tweets lose popularity with the audience. However, messages with low levels of toxicity do see their popularity levels grow,” says Majó-Vázquez. It is therefore not possible to say whether this relationship is the result of an audience decision “not to reward toxicity” or of the moderation carried out by the platform. “It is something that we cannot answer with the data from our work, but this result questions the belief that toxic content is always the most popular,” she concludes.
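As a rough sketch of the kind of analysis being described (not Majó-Vázquez’s actual methodology), one could score posts for toxicity, bucket them, and compare average engagement per bucket. The data and thresholds below are invented; in a real pipeline, toxicity scores would come from a classifier and engagement from platform metrics.

```python
# Minimal sketch: bucket posts by toxicity score and compare mean
# engagement per bucket. All numbers are invented for illustration.

# Hypothetical sample: (toxicity score in [0, 1], engagement count).
posts = [
    (0.05, 320), (0.10, 410), (0.20, 530),  # mildly toxic
    (0.35, 610), (0.50, 480),               # moderately toxic
    (0.75, 150), (0.90, 90),                # highly toxic
]

buckets = {"low (<0.3)": [], "mid (0.3-0.6)": [], "high (>0.6)": []}
for toxicity, engagement in posts:
    if toxicity < 0.3:
        buckets["low (<0.3)"].append(engagement)
    elif toxicity <= 0.6:
        buckets["mid (0.3-0.6)"].append(engagement)
    else:
        buckets["high (>0.6)"].append(engagement)

# Mean engagement per toxicity bucket.
for label, values in buckets.items():
    print(f"{label}: mean engagement = {sum(values) / len(values):.0f}")
```

With these invented numbers, mean engagement rises from the low bucket (420) to the mid bucket (545) and collapses in the high bucket (120), mirroring the pattern the study describes: mild toxicity gains popularity, extreme toxicity loses it.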