Meta is preparing to overhaul how it combats misinformation on its social networks. The company will scrap the independent fact-checking model it has maintained for the past five years and switch to the method Elon Musk uses on X, which relies on users themselves to flag and contextualize hoaxes.
The change may open a new front between the multinational and Brussels, since the Community Executive already has an open investigation into the model used by X over doubts that it complies with EU law. “Before deploying new policies or features that may have a critical impact on systemic risks, very large platforms, including Meta, have to carry out a prior risk assessment analysis and send it to the Commission,” Commission sources warn this newspaper.
Until now, Meta has relied on collaboration with independent fact-checking agencies. These organizations, such as Newtral, Maldita.es or EFE in Spain, were responsible for analyzing viral content on Facebook, Instagram or WhatsApp to determine its veracity and detect possible deepfakes or misinformation.
The verification process kicked in when content began to be shared massively. If a post was found to be false or misleading, fact-checkers added labels to inform other users. Mark Zuckerberg, however, has dismissed fact-checkers as abetting censorship. “Even if they accidentally censor just 1% of posts, that’s millions of people, and we’ve reached a point where there are too many mistakes and too much censorship,” he says.
The organizations that have been doing this work deny that this is true, but the decision has been made and the change is imminent.
Verifying users
The fact-checking system drew the wrath of Donald Trump and the American far right from its inception, as they attacked the fact-checkers for flagging their hoaxes. Dismantling it was accordingly one of the first changes Elon Musk made when he took control of Twitter, now X.
In its place, Musk implemented an anti-misinformation system based on “community notes”: appendices that users can add to viral tweets to provide context or rebuttals to posts they consider false or misleading. All X members can vote on whether notes added to posts are “helpful,” but not everyone can write them.
To write notes on other users’ posts, it is necessary to attain the rank of “verifying user.” This requires an account more than seven months old with an associated phone number and no recent violations of X’s community standards. Those who meet the conditions can enroll in the program and await approval from the platform.
At first, verifying users can only rate existing notes. After they have rated enough notes, and those notes have accumulated enough ratings to be published, X allows them to start submitting notes of their own. If the rest of the users deem these helpful, they are published alongside the original tweets.
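The progression described above can be sketched as a simple state machine. All thresholds and names below are illustrative assumptions for demonstration; X’s actual criteria and scoring are more complex and not fully specified in this article.

```python
# Illustrative sketch of the Community Notes contributor progression
# described above. Thresholds are assumptions, not X's real values.
from dataclasses import dataclass

RATINGS_TO_UNLOCK_WRITING = 5   # assumed threshold
HELPFUL_VOTES_TO_PUBLISH = 3    # assumed threshold

@dataclass
class Contributor:
    account_age_months: int
    has_phone: bool
    recent_violations: int
    ratings_given: int = 0

    def is_eligible(self) -> bool:
        # Account older than seven months, phone on file, clean record
        return (self.account_age_months > 7
                and self.has_phone
                and self.recent_violations == 0)

    def can_write_notes(self) -> bool:
        # New contributors may only rate; writing unlocks after
        # enough ratings have been submitted
        return self.is_eligible() and self.ratings_given >= RATINGS_TO_UNLOCK_WRITING

@dataclass
class Note:
    text: str
    helpful_votes: int = 0

    def is_published(self) -> bool:
        return self.helpful_votes >= HELPFUL_VOTES_TO_PUBLISH

user = Contributor(account_age_months=12, has_phone=True, recent_violations=0)
assert user.is_eligible() and not user.can_write_notes()

user.ratings_given = 5          # rate existing notes first
assert user.can_write_notes()

note = Note("Adds missing context to a viral post")
note.helpful_votes = 3          # other users mark the note helpful
assert note.is_published()
```

The key design point the sketch captures is the gating: rating privileges come before writing privileges, and publication depends on other users’ votes rather than on the author.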
Doubts about the X system
Musk defends that his method is more respectful of freedom of expression than that of verifiers, since it is not based on organizations with special powers, but on the wisdom of the masses, capable of detecting and neutralizing specific errors. Other experts think otherwise.
A study by the Center for Countering Digital Hate published in October 2024 found that up to 74% of community notes on misleading posts about the US elections on X were never shown. Furthermore, misleading posts received 13 times more views than the community notes that corrected them, according to this NGO, created in 2017 to combat hate and misinformation on digital platforms. In total, the analyzed hoaxes that received no community notes racked up 2.2 billion views.
“This algorithm, which was supposed to solve the problem of biased fact-checkers, basically means no fact-checking,” says Alex Mahadevan of the Poynter Institute, which is focused on promoting ethical practices in the media. Mahadevan attributes the notes’ lack of visibility to the requirement of an “ideological consensus” before a note goes public, meaning that people from different points on the political spectrum must agree on its accuracy. He notes that this is problematic in a polarized environment, where consensus on controversial issues is unlikely to be reached.
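The “ideological consensus” requirement Mahadevan describes can be illustrated with a toy scoring rule: a note becomes visible only if raters from different viewpoint groups agree it is helpful. The function below is a deliberately simplified sketch of that idea; the groups, thresholds, and averaging rule are assumptions, not X’s published bridging algorithm.

```python
# Toy illustration of consensus-gated note visibility: a note is shown
# only when raters from *different* viewpoint groups agree it is helpful.
def note_is_visible(ratings, min_group_score=0.5):
    """ratings: list of (viewpoint_group, helpful: bool) tuples.

    The note is visible only if at least two distinct groups rated it
    and every represented group, on average, rates it helpful.
    """
    by_group = {}
    for group, helpful in ratings:
        by_group.setdefault(group, []).append(1.0 if helpful else 0.0)
    if len(by_group) < 2:
        return False  # no cross-group consensus possible
    return all(sum(v) / len(v) >= min_group_score for v in by_group.values())

# Cross-group agreement -> visible
assert note_is_visible([("left", True), ("right", True), ("right", True)])
# One group dissents -> hidden, despite a helpful majority overall
assert not note_is_visible([("left", True), ("left", True), ("right", False)])
# Only one group rated -> hidden
assert not note_is_visible([("left", True), ("left", True)])
```

The second and third cases show exactly the failure mode the article describes: on polarized topics, where one side rarely rates the other’s corrections as helpful, notes tend to stay hidden no matter how many total helpful votes they receive.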
Issues of this kind led the European Commission to open an investigation into X in December 2023 over the community rating system that Meta will now adopt.
An unsolved problem
The problems with community notes do not mean that the fact-checker-based system is perfect. That program faces three main criticisms: the first is its legitimacy, since decisions about veracity rest with journalists rather than judges. The second is its limited impact, since hoaxes spread more virally than their debunkings. The third is its doubtful effectiveness, given that both misinformation and polarization on social networks have increased during its existence.
Borja Adsuara, a jurist specializing in digital law and strategy, points to opacity in the selection of fact-checkers and in the criteria they use to determine the veracity of information. This lack of transparency “generates distrust” and fuels accusations of ideological bias.
In conversation with this outlet, Adsuara argues that the Digital Services Act (DSA), in its effort to combat disinformation, can lead to “indirect censorship” by governments. The law obliges platforms to take “appropriate measures” against disinformation but does not clearly define what counts as disinformation: “Social networks were forced to intervene with editorial powers to suppress content that was not illegal, which is very dangerous.”
The EU’s Digital Services Act forced social networks to intervene with editorial powers to remove content that was not illegal, which is very dangerous
Borja Adsuara
— jurist specialized in digital rights
This ambiguity allows the European Commission, through codes of good practice, to pressure platforms to remove content that is not illegal but that governments consider “inappropriate” or “harmful.” “Through these agreements, social networks are told what they must allow and what they must not,” the jurist reflects, citing hate speech as an example: offensive comments that fall short of criminal conduct can be deleted.
“The right to freedom of expression includes the right to say stupid things. The only limit is the law. Making homophobic, tasteless jokes is not a crime. Someone may be an idiot, and others, exercising their own freedom of expression, may say so. Removing content of this type amounts to censorship,” Adsuara explains.
The common front of X and Meta before the EU
The jurist points out that all this exposes an underlying problem: there is no consensus on how to combat misinformation or how to moderate debate on increasingly polarized and aggressive social networks. But the common front that Mark Zuckerberg and Elon Musk now form could escalate the issue beyond the platforms and European regulators.
“If the Commission opens proceedings against Meta, and in the case already opened against X it concludes that the measures are insufficient and imposes a sanction, the two will join forces to appeal the sanction before the Court of Justice of the European Union or even the European Court of Human Rights, since this concerns freedom of expression,” he predicts. A chain of events he would consider positive: “I would love it, because then we would have a ruling from the European Court of Human Rights on an issue as delicate as freedom of expression and its limits on social networks.”
—
Correction, January 10, 2025: A previous version of this article reported that Meta had already informed the European Commission that it is preparing to change its fact-checking system in the European Union as well. However, the Community Executive clarified this Friday that this communication has not yet taken place.