What happens when a technology giant decides to leave the fight against misinformation in the hands of its users? Meta is about to find out. Until now, content suspected of spreading hoaxes on Instagram, Facebook, WhatsApp or Threads, the social networks and services of Mark Zuckerberg's company, was reviewed by international organizations certified by the International Fact-Checking Network (IFCN) that "identify, review and rate misinformation that is spread" on the platforms.
From now on, that will no longer be the case: in the United States, for now, users themselves will be the ones who can add notes to give context to suspicious posts. It is a system identical to that of X, the old Twitter, and one that has never stopped generating problems.
But what does all this mean? Zuckerberg announced the change in an apparently improvised video in which he spoke of giving power back to users, but several voices have already pointed to political pressure and see the move as a nod to Donald Trump's incoming administration.
The fact-checking program had been in place since 2016, when Meta came under fire for the spread of disinformation on its platforms during the US presidential campaign.
How the new system will work and why many see it as fraught with risk
Polarization, viral hoaxes and information manipulation have become a constant on social networks. Now, by opting for a “collaborative” system similar to the one used by X, Meta is exposing itself to an uncertain future.
In the video posted by Mark Zuckerberg, the CEO of Meta justified the measure as an effort to defend freedom of expression, criticizing fact-checkers for being "too politicized." However, for experts like Angie Drobnic Holan, director of the International Fact-Checking Network (IFCN), the decision is a serious blow to users: "The program reduced the virality of misleading content and conspiracy theories. Now, misinformation will be much more difficult to avoid."
Under the previous program, Meta worked with fact-checkers certified by the IFCN, who evaluated the accuracy of viral content and alerted users to possible falsehoods. Although the final decision to remove or limit posts rested with Meta, the process acted as a "brake" on misinformation, according to Holan.
Why is Meta changing course?
Analysts point out that the change could be related to the recent election of Donald Trump as president of the United States and the arrival of figures close to him in key positions at Meta. From the appointment of Joel Kaplan, a prominent Republican lobbyist, to the addition of Dana White (straight from the UFC) to the board of directors, the company's leadership seems to be seeking a political alignment that could benefit the incoming president.
Trump, for his part, has praised the change, suggesting it is a response to his earlier threats against the company. This political context fuels the suspicion that scrapping the verification program owes more to strategic interests than to any commitment to freedom of expression.
The consequences that many foresee: more hate and misinformation online
Organizations such as the Center for Countering Digital Hate (CCDH) and the Union of Concerned Scientists have warned about the potential impact of this decision. "Meta is opening the door to an uncontrolled proliferation of harmful content, which we already know has real consequences in offline life," says Imran Ahmed, CEO of the CCDH.
Among the greatest fears is an increase in misinformation on crucial topics such as climate change, public health and the rights of marginalized communities. Nicole Sugerman, of the organization Kairos, told The Verge that the measure could facilitate attacks on vulnerable communities, such as trans, immigrant or racialized people.
Inspired by Musk: the “Community Notes” model
Meta is not the first company to delegate content moderation to its users. Twitter, now X, expanded its "Community Notes" system under Elon Musk's leadership. However, the model has been criticized as ineffective and for allowing a significant rise in hate speech after cuts to moderation teams.
The parallel with Meta is evident. By taking a similar approach, Zuckerberg's company could run into the same problems, including greater polarization and eroding trust in its platforms.
In a digital ecosystem where truth seems increasingly relative, Meta's abandonment of fact-checking leaves a question hanging in the air: can social networks evade their responsibility for the spread of misinformation? The move, though justified in the name of freedom of expression, looks more like an attempt to satisfy political interests than a genuine improvement for users.
The consequences remain to be seen, but experts agree they could be devastating, not only for the quality of debate on the internet but also for social cohesion and safety in the real world.