Hoaxes about the DANA flooded social networks from the very first moment, intensifying unease among citizens and helping to viralize deceptive content spread by those who used the tragedy as a breeding ground. Pseudo-journalists, anonymous users and far-right accounts (in what has come to be called the ‘fachosfera’) did not hesitate to impose a fallacious narrative about what was happening in the metropolitan area of Valencia.
Posts with a more emotional tone, often framed as “personal experiences”, spread false news, such as the supposed deaths in the parking garage of the Bonaire shopping center, and elicited different emotions: anger on TikTok and fear on X. This is one of the conclusions of a recent study, Divergent emotional patterns in misinformation on social networks: an analysis of tweets and TikToks about the DANA in Valencia, led by the Universitat Politècnica de València researchers Paolo Rosso and Iván Arcos, together with Ramón Salaverría of the University of Navarra, and published by the Iberian Digital Media Observatory (IBERIFIER), an agency co-funded by the European Commission in collaboration with the European Digital Media Observatory (EDMO).
“We were struck by how people’s emotions were played with in the wake of the disaster,” explains Rosso, also a member of the non-profit foundation ValgrAI (Valencian Graduate School and Research Network of Artificial Intelligence), in a telephone conversation with this outlet. From that starting point, the researchers set out to analyze content on both TikTok and the network controlled by Elon Musk: “We wanted to analyze which linguistic and emotional patterns were used in videos already flagged as disinformation by fact-checkers (Maldita.es, Newtral and EFE Verifica); and, on the other hand, to classify content automatically, to see whether keyword-based samples could retrieve DANA-related content without false information, so that part of these examples could be fed to AI models to check whether they were able to learn to discriminate.”
The UPV researcher stresses that the fact-checkers at these organizations are usually “overwhelmed” by the sheer volume of information circulating on the networks that needs verification, so a few-shot GPT-4o model could help automate content labeling and “detect hoaxes in real time”. “It is an undeniable help in their work. We were interested in seeing to what extent the model was able to label something as a hoax or not. We had low expectations, but it performed better than we expected,” he said.
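As a rough illustration of what few-shot hoax labeling with GPT-4o can look like, here is a minimal sketch assuming the OpenAI Python client; the prompt, example posts and function names are hypothetical and not taken from the study.

```python
# Minimal sketch of few-shot hoax labeling with GPT-4o (illustrative; not the study's code).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FEW_SHOT_EXAMPLES = [
    # (post text, label) pairs previously annotated by fact-checkers (hypothetical examples)
    ("Hundreds of bodies are hidden in the Bonaire parking garage.", "hoax"),
    ("Emergency services ask residents to avoid driving on flooded roads.", "not_hoax"),
]

def label_post(text: str) -> str:
    """Ask the model to classify a post as 'hoax' or 'not_hoax' using a few labeled examples."""
    messages = [{
        "role": "system",
        "content": "You classify social media posts about the Valencia DANA as 'hoax' or 'not_hoax'. Answer with one word.",
    }]
    for example, label in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": example})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": text})

    response = client.chat.completions.create(model="gpt-4o", messages=messages, temperature=0)
    return response.choices[0].message.content.strip()

print(label_post("The government is hiding the real death toll in the shopping center."))
```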
Keywords like Vito Quiles, manipulation and dramatic music
The researchers manually annotated a total of 650 publications to distinguish truthful content from misinformation, combining a qualitative approach (narrative analysis, manual data review and observation of misinformation strategies) with a quantitative one (AI models, statistics, emotion detection and precision metrics).
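A minimal sketch of how automatic labels can be scored against such manual annotations, assuming scikit-learn and tiny hypothetical label lists:

```python
# Minimal sketch of comparing automatic labels against manual annotations (illustrative data).
from sklearn.metrics import classification_report

# Hypothetical gold labels from manual annotation and predictions from an automatic classifier.
gold = ["hoax", "not_hoax", "hoax", "not_hoax", "hoax"]
predicted = ["hoax", "not_hoax", "not_hoax", "not_hoax", "hoax"]

# Precision, recall and F1 per class, the kind of precision metrics the study mentions.
print(classification_report(gold, predicted, labels=["hoax", "not_hoax"]))
```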
The report reinforces what numerous scholars have argued: that appealing to emotions is a common strategy in false narratives. This work adds, however, that it is necessary to take into account how that strategy operates on each platform. In addition, linguistic analysis using the LIWC dictionary showed that verified, rigorous content uses “more articulate and factual” language, whereas misinformation resorts to “manipulation, denials, perceptual words, dramatic music and personal anecdotes”.
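LIWC-style analysis essentially counts how many words in a text fall into psychologically meaningful categories. The real LIWC dictionary is proprietary, so the following minimal sketch uses hypothetical categories and word lists purely to show the mechanics:

```python
# Minimal sketch of LIWC-style category counting (categories and word lists are hypothetical
# stand-ins; the real LIWC dictionary is proprietary).
from collections import Counter
import re

CATEGORY_WORDS = {
    "negation": {"no", "not", "never", "nothing"},
    "perceptual": {"see", "hear", "feel", "watch"},
    "negative_emotion": {"fear", "anger", "panic", "rage"},
}

def category_rates(text: str) -> dict:
    """Return the share of tokens in each category, the kind of signal LIWC-style analysis produces."""
    tokens = re.findall(r"[a-záéíóúñü]+", text.lower())
    counts = Counter()
    for token in tokens:
        for category, words in CATEGORY_WORDS.items():
            if token in words:
                counts[category] += 1
    total = len(tokens) or 1
    return {category: counts[category] / total for category in CATEGORY_WORDS}

print(category_rates("I never feel safe, you can hear the panic and the fear everywhere"))
```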
Regarding the keywords chosen, Rosso explains in the interview that the terms were selected because they had social impact or were likely to be attached to misleading content; Vito Quiles, Alise Pérez and Bonaire were among those used for data extraction. Crawlers (web trackers that navigate the Internet by following links from one page to another and indexing them) were used to extract content containing those words on X and TikTok. Possible sample bias was then evaluated with the Weirdness Index (WI) to check whether the keywords influenced the data selection, comparing the collected corpus with a control sample based on more neutral terms such as “Dana Valencia”.
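The Weirdness Index compares how often a term appears in the keyword-based sample with how often it appears in the neutral control sample: values far above 1 suggest the term is over-represented because of the keyword selection. A minimal sketch, with tiny hypothetical corpora and add-one smoothing as an assumption:

```python
# Minimal sketch of the Weirdness Index (WI): relative frequency of a term in the keyword-based
# sample divided by its relative frequency in the neutral control sample (illustrative corpora).
from collections import Counter
import re

def tokenize(texts):
    return [t for text in texts for t in re.findall(r"[a-záéíóúñü]+", text.lower())]

def weirdness_index(term, sample_tokens, control_tokens):
    sample_counts, control_counts = Counter(sample_tokens), Counter(control_tokens)
    sample_rate = sample_counts[term] / max(len(sample_tokens), 1)
    # Add-one smoothing in the control corpus so unseen terms do not divide by zero.
    control_rate = (control_counts[term] + 1) / (len(control_tokens) + 1)
    return sample_rate / control_rate

sample = tokenize(["bodies hidden in the bonaire parking", "bonaire parking cover up"])
control = tokenize(["dana valencia rainfall warning", "dana valencia road closures"])
print(weirdness_index("bonaire", sample, control))
```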
“Many of the messages, beyond provoking these emotions among citizens, were intended to criticize the actions of the central government more than those of the Generalitat. That caught my attention,” says the ValgrAI researcher. “It is important that people be aware that, in moments of vulnerability and panic, our emotions are taken advantage of,” he adds. The study, published by IBERIFIER, will be presented at ICAART 2025 in Porto, an international conference of artificial intelligence experts that will take place from February 23 to 25, 2025.