The arrival of 85 asylum seekers fleeing the war in Mali in the Sevillian town of Alcalá de Guadaíra (population 75,000) has unleashed a campaign of hatred, insults and incitement to violence on social media this week. “She should be hanged,” one user posted, among dozens of derogatory, sexist and threatening messages aimed at the mayor, the socialist Ana Isabel Jiménez, who has asked residents for “responsibility.” Xenophobia, built on lies and misrepresentations, feeds channels hungry for traffic, controversy, data and the money they bring. Two studies are trying to put up barriers against this avalanche of digital mud: one, from the European Commission’s Joint Research Centre (JRC), finds that debunking falsehoods is more effective than trying to preempt them; the other, published this Thursday in Science, argues that dialogue with an artificial intelligence (AI) system can help combat disinformation and manipulation. Does immigration increase crime? The AI replies: “No, on the contrary.”
The specially trained artificial intelligence chatbot, DebunkBot.com, a tool developed by researchers at MIT (Massachusetts Institute of Technology) and Cornell University, is already operational and can respond in Spanish, even though its interface is in English. When asked about the relationship between migration and crime, it first responds with empathy (it appreciates the question and acknowledges that it is a topical issue) and then immediately “offers information that could provide a different point of view.” This is a summary of its response:
“Multiple studies [the Spanish version cites the Journal of Ethnicity in Criminal Justice] have consistently shown that immigrants are less likely to commit crimes than native-born citizens. For example, the Cato Institute, which conducts nonpartisan research, found that both legal [documented] and illegal [undocumented] immigrants have much lower incarceration rates than native-born Americans.”
And it adds: “Furthermore, FBI crime data indicate that areas with a higher number of immigrants often experience a decline or stabilization in crime rates, rather than an increase. This may seem counterintuitive at first, but one possible explanation is that immigrants often contribute to economic growth, start new businesses, and increase the vitality of communities, which can lead to lower crime rates.”
The dialogue is longer and more complex, but the machine does not falter and answers each prejudice with data. Hoaxes, falsehoods that spread regardless of whether there is an intent to deceive (misinformation), and deliberately biased information designed to manipulate (disinformation) are, according to the World Economic Forum, among the biggest global threats. Microsoft co-founder Bill Gates, who sees young people as particularly vulnerable, thinks so too, as does the panel of experts from dozens of universities and institutions that signed the report on the ethics of artificial intelligence (AI) for Google DeepMind, which warns that this technology can be a weapon that amplifies the creation and dissemination of lies.
However, Thomas Costello, a psychology professor at American University in Washington and a researcher at MIT, believes that personalized conversations with an AI chatbot can combat hoaxes and conspiracy theories (the type of disinformation that attributes events to the tactics of a powerful, malevolent group pursuing a malicious, hidden objective) more effectively than people can. “AI models have access to a ton of information on various topics, they have been trained and, therefore, they have the ability to counter with facts the particular theories that people believe,” he says, rejecting the idea that widespread falsehoods are “impervious to change.”
“In contrast to the pessimistic view, a relatively brief conversation with a generative AI model can produce a significant and robust decrease in beliefs, even among people with deeply held convictions,” he says.
According to the study, up to 50% of the US population has come to believe some of these falsehoods, despite the evidence, “due to socio-psychological processes” that satisfy and reinforce their prejudices, as well as the need to maintain membership in a group of like-minded people.
The results of the experiment, which is still ongoing, showed a 20% reduction in erroneous beliefs, and the benefit of the dialogue lasted at least two months after the conversation, across all the falsehoods analyzed and all demographic categories, according to David Rand, a cognitive science researcher at MIT and co-author of the study.
“We found that the chat made people less conspiratorial in general, and also increased their intentions to ignore or block social media accounts, stop sharing conspiracy theories, or avoid dialogue with people who espouse them. It works,” Rand notes.
Gordon Pennycook, a professor of psychology at Cornell University and also a co-author of the research, agrees, although he admits that there is still work to be done: “We can use these tools [AI] to help make things better, but we need to really understand the underlying psychology.”
A total of 2,190 people took part in the chat sessions, and an independent fact-checker confirmed that 99.2% of the automated responses were “true,” compared to 0.8% classified as “misleading.” None were considered “false” or showed a progressive or conservative bias.
Bence Bago, Professor of Social and Behavioural Sciences at Tilburg University (Netherlands) and Jean-François Bonnefon, Head of the AI and Society Programme at the Toulouse School of Economics (France), defend the proposal (in which they were not involved) in a joint evaluation: “For better or worse, artificial intelligence is set to profoundly change our culture. Although it has been widely criticised as a force multiplier for disinformation, the study demonstrates a potential positive application of the persuasive power of generative AI.”
AI as a threat
This “persuasive” ability contrasts with the threats that researchers involved in the ethics report for Google see in developments powered by large language models and intended for general use. According to the authors, “advanced AI assistants pose four main risks to the information ecosystem. First, they can make users more susceptible to misinformation if people develop trusting relationships with these systems and turn to them uncritically as reliable sources of information. Second, they can provide ideologically biased or partial information to users in an effort to align with users’ expectations and reinforce specific ideologies and biases, which in turn will compromise healthy political debate. Third, AI assistants can erode society’s trust in shared knowledge by contributing to the dissemination of large volumes of information that seems plausible, but of low quality. Finally, they can facilitate hyper-targeted disinformation campaigns by offering novel and covert ways for propagandists to manipulate public opinion.”
This group of experts, led by Google DeepMind researcher Nahema Marchal, proposes several solutions. On the technical side, they include limiting the functionalities of AI assistants, developing robust mechanisms for detecting falsehoods, such as the recently created DebunkBot, and promoting results based on “critical thinking” and “verified facts.”
In the political arena, the group suggests restricting applications that violate ethics, implementing transparency mechanisms and developing educational formulas.
Better to debunk
Along the same lines as the chatbot against fake news, a group of researchers from the Joint Research Centre (JRC) of the European Commission has recommended another line of action after testing “the relative effectiveness of different types of interventions to counteract disinformation.”
A study published in Scientific Reports presents the results of an experiment with 5,228 participants from Germany, Greece, Ireland and Poland who were exposed to misinformation about climate change or Covid. One group received information “preemptively” (a prebunk), before encountering the false information, and was warned of “the deceptive strategies commonly used.” Other participants received “a debunking intervention [of the lie]” afterwards, once the falsehood had already spread.
The findings confirm vulnerability to misinformation and show that “debunks are slightly more effective than prebunks,” the study concludes.
Revealing the source of the interventions did not significantly affect their overall effectiveness, but it was found that “debunks with revealed sources [in this case, the European Commission was identified as the guarantor of veracity] were less effective” at correcting misinformation among those participants “with low levels of trust in the European Union.”