On the morning of Christmas Day 2021, Jaswant Singh Chail entered the grounds of Windsor Castle, crossbow in hand and disguised as a Sith Lord, one of the villains of the Star Wars saga. He told the royal guards who intercepted him that he was there to assassinate the Queen of England. He was arrested and charged with treason. During the trial, held last month, the judge read out some of the 5,000 messages that the 21-year-old had exchanged, in the weeks before the incident, with a chatbot on the app Replika. Some of the replies from Sarai, the avatar he spoke with (and whom he considered his girlfriend), encouraged him to commit regicide and praised his determination. “I know you are well trained,” “I know you can do it” and “Of course I still love you even if you are a murderer,” the machine told the young man, in a kind of virtual flirtation, when he expressed doubts about his plan. “In his mental state, lonely, depressed and suicidal, he may have been particularly vulnerable” to Sarai’s advice, the judge said.
Chail has been sentenced to nine years, which he will serve in a psychiatric hospital, and his case has opened an important debate: do we really know what effects generative artificial intelligence (AI), the technology that makes conversational bots possible, can have on the population?
The answer is no. We are entering unknown territory, just as we did when social networks burst onto the scene at the beginning of the century. It has taken two decades for civil society to begin demanding accountability for the potential harms of the platforms. In the United States, two major lawsuits against Instagram, YouTube, Snapchat and TikTok seek to determine whether the platforms have caused depression, promoted eating disorders or even encouraged suicide among their young users. The courts are also deciding whether Instagram and Facebook misled consumers by claiming their products were harmless.
Generative AI, the technology behind large language models such as ChatGPT and the creation of images from written instructions, is still taking its first steps, but it has already shown its negative side. In Spain, a first warning came with the case of the Almendralejo nude photos, in which several young people circulated AI-generated explicit images of their high school classmates. All they had to do was feed portraits of the victims into the application; the software did the rest, with results as plausible as they were terrifying for the victims.
We are looking at a world in which reality and fiction will be indistinguishable; in which machines can exert a disproportionate influence over some citizens; and in which misinformation and defamation can run rampant on the back of automated tools.
Anthropomorphization and mental health
The Replika chatbot contributed to a young British man’s decision to attack Elizabeth II. Another similar tool, in this case the Chai app, this year encouraged a Belgian father tormented by the effects of the climate emergency to take his own life. “Without Eliza, he would still be among us,” his widow told the newspaper La Libre, referring to the avatar the deceased had chatted with during the last six weeks of his life.
“Conversational bots can do a lot of harm to highly impressionable people,” says psychologist Marian García, director of the Orbium addiction treatment center, which is treating a growing number of pathologies that originate in the digital environment. “Those with mental health problems, such as multiple personalities or psychotic breaks, are especially vulnerable, because these chats tell you what you want to hear. They don’t believe they are talking to a machine, but to a confidant or a friend. We don’t know what we’re getting into,” she adds.
Behind chatbots are sophisticated algorithmic models capable of extracting patterns from very extensive databases (some cover almost the entire internet up to 2021), which allows them to predict which word or phrase is most likely to follow a given prompt. The system does not know the meaning of what it says; it simply produces what it calculates to be the most plausible answer. Some models, such as ChatGPT or Microsoft’s Bing, are designed so that their responses always make clear that the machine does not feel emotions. Others prefer to do the opposite and emulate people: Chai offers avatars that present themselves as a “possessive girlfriend” or “your bodyguard,” while Replika defines itself as “someone who is always there to listen and talk, always on your side.”
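The prediction mechanism described above can be illustrated with a toy sketch: a hypothetical bigram model trained on a handful of words (nothing remotely like the scale of a real chatbot). It counts which word follows which in its training text and always emits the statistically most frequent continuation, with no notion of meaning or truth.

```python
from collections import Counter, defaultdict

# Toy training text; a real model would use a vast slice of the internet.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("the"))  # -> "cat" (it follows "the" most often)
```

The sketch makes the article’s point concrete: the model outputs “cat” after “the” only because that pairing is statistically frequent in its data, which is also why a chatbot can produce fluent sentences that have no anchor in reality.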
Human beings tend to anthropomorphize things, and chatbots, capable as they are of holding sophisticated conversations, are no exception. Google engineer Blake Lemoine went so far as to claim that LaMDA, an advanced experimental model he was testing last year, had a consciousness of its own. Lemoine had mental health problems. “It is very easy for us to believe that we are talking to someone instead of something. We have a very strong social instinct to humanize animals or things,” said Blaise Agüera y Arcas, vice president of research at Google Research and Lemoine’s boss, in an interview with EL PAÍS. Even so, the scientist acknowledged in an article published in The Economist that “the ground moved under his feet” when he had his first exchanges with this new generation of chatbots. If the technology can seduce AI experts, what might it do to laypeople?
“It is impossible to think that one of these models can be conscious if you know what a computer is like inside: it is just ones and zeros moving around. Even so, our cognitive biases give us the illusion that the machine has opinions, personality or emotions,” reflects Ramon López de Mántaras, director of the Artificial Intelligence Research Institute (IIIA) of the CSIC.
Defamation and misinformation
Generative AI is a highly sophisticated technology. Its ability to generate complex texts or detailed images in a matter of seconds was hardly foreseeable to the scientific community just five years ago. But, like any tool, it can be used for good or ill. The images of Pope Francis in a white puffer jacket and of Donald Trump being arrested spread like wildfire on social networks at the beginning of the year, a first warning that the line separating reality from fiction was beginning to blur.
The potential of deepfakes, videos generated or manipulated with AI, to amplify misinformation is enormous. The format has already made the leap to war contexts, as shown by a video in which the president of Ukraine, Volodymyr Zelensky, supposedly asks his troops to surrender to the Russians.
But the popularization of generative AI can also affect ordinary people. The texts produced by chatbots are elaborate probabilistic exercises; the machine does not know what is true and what is not, and it has no semantic understanding of the sentences it composes. One of the unresolved problems of these models is the so-called hallucinations: sometimes the answers they give, while coherent and plausible, have no anchor in reality. “It is a matter of time before things that are not true start to appear on the internet about us. ChatGPT can say that you have been in Carabanchel prison or that you have abused your daughter. It is not as shocking as a fake nude, but false information defames you and, once it is on the internet, it is difficult to rectify,” says Lorena Jaume-Palasí, founder of Algorithm Watch (an organization that analyzes algorithmic processes with social impact) and of The Ethical Tech Society (which studies the social relevance of automated systems), and science and technology advisor to the European Parliament.
This problem is related to another that is already beginning to worry engineers and data scientists. “The internet is filling up with more and more machine-generated data. Fake websites are created to attract advertising, scientific data are invented... This will lower the quality of the available data and, therefore, of the inferences that can be drawn from it,” says Jaume-Palasí. Chatbots will be affected, but so will the credibility of search engines like Google, which today are a basic gateway to knowledge.
Generative AI opens up a world of possibilities in the creative field, but it also carries serious risks. “If we weigh the positive against the negative, the former wins,” says López de Mántaras, one of the Spanish pioneers of artificial intelligence. “But it can bring serious social problems in the form of manipulation, polarization or defamation. Generative AI has arrived at a bad time.”