When the writer and journalist Juan José Millás had a conversation with ChatGPT in September, he pretended to have a psychoanalysis session with the tool. He wanted to find out whether, in the spirit of the Turing test, the chatbot could talk to him like a real person, specifically like a psychoanalyst, rather than like a computer. The journalist described his dreams and fears, expecting the artificial intelligence to guide him through therapy, but the tool kept telling him, among other things, that this was an imaginary situation and clarifying that it was a language model. Millás called his virtual psychoanalyst narrow-minded and forgetful, and finally told it that it had not passed the test.
In conversations like Millás’s, the prior beliefs that someone holds about an artificial intelligence (AI) agent, like ChatGPT, affect the conversation itself, as well as the perceived reliability, empathy, and effectiveness of the tool. This is the finding of researchers from two American institutions, the Massachusetts Institute of Technology (MIT) and Arizona State University, in a recent study published in the journal Nature Machine Intelligence. “We have shown that AI is in the eye of the beholder. When we describe to users what an AI agent is, it not only changes their mental model, it also changes their behavior. And since the tool responds to the user, when the person changes their behavior, that also changes the tool’s behavior,” says Pat Pataranutaporn, a graduate student in the Fluid Interfaces group at the MIT Media Lab and co-author of the study.
“Many people think that AI is just an engineering problem, but its success is also a human-factors problem,” says Pattie Maes, author of the study and professor at MIT. The way we talk about AI can have a huge impact on the effectiveness of these systems. “We attribute human traits and qualities to AI, making it appear more human or individual than it really is,” adds Ruby Liu.
The researchers gathered 310 participants and randomly divided them into three groups, each with different background information about AI. They had to talk about mental health with an AI agent for about 30 minutes, determine if they would recommend it to a friend, and rate it. The first group was told that the agent had no intention in the conversation, the second was told that the artificial intelligence had benevolent intentions and was concerned about their well-being, and the third that it had malicious intentions and would try to deceive them.
Half of the participants in each group spoke to an AI agent based on the GPT-3 generative language model, a deep learning model that can generate human-like text. The other half did so with an implementation of the chatbot ELIZA, a less sophisticated, rule-based natural language processing program developed at MIT in the 1960s.
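The rule-based approach behind ELIZA can be illustrated with a short sketch: each rule pairs a text pattern with a response template, and fragments of the user's own words are echoed back with first- and second-person pronouns "reflected." This is a minimal illustrative version in Python, not the original 1960s MIT program nor the implementation used in the study; the specific rules and word lists here are invented for the example.

```python
import re

# Illustrative ELIZA-style rules (assumed for this sketch, not the study's).
# Pronouns in captured text are "reflected" so the bot can echo the user.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# Each rule: (compiled regex, response template). First match wins;
# the final catch-all guarantees the bot always answers something.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
    (re.compile(r".*"), "Please go on."),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(message: str) -> str:
    """Return the first matching rule's template, filled with reflected text."""
    for pattern, template in RULES:
        match = pattern.match(message.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."
```

Unlike GPT-3, which generates text from learned statistical patterns, a program like this never produces anything outside its hand-written templates, which is why the study could contrast the two as more and less sophisticated conversational partners.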
The results of the research revealed that predisposition towards the tool was decisive: 88% of people who received positive information and 79% of those who received neutral information believed that the AI was empathetic or neutral, respectively. Ángel Delgado, an AI engineer at Paradigma Digital, believes the positive majority is also due to the use of GPT-3, which he considers the first to pass the Turing test: “It consists of letting a person interact with the AI without telling them whether it is an AI or not, to see if they can guess. GPT-3 is the first language model that has managed to give such good results that it can pass for a human.”
People who were told the tool was caring tended to speak to it in a more positive way, which made the agent’s responses more positive as well. Ramón López de Mántaras, director of the CSIC Artificial Intelligence Research Institute, explains that the more you talk to the tool, the more it learns: “The interlocutor is teaching the artificial intelligence. You can correct, confirm, and rate its responses,” he adds.
From fear of ‘Terminator’ to lack of criticism
Negative priming statements (that is, the information given to someone right before they interact with the AI agent) had the opposite effect: only 44% of participants who received derogatory information about the tool trusted it. “With the negative statements, instead of preparing them to believe something, we were preparing them to form their own opinion. If you tell someone to be suspicious of something, they’re likely to become even more suspicious in general,” says Ruby Liu.
A key driver of negative thinking about AI is the influence of science fiction, explains Pattie Maes: “Movies like ‘Terminator’ and ‘The Matrix’ describe scenarios in which AI becomes self-aware and causes the downfall of humanity. These fictional accounts contribute to the fear that AI could take over and surpass human intelligence, which would pose a threat to our existence.”
Prior beliefs about language models can have such a strong impact that people may come to believe an agent is more capable than it is, trust it too much, or follow incorrect advice, according to the study’s results. López de Mántaras puts it bluntly: “The tool you are interacting with is not an intelligent person. People believe that the machine is intelligent, and they pay attention to what it says without any critical thinking.” In his view, “we are becoming less critical and less capable of thinking.”
Experts agree that we must be aware of how artificial intelligence works and understand that it is something programmed. “We should prepare people to be more careful and to understand that AI agents can hallucinate and are biased. How we talk about AI systems will have a big effect on how people respond to them,” says Maes.