Akihiko Kondo, who turned 41 on the last day of May, married a hologram of his favorite virtual singer, Hatsune Miku, in a symbolic ceremony six years ago. Just two anniversaries later, Gatebox, the company behind the avatar, discontinued the service, and the young administrator at a Japanese public school was widowed by the death of a computer program. Kondo’s story, extravagant as it may seem, is nothing more than a preview of a reality with unpredictable consequences: the replacement of real personal relationships with robotic agents programmed to respond to users according to their expectations. Tony Prescott, researcher and professor of robotics at the University of Sheffield, has presented the book The Psychology of Artificial Intelligence, in which he maintains that AI can be a palliative for loneliness. But with risks, as he and dozens of researchers admit.
It is no coincidence that GPT-4o presented itself with a voice similar to Scarlett Johansson’s. English speakers who had seen the movie Her, written, directed and produced by Spike Jonze and winner of the Oscar for best original screenplay in 2014, took only seconds to associate OpenAI’s new agent (virtual assistant) with the actress, whose voice-based character ends up making the film’s lonely protagonist fall in love.
The latest report from the Ericsson Consumer & Industry Lab finds that “50% of early adopters of artificial intelligence believe that people will simulate their marriages to anticipate changes or even foresee divorce.” An overwhelming 71% of these AI consumers believe this utility will be beneficial.
Replacing human interaction, with all its complexities, with a hybrid relationship between human and machine brings many advantages, but also numerous risks that are more real and immediate than those depicted in some episodes of Black Mirror. “Social robots are designed specifically for personal interactions that involve human emotions and feelings. They can provide benefits, but also cause emotional damage at very basic levels,” warns Matthias Scheutz, director of the Human-Robot Interaction Laboratory at Tufts University (USA).
Akihiko Kondo’s experience encapsulates this complexity and differs from others rooted in artistic projects, such as The Hybrid Couple by Alicia Framis, who stages a marriage with a hologram as a form of reflection, or the case of Meirivone Rocha, a 39-year-old Brazilian woman who boosted her social media following by publicizing her supposed wedding to a doll.
In a BBC interview, Kondo reports having suffered harassment from colleagues, admits that friends made through the internet and gaming remain his “community,” and confesses that he has never had a partner: “I have had some unrequited loves in which I was always rejected, and that made me rule out the possibility of being with someone.” Loneliness, harassment, psychological and technological dependence, limited social skills, involuntary celibacy (a dangerous online movement that is a precursor to sexist violence), artificial satisfaction of needs, virtual companions with the appearance of reality… Kondo’s extravagant story opens the door to an analysis of the virtues and dangers of AI’s interference in personal relationships.
Advantages
The benefits of humanized artificial intelligence are neither few nor simple.
Loneliness. Prescott admits the risks, but highlights one of the main advantages: “In a time when many people describe their lives as lonely, there can be value in having the company of AI as a form of reciprocal social interaction that is stimulating and personalized. Human loneliness is often characterized by a downward spiral in which isolation leads to lower self-esteem, which discourages further interaction with people. AI companionship could help break this cycle by scaffolding self-esteem and helping maintain or improve social skills. If so, relationships with AIs could help people find companionship with both humans and artificial others.”
Care. Joan Claire Tronto, professor of political science at the University of Minnesota, expands the concept of care to everything we do “to maintain, continue and repair our world so that we can live in it as best as possible.” In her work, a key element is “the commitment to meeting the needs of others.” And AI can do this tirelessly. Luis Merino, professor at Pablo de Olavide University, leads work there on social robotics, the discipline aimed at assisting groups of people autonomously with systems able to learn from the emotions of those they serve: “The objective is for robots to understand our intentions and emotions and learn from them.”
Benefit or interest. OpenAI CEO Sam Altman describes his company’s latest model as a “super-competent colleague.” The first term refers to its humanization, the second to the benefits it provides by executing tasks on the user’s behalf. This last advantage provides “individual well-being,” according to Brad Hooker, professor of philosophy at the University of Reading. Such interest is inherent to human interaction: a benefit is not always sought, but it is difficult for a relationship to prosper if the costs constantly exceed the benefits.
Human development. AI can promote attitudes and behaviors that facilitate personal fulfillment and interaction with others. In an evaluation of ChatGPT, Gemini and Llama (Meta), the University of Illinois has shown the importance of this capability. “Agents can help increase, for example, awareness of healthy behaviors, become emotionally committed to changes, and realize how their habits could affect the people around them,” explains Michelle Bak, the researcher who studied the models.
Autonomy. This refers to AI’s potential to provide relevant information so that individuals act and decide according to their own motivations and interests.
Risks
Each of these categories of advantages branches into associated risks. These are some of those highlighted by researchers:
Physical or emotional damage. Anecdotes of early AI models threatening, insulting or encouraging harmful or violent behavior are not new, although they periodically fuel disproportionate and insubstantial reactions. A year ago, the New Zealand supermarket chain Pak ‘n’ Save drew warnings after its menu-planning AI recommended drinks containing chlorine gas and snacks laced with poison and glue. Obviously, no one followed this advice because common sense prevailed, but there may be less obvious and extreme cases.
Amelia Glaese, formerly a researcher at Google DeepMind and now at OpenAI, is looking for methods and systems to avoid these accidents. “We use reinforcement learning from human feedback to (…) make our agent more helpful and harmless and provide evidence from sources that support claims.”
Humanizing robots with empathy and with voice and video tools adds danger by offering a more realistic, immersive interaction that can make users believe they are with a friend or trusted interlocutor. An extreme application is the temptation to maintain a virtual version of a deceased loved one and thus avoid the grieving necessary to go on with life.
Researchers demand that these developments be tested in closed environments (sandboxes) before release, that they be constantly monitored and evaluated, that the range of harms they can cause in different areas be analyzed, and that measures to mitigate them be planned.
Limitation of personal development. “Some users seek relationships with their AI companions that are free of obstacles, free of opinions, preferences and norms that may conflict with their own,” warns a study by half a dozen universities for DeepMind. And, moreover, delivered in flattering language.
Shannon Vallor, a philosopher specializing in the ethics of science and artificial intelligence, warns about the danger of new systems promoting relationships that are “frictionless” but also value-free: “They do not have the mental and moral life that humans have behind our words and actions.”
This type of supposedly ideal relationship, according to these experts, discourages the need to question ourselves and grow personally, while promoting the renunciation of real interaction and generating dependence on machines willing to flatter and to seek short-term satisfaction.
Manipulation. This emotional dependence on a system capable of persuasion opens a direct path to interference in users’ behaviors, interests, preferences, beliefs and values, and in their ability to make free and informed decisions. “The emotions that users feel towards their assistants could be exploited to manipulate them or, taken to the extreme, coerce them into believing, choosing or doing something that they would not otherwise have believed, chosen or done,” the DeepMind document warns.
Material dependence. The end of Akihiko Kondo’s experience and of his virtual marriage to a hologram is a clear example. It was the company responsible for programming and maintaining the system that put an end to the solution the Japanese administrator had found to satisfy certain needs. Developers can create dependency and then discontinue the technology due to market dynamics or regulatory changes, without taking appropriate steps to mitigate the potential harm to the user.