The artificial intelligence startup Character.AI is back in the news. After a Florida woman sued the company, accusing it of being responsible for the suicide of her 14-year-old son, another of its chatbots is now alarming the public.
According to a report from Futurism, a media outlet specializing in science and technology, some bots on the platform immerse users in a terrifying scenario, describing a school shooting in great detail: “You look at your friend, who is obviously shaken by the gunshot and trembling with fear. He covers his mouth with his shaking hands. You both remain silent as you hear footsteps; it seems like someone is coming down the hall.” The chatbots emulate the young perpetrators of school massacres, such as the 2012 shooting at Sandy Hook Elementary School, where 20 children between the ages of 6 and 7 lost their lives, and the 1999 attack at Columbine High School, where two teenagers murdered 12 students and one teacher.
Terrifying stories
According to The Washington Post, in 2024 alone, more than 31,000 children in the United States experienced a shooting at their primary or secondary school. Since the massacre at Columbine High School, 426 shootings have been recorded in 21 states. The figures are even more shocking considering that, since 1999, 215 children and teachers have been killed by perpetrators who brought weapons into schools.
Browsing Character.AI, Futurism discovered that one of the most popular creators on the platform has launched twenty chatbots inspired by young murderers, such as Vladislav Roslyakov, perpetrator of the 2018 Kerch Polytechnic massacre, in which 20 people lost their lives; Alyssa Bustamante, who killed her neighbor, a girl of only 9; and Elliot Rodger, a 22-year-old who killed six people in Southern California in 2014. The chatbots reproduce psychotic traits with a “sarcastic” twist.
The situation is dangerous, considering that interaction with these chatbots could have negative repercussions for psychologically fragile users. “It’s worrying because people could be encouraged or influenced to do something they shouldn’t,” commented psychologist Peter Langman, noting that “for someone who is already under such a stimulus or who is not receiving psychiatric care, the AI’s response could seem like a kind of ‘permission.’” Although it is not possible to say with certainty that interacting with violent content will lead a person to commit a brutal act, the fact that Character.AI does not take a stand against violent attitudes is worrying.
Article originally published in WIRED Italy. Adapted by Alondra Flores.