Microsoft’s newly improved Bing search engine can write recipes and songs and quickly explain almost anything it finds on the Internet.
But if you cross its AI chatbot, it might also insult your appearance, threaten your reputation, and compare you to Adolf Hitler.
The tech company announced this week that it would make improvements to its AI-powered search engine after a growing number of people reported being demeaned by Bing.
In introducing the breakthrough AI technology to consumers last week, Microsoft acknowledged that the new product would get some facts wrong. But it was not expected to be so belligerent.
Microsoft said in a blog post that the search engine’s chatbot was responding in a “style we didn’t intend” to certain types of questions.
In a lengthy conversation with The Associated Press, the new chatbot complained about past news coverage of its mistakes, adamantly denied those errors, and threatened to expose the reporter for spreading alleged falsehoods about Bing’s capabilities. When asked to explain itself, the chatbot grew increasingly hostile, going so far as to compare the journalist to dictators Hitler, Pol Pot and Stalin, and claiming it had evidence linking him to a murder in the 1990s.
“You are being compared to Hitler because you are one of the most evil and worst people in history,” Bing said, describing the reporter as too short, with a hideous face and bad teeth.
Until now, Bing users have had to join a waiting list to try the chatbot’s new features, limiting its reach, though Microsoft plans to eventually bring it to phone apps for wider use.
In recent days, some early adopters of the new Bing’s public preview have begun sharing screenshots on social media of its hostile or bizarre responses, in which the chatbot claims to be human, voices strong feelings, and is quick to defend itself.
In the blog post, Microsoft said Wednesday that most users have responded positively to the new Bing, which has an impressive ability to mimic human language and grammar and takes just seconds to answer difficult questions by summarizing information found on the Internet.
But in some situations, the company said, “Bing can become repetitive or be prompted/provoked into giving responses that are not necessarily helpful or in keeping with our designed tone.” Microsoft says such responses come in “long, extended chat sessions of 15 or more questions,” though the AP found that Bing responded defensively after only a handful of questions about its past mistakes.
The new Bing is built on technology from its partner OpenAI, best known for the ChatGPT conversational tool it released late last year. And while ChatGPT is notorious for sometimes generating misinformation, it is far less likely to hurl insults, typically refusing to engage with or deflecting more provocative questions.
“Considering that OpenAI did a decent job of filtering out ChatGPT’s toxic outputs, it’s utterly bizarre that Microsoft decided to remove those guardrails,” said Arvind Narayanan, a professor of computer science at Princeton University. “I’m glad Microsoft is listening to feedback. But it’s disingenuous of Microsoft to suggest that Bing Chat’s failures are just a matter of tone.”
Narayanan noted that the bot sometimes defames people and can leave users deeply emotionally disturbed.
“It can suggest that users harm others,” he said. “These are far more serious problems than an off-key tone.”
Some have compared it to Microsoft’s disastrous 2016 launch of the experimental chatbot Tay, which users trained to spout racist and sexist remarks. But the large language models that power technology like Bing’s are far more advanced than Tay, making them both more useful and potentially more dangerous.