Shortly after ChatGPT debuted last year, researchers tested what the artificial intelligence chatbot would write after being asked questions laced with conspiracy theories and false narratives.
The results, written in the form of news articles, essays and television scripts, were so concerning that the researchers did not mince words.
“This tool is going to be the most powerful tool for spreading misinformation that has ever existed on the Internet,” said Gordon Crovitz, co-CEO of NewsGuard, a company that tracks misinformation online and conducted the experiment last month. “Creating a new false narrative can now be done on a dramatic scale and much more often; it’s like having AI agents contributing to disinformation.”
Disinformation is hard to counter even when humans create it manually. The researchers predict that generative technology could make disinformation cheaper and easier to produce for an even larger number of conspiracy theorists and spreaders of disinformation.
Personalized, real-time chatbots could share conspiracy theories in increasingly credible and persuasive ways, the researchers say, smoothing out human errors like poor syntax and mistranslations and moving beyond easily detectable copy-paste jobs. And they say that no available mitigation tactic can effectively combat the problem.
The predecessors of ChatGPT, which was created by the San Francisco artificial intelligence company OpenAI, have been used for years to pepper online forums and social media platforms with (often grammatically suspect) comments and spam. Microsoft had to shut down its Tay chatbot within 24 hours of its Twitter debut in 2016 after trolls taught it to spew racist and xenophobic language.
ChatGPT is far more powerful and sophisticated. Supplied with disinformation-laden questions, it can produce clear and convincing variations on that content en masse within seconds, without revealing its sources. On Tuesday, Microsoft and OpenAI introduced a new web browser and search engine that can use chatbot technology to plan vacations, translate texts or conduct research.
OpenAI researchers have long been nervous about chatbots falling into nefarious hands, writing in a 2019 paper of their “concern that its capabilities could lower the costs of disinformation campaigns” and aid malicious actors pursuing “monetary gain, a particular political agenda, and/or a desire to create chaos or confusion.”