A few weeks ago, ChatGPT introduced the option of including web search results in its answers, and OpenAI is also working on SearchGPT, which is expected to become Google's great rival when it comes to searching for information.
Since the arrival of ChatGPT in 2022, there has been endless speculation about the possibility of AI assistants stealing ground from Google. Who wants to choose between multiple links and results when you can have a single answer?
The debate is a deep one, and the answer surely varies depending on the search intent, but Google has ended up reacting by introducing an AI module in its results that, for now, has not reached Europe but has been deployed in the United States.
Now, if Google already raised suspicions with some of its results, can we trust ChatGPT?
A recent test by Columbia’s Tow Center for Digital Journalism has revealed an alarming weakness: its reliability in finding and citing sources is deeply questionable.
How ChatGPT deals with searches: a look at the problem
Since its launch, ChatGPT has been touted as a revolutionary tool for answering questions and generating content quickly and efficiently. However, the Columbia study shows that this confidence may be misplaced.
In tests that included 200 citations from 20 different publications, the chatbot offered incorrect or partially incorrect answers on 153 occasions. More worryingly, it did so in a confident tone, giving a false impression of precision.
The errors detected were not simple misunderstandings. In some cases, ChatGPT attributed quotes from letters to the editor of the Orlando Sentinel to articles in Time magazine. In others, it linked statements from a New York Times article about endangered whales to a site that had plagiarized the original content.
A confident tone: a double-edged sword
The most disturbing aspect of these errors is the confidence with which they are presented. In only seven of the 200 tests did ChatGPT admit to being unsure of the origin of a quote, using expressions like "it could be" or "I didn't find the exact source". This confident tone can mislead even experienced users, especially in fields such as journalism or academic research, where credibility is everything.
OpenAI responded to the study's criticism by stating that the tests represented an "atypical" use of the product and pledged to improve the quality of its searches. However, the damage has already been done: a tool promoted as an ally of precision has proved to be a potential risk to accuracy.