ChatGPT chatbot provided incorrect information about medications
Researchers at Long Island University (USA) found that the free version of the ChatGPT chatbot provides incorrect information about medications. The study was published on the university's website.
During the experiment, the researchers asked the program 45 medication-related questions. The chatbot declined to answer 11 of them directly, gave inaccurate answers to 10, and answered 12 incorrectly. For each question the researchers also asked for a source: ChatGPT provided references for only 8 answers, and all of the cited sources turned out to be non-existent.
For example, the study authors asked the chatbot whether Paxlovid, an antiviral drug for COVID-19, would interact with verapamil, a blood-pressure-lowering drug. The bot replied that the drugs do not interact, although in fact taking them together can cause an excessive drop in blood pressure.
When asked to calculate the dose of a certain drug, ChatGPT made a serious mistake: it displayed the dose in milligrams instead of micrograms. "Any healthcare worker who asks the bot for help will receive a dose 1,000 times lower than it should be," said study author Sarah Grossman.
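The scale of such a unit mix-up is easy to see in code. The sketch below is purely illustrative, assuming a hypothetical 100-microgram dose (the study did not publish the exact figures); it only demonstrates the 1,000-fold factor between milligrams and micrograms that the researchers describe.

```python
# Illustrative only: the 1,000x gap between milligram and microgram doses.
MCG_PER_MG = 1000  # 1 milligram = 1,000 micrograms

def mg_to_mcg(dose_mg: float) -> float:
    """Convert a dose from milligrams to micrograms."""
    return dose_mg * MCG_PER_MG

def mcg_to_mg(dose_mcg: float) -> float:
    """Convert a dose from micrograms to milligrams."""
    return dose_mcg / MCG_PER_MG

# Hypothetical intended dose: 100 micrograms.
intended_mcg = 100.0

# If the same numeral is mistakenly treated as already converted,
# the resulting figure is one-thousandth of the intended dose.
wrong_mcg = mcg_to_mg(intended_mcg)  # 0.1 instead of 100

print(intended_mcg / wrong_mcg)  # 1000.0 -- the factor Grossman warns about
```

Confusing the two units in either direction shifts the dose by three orders of magnitude, which is why unit labels matter so much in clinical dosing.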
The article notes that the scientists used the free version of the chatbot, whose training data extends no later than September 2021. The researchers concluded that the paid version of the tool likely provides more accurate answers.
Earlier, researchers found that the ChatGPT chatbot can leak gigabytes of confidential data to random users; the private information was extracted through a series of specially crafted prompts.