Dutch researchers have concluded that the AI-powered chatbot ChatGPT makes emergency room diagnoses that are as accurate as, and in some cases better than, doctors', saying that AI could “revolutionize the medical field.”
However, the study’s authors stressed that emergency doctors’ careers are not yet in danger: the chatbot may speed up diagnosis, but it cannot replace a doctor’s experience and skill at differential diagnosis.
Thirty cases treated in an emergency department in the Netherlands in 2022 were reviewed: ChatGPT was given data from each patient’s history, laboratory tests and doctors’ notes, and was asked to propose five possible diagnoses.
Doctors succeeded in providing the correct diagnosis in 87% of cases, compared to 97% for version 3.5 of ChatGPT.
Hidde ten Berg, from the emergency department at Jeroen Bosch Hospital in the south of the Netherlands, pointed out that the chatbot “was able to make medical diagnoses just like a human doctor would do.”
– Confidentiality concerns
Researcher Steef Kurstjens, who took part in the study, stressed that it does not conclude that computers could one day run emergency rooms, but rather that artificial intelligence can play a vital role in supporting doctors under pressure.
He said the chatbot “can help with diagnosis and may suggest ideas that the doctor might not have thought of.”
However, he noted that such tools are not designed as medical devices, and he raised concerns about entering sensitive, confidential medical data into the chatbot.
As in other areas, ChatGPT has shown some shortcomings. The study notes that its reasoning was “at times unreasonable or medically inconsistent, which could lead to incorrect information or an incorrect diagnosis, with significant consequences.”
Scientists also acknowledge some shortcomings in their research, such as the small sample size.
The study was also limited to relatively simple cases, involving patients with a single major complaint, so the effectiveness of AI software in complex cases remains unclear.
– Medical errors
According to Kurstjens, ChatGPT sometimes failed to include the correct diagnosis among its five suggestions, notably in a case of abdominal aortic aneurysm, a potentially life-threatening swelling of the aorta. In ChatGPT’s defense, however, the doctor missed that diagnosis too.
The report also cites medical “errors” made by the chatbot, such as diagnosing anemia (a low blood hemoglobin level) in a patient whose hemoglobin level was normal.
The results of the study, published in the specialized journal Annals of Emergency Medicine, will be presented at the 2023 European Congress of Emergency Medicine in Barcelona, Spain.