Hospitals are incorporating artificial intelligence into their workflows with the aim of improving patient care. Now, a new study has examined whether this new tool actually leads to better medical diagnoses.
The research, carried out by the University of Virginia health system (United States) and published in the journal ‘JAMA Network Open’, found that the use of ChatGPT Plus does not significantly improve physicians’ diagnostic accuracy compared with their usual resources.
In the study, conducted by Dr. Andrew S. Parsons and his colleagues, 50 family medicine, internal medicine, and emergency medicine physicians put ChatGPT Plus to the test: half were randomly assigned to use ChatGPT Plus to diagnose complex cases, while the other half relied on conventional methods such as medical reference sites and Google.
The researchers compared the resulting diagnoses and found that the accuracy of the two groups was similar. However, ChatGPT Plus on its own outperformed both groups, suggesting it remains promising for improving patient care. The researchers conclude, though, that doctors will need more training and experience with this emerging technology to realize its full potential.
For now, the researchers say, ChatGPT is best used as a complement to doctors rather than a substitute for them.
“Our study demonstrates that AI alone can be an effective and powerful tool for diagnosis,” says Parsons, who oversees the teaching of clinical skills to medical students at the University of Virginia School of Medicine and co-directs the Clinical Reasoning Research Collaborative. “We were surprised to find that adding a doctor to the mix reduced diagnostic accuracy, although it improved efficiency. These results probably mean we need formal training on how best to use AI.”
Chatbots based on ‘large language models’, which produce human-like responses, are gaining popularity and have demonstrated an impressive ability to take patient histories, communicate with empathy and even solve complex medical cases, but for now they still require the intervention of a human doctor.
Parsons and his colleagues wanted to determine how the high-tech tool could be used most effectively, so they launched a randomized controlled trial at three leading hospitals: UVA Health, Stanford and Harvard’s Beth Israel Deaconess Medical Center.
Participating physicians made diagnoses for ‘clinical vignettes’ based on real patient care cases. These case studies included details about patient histories, physical examinations, and laboratory test results. The researchers then scored the results and examined how quickly the two groups made their diagnoses.
The average diagnostic accuracy of doctors who used ChatGPT Plus was 76.3%, while that of those who used conventional methods was 73.7%. Members of the ChatGPT Plus group also reached their diagnoses slightly faster overall: 519 seconds versus 565 seconds.
The researchers were surprised by how well ChatGPT Plus performed on its own, achieving an average diagnostic accuracy of more than 92%. They suggest this may be due to the prompts used in the study, and that clinicians would likely benefit from training in how to use prompts effectively.
Alternatively, they say, healthcare organizations could purchase predefined prompts to integrate into clinical workflows and documentation.
The researchers also caution that ChatGPT Plus would likely perform worse in real life, where many other aspects of clinical reasoning come into play, particularly in determining the downstream consequences of diagnoses and treatment decisions.
For this reason, they call for additional studies to evaluate the capabilities of large language models in those areas and are conducting a similar study on management decision-making.
“As AI becomes more integrated into healthcare, it is essential to understand how we can take advantage of these tools to improve both patient care and the doctor’s experience,” says Parsons. “This study suggests that there is much work to do in terms of optimizing our partnership with AI in the clinical setting.”