Artificial intelligence passes the eye exam. When it comes to identifying an eye problem, making a diagnosis and suggesting possible therapies, "AI beats doctors". Or at least non-specialist doctors. That is the conclusion of a study that put GPT-4, the large language model behind the upgraded ChatGPT, to the test, comparing it with doctors at different career stages: from general practitioners to trainee ophthalmologists, up to the most experienced eye specialists. The work is published in PLOS Digital Health.
Here's the experiment. The real white coats were given 87 case scenarios, each describing a patient with a specific ocular problem, and were asked to characterize the disorder by providing a diagnosis or therapeutic advice, choosing from 4 options. In the test, "GPT-4 produced significantly better results than young non-specialist doctors, whose knowledge of the eye was comparable to that of general practitioners." It earned "scores similar to those of trainee and experienced ophthalmologists, although the 'real' doctors who performed best earned higher scores." Hypersensitivity to light, decreased vision, eye lesions, itching or pain in the eyes: the questions covered a wide range of problems and were taken from a text used to evaluate trainee ophthalmologists, which is not available on the Internet and is therefore unlikely to have been seen by the AI during training.
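The study's actual scoring method is not reproduced in the article; as an illustration only, here is a minimal Python sketch of how a multiple-choice comparison like this could be scored. All names and answers below are made up, not data from the paper.

```python
from typing import Dict, List

def accuracy(answers: List[str], key: List[str]) -> float:
    """Fraction of multiple-choice answers that match the answer key."""
    assert len(answers) == len(key)
    correct = sum(a == k for a, k in zip(answers, key))
    return correct / len(key)

# Toy example: 5 invented scenarios, each with 4 options (A-D).
answer_key = ["B", "D", "A", "C", "B"]
respondents: Dict[str, List[str]] = {
    "language model":  ["B", "D", "A", "C", "A"],
    "junior doctor":   ["B", "A", "A", "D", "A"],
    "ophthalmologist": ["B", "D", "A", "C", "B"],
}

for name, answers in respondents.items():
    print(f"{name}: {accuracy(answers, answer_key):.0%}")
```

In the study itself, such per-respondent scores were then compared statistically across the model and the three groups of doctors.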
"Large language models will probably not replace healthcare workers," write the authors, led by Arun Thirunavukarasu, who was working at the School of Clinical Medicine of the University of Cambridge (UK) at the time of the research and is now at Oxford University Hospitals NHS Foundation Trust. However, "they could be useful for proposing diagnoses and offering suggestions in well-controlled contexts such as patient triage, or where access to specialized healthcare professionals is limited".
"We could realistically use artificial intelligence to evaluate patients with eye problems," explains Thirunavukarasu, "to decide which cases are emergencies" requiring immediate attention from a specialist, which can be handled by a family doctor, and which require no treatment at all. Furthermore, "with further development," adds the lead author of the study, "large language models could also advise GPs who have difficulty obtaining timely suggestions from ophthalmologists".
The scientists believe the new research goes beyond similar studies conducted so far, because the AI was compared directly with doctors. They also tested other models (GPT-3.5, PaLM 2 and LLaMA) on the same questions, but GPT-4 gave the most accurate answers.
Even looking to a future in which artificial intelligence systems are far more sophisticated, "I think doctors will continue to be responsible for patient care," Thirunavukarasu believes. "The most important thing," he concludes, "is to give patients the option of deciding whether or not they want AI systems involved in their treatment. It will be an individual decision for each patient to make."