World Health Organization publishes new guidelines on the ethics and governance of AI-based large multimodal models (LMMs)
Not only the World Economic Forum: artificial intelligence and its applications in healthcare also set the agenda of the World Health Organization (WHO), which returns to the subject of large language models such as ChatGPT, Bard and Bert. It does so by publishing new guidelines on the ethics and governance of large multimodal models (LMMs), a rapidly growing type of generative artificial intelligence (AI) technology with applications across healthcare.
The guidance, which builds on WHO's 2021 guidelines on artificial intelligence in healthcare, contains over 40 recommendations for governments, technology companies and healthcare providers to ensure the appropriate use of LMMs to promote and protect the health of populations.
In May 2023, WHO had already urged caution in the use of AI-generated large language model tools, in order to protect and promote human well-being, safety and autonomy, and to preserve public health.
Transparent information and policies needed
LMMs can accept one or more types of data input, such as text, video and images, and generate outputs that are not limited to the type of data entered. LMMs are unique in their imitation of human communication and in their ability to carry out tasks for which they were not explicitly programmed. They have been adopted faster than any consumer application in history, with several platforms, such as ChatGPT, Bard and Bert, entering the public consciousness in 2023. Generative AI technologies have the potential to improve healthcare, but only if those who develop, regulate and use them identify and fully account for the associated risks, reiterates Jeremy Farrar, WHO's chief scientist. We need transparent information and policies to manage the design, development and use of LMMs to achieve better health outcomes and overcome persistent health inequities.
The 5 fields of application in healthcare
The new WHO guidelines outline five broad areas of application of LMMs in health:
1. Diagnosis and clinical care, such as responding to patients' written questions;
2. Patient-guided use, for example to investigate symptoms and treatments;
3. Clerical and administrative tasks, such as documenting and summarizing patient visits within electronic health records;
4. Medical and nursing education, including providing trainees with simulated patient encounters;
5. Scientific research and drug development, including the identification of new compounds.
Potential risks and benefits of LMMs
While LMMs are starting to be used for specific health-related purposes, there are also documented risks of producing false, inaccurate, biased or incomplete statements that could harm people who use that information to make health decisions. Additionally, LMMs may be trained on low-quality or biased data, whether based on race, ethnicity, ancestry, sex, gender identity or age. The guidance also describes wider risks to healthcare systems, such as the accessibility and affordability of the best-performing LMMs.
LMMs can also encourage automation bias on the part of healthcare providers and patients, whereby errors that would otherwise have been caught are overlooked, or difficult choices are inappropriately delegated to an LMM. Like other forms of AI, LMMs are also vulnerable to cybersecurity risks that could endanger patient information, the reliability of these algorithms, and the delivery of healthcare more generally.
Safety depends on the involvement of all stakeholders
To create safe and effective LMMs, WHO highlights the need to involve various stakeholders, including governments, technology companies, healthcare workers, patients and civil society, in all phases of the development and deployment of such technologies, including their oversight and regulation. Governments of all countries must cooperatively lead efforts to effectively regulate the development and use of AI technologies such as LMMs, adds Alain Labrique, WHO Director for Digital Health and Innovation in the Science Division.
The key recommendations
The new WHO guidelines include recommendations for governments, who have primary responsibility for setting standards for the development and dissemination of LMMs, and their integration and use for medical and public health purposes.
For example, governments should:
•Invest in or provide public or non-profit infrastructure, including computing power and public datasets, accessible to developers in the public, private and non-profit sectors, requiring users to adhere to ethical principles and values in exchange for access.
•Use laws, policies and regulations to ensure that LMMs and applications used in healthcare and medicine, regardless of the risk or benefit associated with the AI technology, meet ethical obligations and human rights standards affecting, for example, a person's dignity, autonomy or privacy.
•Assign an existing or new regulatory agency to evaluate and approve LMMs and applications intended for use in healthcare or medicine, as resources permit.
•Introduce mandatory post-release audits and impact assessments, including for data protection and human rights, carried out by independent third parties when an LMM is deployed at scale. The audits and impact assessments should be published and should include findings and impacts disaggregated by user type, for example by age, race or disability.
The guidelines also include key recommendations for the developers of LMMs, who should ensure that:
•LMMs are designed not only by scientists and engineers. Potential users and all direct and indirect stakeholders, including medical service providers, scientific researchers, healthcare professionals and patients, should be involved from the early stages of AI development in a structured, inclusive and transparent design process, and should have the opportunity to raise ethical issues, express concerns, and provide input on the AI application under consideration.
•LMMs are designed to perform well-defined tasks with the accuracy and reliability necessary to improve the capacity of health systems and promote the interests of patients. Developers should also be able to predict and understand potential secondary outcomes.
January 18, 2024 (modified January 18, 2024 | 3:54 pm)
© ALL RIGHTS RESERVED