Algorithms are deciding who will have access to housing opportunities or financial loans. And biased testing software is forcing students of color and students with disabilities to deal with increased anxiety for fear of being excluded from exams or singled out for cheating. But there is another frontier of AI and algorithms that we should be very concerned about: the use of these systems in healthcare and treatment.
The use of AI and algorithmic decision-making systems in medicine is increasing, even though current regulation may be insufficient to detect harmful racial biases in these tools. Details about the development of these tools are largely unknown to doctors and the public, a lack of transparency that threatens to automate and worsen racism in the health care system. Last week, the FDA issued guidance that significantly expands the scope of the tools it plans to regulate. This new guidance emphasizes that more must be done to combat bias and promote equity amid the growing number and increased use of AI and algorithmic tools.
In 2019, a revealing study found that a clinical algorithm many hospitals were using to decide which patients needed care showed a racial bias: Black patients had to be considered much sicker than white patients to be recommended for the same care. This happened because the algorithm had been trained on past healthcare spending data, reflecting a history in which Black patients had less to spend on their healthcare than white patients due to deep-rooted wealth and income disparities. Although the bias of this algorithm was eventually detected and corrected, the incident raises the question of how many other clinical and medical tools may be equally discriminatory.
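To make the proxy-label mechanism concrete, here is a minimal, purely illustrative sketch. All numbers are invented and the model is deliberately oversimplified; this is not the actual hospital algorithm. It shows how a model that learns "need" from historical spending forces a group with lower historical spending to be sicker before crossing the same care threshold.

```python
# Illustrative sketch of proxy-label bias (hypothetical numbers, not
# the real clinical algorithm): if a model scores patient "need" from
# past spending, any group that historically spent less per unit of
# sickness is systematically under-prioritized.

# Hypothetical historical spending rates (dollars of past spending per
# unit of actual sickness), reflecting unequal access and wealth.
WHITE_SPEND_RATE = 1.0
BLACK_SPEND_RATE = 0.7  # same sickness, less historical spending

# Predicted-need score above which the model recommends extra care.
CARE_THRESHOLD = 5.0

def predicted_need(sickness: float, spend_rate: float) -> float:
    # The model "learns" need from spending, so its score tracks
    # historical spending rather than sickness itself.
    return sickness * spend_rate

def sickness_needed_for_care(spend_rate: float) -> float:
    # Minimum sickness level at which the proxy model triggers care.
    return CARE_THRESHOLD / spend_rate

print(sickness_needed_for_care(WHITE_SPEND_RATE))  # 5.0
print(sickness_needed_for_care(BLACK_SPEND_RATE))  # higher: about 7.14
```

The disparity here comes entirely from the choice of training target: nothing in the model references race, yet the group with lower historical spending must be roughly 40 percent sicker to receive the same recommendation.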
Another algorithm, created to determine how many hours of help Arkansans with disabilities would receive each week, came under fire after making extreme cuts to in-home care. Some residents attributed severe disruptions to their lives, and even hospitalizations, to the sudden cuts. A resulting lawsuit found that several errors in the algorithm, including errors in how it characterized the medical needs of people with certain disabilities, were directly responsible for the inappropriate cuts. Despite this outcry, the group that developed the flawed algorithm continues to create tools used in healthcare settings in nearly half of U.S. states and internationally.
A recent study found that an AI model trained on medical imaging, such as X-rays and CT scans, had unexpectedly learned to distinguish patients' self-reported race. It learned to do this even though it was trained only to help doctors diagnose patient images. This technology's ability to determine a patient's race, even when their doctor cannot, could be abused in the future, or could inadvertently direct substandard care to communities of color without detection or intervention.
Some algorithms used in clinical settings are severely underregulated in the United States. The U.S. Department of Health and Human Services (HHS) and its subagency, the Food and Drug Administration (FDA), are tasked with regulating medical devices, ranging from tongue depressors to pacemakers and, now, medical AI systems. While some of these medical devices, including AI tools that assist physicians in treatment and diagnosis, are regulated, other algorithmic decision-making tools used in clinical, administrative, and public health settings, such as those that predict mortality risk, readmission probability, and home care needs, are not required to be reviewed or regulated by the FDA or any other regulatory body.
This lack of oversight can lead to biased algorithms being widely used by hospitals and state public health systems, contributing to greater discrimination against Black and Latino patients, people with disabilities, and other marginalized communities. In some cases, this lack of regulation can lead to wasted money and lost lives. One AI tool, developed to detect sepsis early, is used by more than 170 hospitals and health systems. But a recent study revealed that the tool failed to predict this potentially fatal condition in 67 percent of patients who developed it, and generated false sepsis alerts for thousands of patients who did not have it. Recognizing that this failure was the result of underregulation, the FDA's new guidance points to these tools as examples of products that will now be regulated as medical devices.
The FDA's approach to regulating drugs, which involves publicly shared data examined by review panels for adverse effects and events, contrasts with its approach to regulating medical AI and algorithmic tools. Regulating medical AI technology presents a novel problem and will require considerations different from those that apply to the hardware devices the FDA is used to regulating. Those devices include pulse oximeters, thermometers, and scalp electrodes, each of which has been found to reflect racial or ethnic bias in how it functions across subgroups. The news of these biases further underscores how crucial it is to properly regulate these tools and ensure they do not perpetuate bias against vulnerable racial and ethnic groups.
Although the FDA suggests that device manufacturers test their devices for racial or ethnic bias before putting them on the general market, this step is not mandatory. Perhaps more important than evaluations after a device is developed is transparency during its development. A STAT+ News study found that many AI tools approved or authorized by the FDA do not include information about the diversity of the data on which the AI was trained, and that the number of these tools being authorized is increasing rapidly. Another study found that AI tools "consistently and selectively underdiagnosed underserved patient populations," with the underdiagnosis rate being higher for underserved communities that already have disproportionately limited access to health care. This is unacceptable when these tools can make decisions with life-or-death consequences.
Equitable treatment by the health care system is a civil rights issue. The COVID-19 pandemic has laid bare the many ways in which existing social inequalities produce health care inequities; a complex reality that humans can try to understand, but that is difficult to accurately reflect in an algorithm. The promise of AI in medicine was that it could help debias a deeply biased institution and improve health care outcomes; instead, it threatens to automate this bias.
Policy changes and collaboration among key players, including state and federal regulators, medical, public health, and clinical advocacy groups and organizations, are needed to address these gaps and inefficiencies. For starters, as detailed in a new ACLU white paper:
Public disclosure of demographic information should be required.
The FDA should require an assessment of the impact of any differences in device performance by racial or ethnic subgroup as part of the authorization or approval process.
Device labels should reflect the results of this impact assessment.
The FTC should collaborate with HHS and other federal agencies to establish best practices that device manufacturers not regulated by the FDA should follow to reduce the risk of racial or ethnic bias in their tools.
Instead of learning about the racial and ethnic bias embedded in clinical and medical algorithms and devices through explosive publications revealing what amounts to clinical and medical malpractice, HHS and the FDA, as well as other stakeholders, must work to ensure that medical racism becomes a relic of the past rather than a certainty of the future.
Via: Hypertextual
Editor’s note: That is why we also have to supervise what information artificial intelligences are being trained on, and by whom :/