
How to reduce bias, improve fairness in medical devices

Posted 07 April 2021 | By Jeff Craven 

Lessons learned in the artificial intelligence community about addressing bias could also be applied to medical devices, according to a recent perspective in the journal Science.
 
“Initiatives to promote fairness are rapidly growing in a range of technical disciplines, but this growth is not rapid enough for medical engineering. Although computer science companies terminate lucrative but biased facial recognition systems, biased medical devices continue to be sold as commercial products,” Achuta Kadambi, PhD, of the Department of Electrical and Computer Engineering, University of California, Los Angeles (UCLA), wrote in the perspective.

Bias in medical devices needs to be addressed now, Kadambi said, and doing so starts with examining where biases occur and how they can be mitigated. He outlined three main types of bias in medical devices: physical bias, computational bias, and interpretation bias.
 
A physical bias can take the form of an “undesirable performance variation across demographic groups,” Kadambi said. One such case can be seen in pulse oximeters, which use light to estimate blood oxygen levels and detect occult hypoxemia. More than one study has shown the technology is less accurate for patients with darker skin tones. “Dark skin tones respond differently to these wavelengths of light, particularly visible light. Because hypoxemia relates to mortality, such a biased medical device could lead to disparate mortality outcomes for Black and dark-skinned patients,” Kadambi noted. Physical bias can create other disparities as well; hip replacement implants that do not account for differences in bone structure between patients, for example, can affect patient outcomes.
 
In computational bias, unbalanced datasets that favor certain genders or racial groups can skew computational workflows, Kadambi explained. In the case of diagnostic algorithms trained on chest x-ray datasets, a sample comprising more male than female x-rays could degrade diagnostic performance for female patients. “Somewhat unexpectedly, balancing the gender representation to 50% female boosts diagnostic performance not only for females but also for males,” Kadambi noted. For conditions that are more prevalent in one group, Kadambi proposed transfer learning, which involves “repurpos[ing] design parameters from task A (based on a balanced dataset) to task B (with an unbalanced dataset).”
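The balancing idea can be made concrete with a small simulation. The sketch below is not Kadambi’s code: the data is entirely synthetic, and the group names, feature weights, and scikit-learn logistic regression model are illustrative assumptions standing in for the male/female chest x-ray example. It shows how a model trained on data dominated by one group can perform worse for the under-represented group, and how oversampling that group toward a 50/50 split can narrow the gap.

```python
# Minimal sketch (not Kadambi's code): synthetic data illustrating how a training
# set dominated by one group can hurt diagnostic performance for the
# under-represented group, and how rebalancing toward 50/50 can narrow the gap.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_group(n, w):
    """Synthetic 'patients': 5 features and a disease label driven by weights w."""
    X = rng.normal(size=(n, 5))
    y = (X @ w + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Hypothetical groups: the label depends on partly different features in each group.
w_a = np.array([1.0, 0.5, 0.0, 0.0, 0.0])  # group A (e.g., male chest x-rays)
w_b = np.array([0.2, 0.1, 1.0, 0.5, 0.0])  # group B (e.g., female chest x-rays)

def per_group_auc(Xa, ya, Xb, yb):
    """Train one model on the pooled data, then report AUC separately per group."""
    model = LogisticRegression(max_iter=1000)
    model.fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))
    Xa_t, ya_t = make_group(2000, w_a)   # held-out test data for each group
    Xb_t, yb_t = make_group(2000, w_b)
    return (roc_auc_score(ya_t, model.predict_proba(Xa_t)[:, 1]),
            roc_auc_score(yb_t, model.predict_proba(Xb_t)[:, 1]))

# Unbalanced training set: 2,000 samples from group A, only 200 from group B.
Xa, ya = make_group(2000, w_a)
Xb, yb = make_group(200, w_b)
print("unbalanced AUC (A, B):", per_group_auc(Xa, ya, Xb, yb))

# Rebalance by oversampling group B to match group A's size (a 50/50 split).
idx = rng.choice(len(yb), size=len(ya), replace=True)
print("balanced AUC   (A, B):", per_group_auc(Xa, ya, Xb[idx], yb[idx]))
```

Rebalancing real imaging datasets is of course more involved than oversampling, and may call for new data collection or the transfer learning Kadambi describes, but evaluating performance separately for each group, as above, is the basic pattern for auditing computational bias.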
 
Computational bias can also occur through bias in an algorithm. Kadambi offered an example of software that detects a person’s rate of blinking to identify conditions like Parkinson's disease and Tourette syndrome, but noted that “traditional image-processing systems have particular difficulty in detecting blinks for Asian individuals.” For these and other groups, “poorly designed and biased algorithms could produce or exacerbate health disparities,” he said.

Kadambi described interpretation bias as a misreading of data due to assumptions or “corrections” based on race or gender. Spirometry data, for instance, can be unfairly interpreted by physicians “because certain ethnic groups, such as Black or Asian, are assumed to have lower lung capacity than white people: 15% lower for Black people and about 5% lower for Asian people,” he explained. “This assumption is based on earlier studies that may have incorrectly estimated innate lung capacity.” When these assumptions and “corrections” are applied to spirometry results, physicians can disadvantage these groups by requiring their measured lung capacity to fall to a lower threshold before their treatment is prioritized.
 
“This assumption does not account for socioeconomic distinctions across race: Individuals who live near motorways exhibit reduced lung capacity, and these individuals are often from disadvantaged ethnic groups,” Kadambi said. “The spirometer is just one of several examples of systemic racism in medicine.”
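A small worked example makes the effect of such a “correction” concrete. The percent-of-predicted framing and the litre values below are hypothetical assumptions, not figures from Kadambi’s article apart from the 15% correction he cites; the point is only that the same measurement reads as closer to “normal” once the reference value is lowered.

```python
# Minimal sketch with hypothetical numbers: a race-based "correction" lowers the
# predicted (reference) lung capacity, so the same spirometry measurement reads
# as a higher percentage of predicted, i.e. closer to normal.
def percent_of_predicted(measured_litres, predicted_litres, correction=1.0):
    """Measured lung capacity as a percentage of the (possibly 'corrected') prediction."""
    return 100 * measured_litres / (predicted_litres * correction)

predicted = 5.0   # hypothetical uncorrected predicted capacity, in litres
measured = 3.8    # the same spirometry result for two patients

print(round(percent_of_predicted(measured, predicted), 1))                   # 76.0% of predicted
print(round(percent_of_predicted(measured, predicted, correction=0.85), 1))  # 89.4% with a 15% "correction"
```

Under the corrected reading, the second patient appears less impaired despite an identical measurement, which is how treatment can end up being deprioritized.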
 
The artificial intelligence community has begun examining how its technology affects all of its users. For example, higher misclassification rates in facial recognition for women with darker skin tones, compared with men with lighter skin, have led companies such as Amazon to halt the use of these technologies, Kadambi noted. “There is still a long way to go in addressing bias in AI, but some of the lessons learned can be repurposed to medical devices,” he said.
 
Kadambi proposed several avenues for improvement in the medical device field. First, developers could adopt a “fairness statement” that considers physical, computational, and interpretation bias to help address or overcome technical barriers that disadvantage certain groups. Where the design of some medical devices may inherently disadvantage certain groups, technical innovation could play a role: an approach that uses motion cues, rather than visual changes in skin color, has been developed for the remote plethysmograph, Kadambi said.
 
However, even medical devices designed to achieve fairness may encounter pitfalls, he noted: a clinical provider with a conscious or subconscious bias could still be the one using the device, or the device may be inaccessible to certain groups for socioeconomic reasons. “Achieving fairness in medical devices is a key piece of the puzzle, but a piece nonetheless,” he said. “Diversity and inclusion have gained increasing attention, and the era of fair medical devices is only just beginning.”
 
Kadambi A. Achieving fairness in medical devices. Science. 2021.
 