Artificial intelligence and medical algorithms are deeply intertwined with our modern health care system. These technologies mimic the thought processes of doctors to make medical decisions and are designed to help providers determine who needs care. But a major problem with artificial intelligence is that it often replicates the biases and blind spots of the humans who create it.
Researchers and physicians have warned that algorithms used to determine who gets kidney transplants, heart surgeries and breast cancer diagnoses display racial bias. Those problems can lead to flawed care that, in some cases, jeopardizes the health of millions of patients.
So how exactly does bias seep into these algorithms? And what can be done to prevent it?
In this episode, we hear from Casey Ross, STAT’s national health tech correspondent, about his reporting on racial bias in AI. Chris Hemphill, the VP for applied AI & growth at Actium Health, tells us about the rise of responsible AI in health care. Ziad Obermeyer, an emergency medicine physician and researcher at the UC Berkeley School of Public Health, walks us through how his team found bias in an algorithm widely used in our health care system, and describes an instance where AI was used to correct a health care injustice.
A transcript of this episode is available here.
This podcast was made possible with support from the Commonwealth Fund.