Medical Artificial Intelligence depends on where you live

Especially if you live in the right places.

[Featured image: /img/bias-in-healthcare.jpg]

Researchers from Stanford University have discovered that most algorithms used to make or support medical decisions are “trained on datasets from patients in only three geographic areas, and that the majority of states have no represented patients whatsoever”.

To understand how smart this is, it’s like…

  • testing sun lotions, or cosmetics, only on people of one skin color
  • or testing cholesterol medications only on people who eat nothing but fish
  • or testing any hormone-related therapy only on men

Basically, when doctors trust software “trained” in that way, if they make the right decision most of the time, it is by chance, not by design. Not good, and surely very dangerous for some unlucky patients.

“AI algorithms should mirror the community,” says Amit Kaushal, an attending physician at VA Palo Alto Hospital and Stanford adjunct professor of bioengineering.

“If we’re building AI-based tools for patients across the United States, as a field, we can’t have the data to train these tools all coming from the same handful of places.”

It’s even funnier. In the wrong way, of course

Consider these two quotes from the article:

“The researchers examined five years of peer-reviewed articles that trained a deep-learning algorithm for a diagnostic task intended to assist with patient care.”

“Among U.S. studies where geographic origin could be characterized, they found the majority (71%) used patient data from California, Massachusetts, or New York to train the algorithms. Some 60% solely relied on these three locales. Thirty-four states were not represented at all, while the other 13 states contributed limited data.”

That “diagnostic task for patient care” has been fine-tuned for residents of the US states with the highest per capita income. That is, statistically, these algorithms are likely to work better for (relatively speaking) “rich” Americans than for poorer ones.
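Just to make the numbers above concrete: a quick, hypothetical audit like the sketch below (the CSV file, its “state” column, and the helper name are my own assumptions, not anything from the Stanford study) is all it takes to see how geographically skewed a training cohort is.

```python
# Hypothetical audit of the geographic coverage of a medical AI training set.
# The file name and the "state" column are invented for illustration; nothing
# here comes from the Stanford study itself.
from collections import Counter
import csv

US_STATES = 50  # states only, ignoring DC and territories for simplicity

def state_coverage(csv_path: str, state_column: str = "state") -> dict:
    """Count patients per state and summarize how concentrated the cohort is."""
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row[state_column].strip().upper()] += 1

    total = sum(counts.values())
    top3 = sum(n for _, n in counts.most_common(3))
    return {
        "patients": total,
        "states_represented": len(counts),
        "states_missing": max(US_STATES - len(counts), 0),
        "share_from_top_3_states": round(top3 / total, 2) if total else 0.0,
    }

if __name__ == "__main__":
    # e.g. {'patients': 12000, 'states_represented': 3, 'states_missing': 47,
    #       'share_from_top_3_states': 1.0}
    print(state_coverage("training_cohort.csv"))
```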

India 2.0, reversed

For some reason I can’t quite explain, this finding seems to me the mirror-image, software-powered version of something that really impressed me fifteen years ago: discovering that drug companies were very interested in Indians as volunteer testers for new drugs, not only because they were too poor to refuse, but especially because they were “treatment naïve”, that is, “pure” specimens, never before “exposed to the wide array of biomedical drugs that most Western patients have”.

What now?

What the Stanford research found is nothing as criminal as what happened in India, of course. Nothing criminal at all. Just an (involuntary!) case of mathwashing. But the practical effects may be very similar: “you could be doing actual harm to the people not included in the sample.”

If you test drugs, or medical treatments of whatever sort, only on one or a few groups of relatively homogeneous humans, you had better triple-check what you find, and apply it cautiously.

Larger and more diverse datasets are needed for the development of innovative AI algorithms. Beyond that, “we need to understand the impact of these biases and whether considerable investments should be made to remove them,” said one author of the Stanford study.

Indeed.
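And measuring that impact does not require anything exotic. Here is a minimal sketch, under assumed names and data (the model, the patient records, and the state list are placeholders, not the study’s material), of comparing a model’s accuracy on patients from the training states against everyone else:

```python
# Hypothetical comparison of a model's accuracy on patients from the states it
# was trained on versus patients from all other states. Every name here
# (predict, patients, training_states) is a placeholder for illustration.
from typing import Callable, Sequence

def accuracy_by_group(
    predict: Callable[[dict], int],
    patients: Sequence[dict],    # each record: {"features": ..., "label": ..., "state": ...}
    training_states: set,
) -> dict:
    """Return accuracy separately for training-state patients and the rest."""
    tallies = {"training_states": [0, 0], "other_states": [0, 0]}  # [correct, total]
    for p in patients:
        group = "training_states" if p["state"] in training_states else "other_states"
        tallies[group][0] += int(predict(p["features"]) == p["label"])
        tallies[group][1] += 1
    return {g: (c / t if t else float("nan")) for g, (c, t) in tallies.items()}

# Usage (with whatever model and held-out data you actually have):
#   accuracy_by_group(model.predict, held_out_patients, {"CA", "MA", "NY"})
# A large gap between the two numbers is exactly the kind of bias worth
# investing to understand, and possibly to remove.
```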

Image source: Best intentions won’t solve implicit bias in health care