Medical Artificial Intelligence depends on where you live

(Paywall-free popularization like this is what I do for a living. To support me, see the end of this post)

Especially if you live in the right places.

[Image: /img/bias-in-healthcare.jpg]

Researchers from Stanford University have discovered that most algorithms used to make or support medical decisions are “trained on datasets from patients in only three geographic areas, and that the majority of states have no represented patients whatsoever”.

To understand how smart this is, consider that it is like…

  • testing sun lotions, or cosmetics, only on people of one skin color
  • or testing cholesterol medications only on people who eat nothing but fish
  • or testing any hormone-related therapy only on men

Basically, if doctors who trust software “trained” in that way make the right decision most of the time, it is by chance, not by design. Not good, and surely very dangerous for some unlucky patients.

“AI algorithms should mirror the community,” says Amit Kaushal, an attending physician at VA Palo Alto Hospital and Stanford adjunct professor of bioengineering.

“If we’re building AI-based tools for patients across the United States, as a field, we can’t have the data to train these tools all coming from the same handful of places.”

It’s even funnier. In the wrong way, of course

Consider these two quotes from the article:

“The researchers examined five years of peer-reviewed articles that trained a deep-learning algorithm for a diagnostic task intended to assist with patient care.”

“Among U.S. studies where geographic origin could be characterized, they found the majority (71%) used patient data from California, Massachusetts, or New York to train the algorithms. Some 60% solely relied on these three locales. Thirty-four states were not represented at all, while the other 13 states contributed limited data.”

That “diagnostic task for patient care” has been fine-tuned for residents of the US states with the highest per capita incomes. That is, the resulting algorithms are statistically likely to work better for (relatively speaking, of course) “rich” Americans than for poorer ones.
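To make that imbalance concrete, here is a minimal, hypothetical Python sketch of how one might audit the geographic coverage of a medical training dataset. This is not code from the Stanford study: the record format, field names, and numbers are all invented for illustration.

```python
from collections import Counter

ALL_STATES = 50  # the 50 U.S. states; D.C. and territories ignored for simplicity

def geographic_coverage(records):
    """Count patient records per state and summarize how skewed the dataset is."""
    counts = Counter(r["state"] for r in records if r.get("state"))
    total = sum(counts.values())
    top3 = sum(n for _, n in counts.most_common(3))
    return {
        "states_represented": len(counts),
        "states_missing": ALL_STATES - len(counts),
        "top3_share": round(top3 / total, 2) if total else 0.0,
    }

# Toy dataset, heavily skewed toward three states, mirroring the kind of
# imbalance the study describes.
toy = [{"state": s} for s in ["CA"] * 40 + ["MA"] * 20 + ["NY"] * 11 + ["TX"] * 2]
print(geographic_coverage(toy))
# {'states_represented': 4, 'states_missing': 46, 'top3_share': 0.97}
```

On this toy data the audit reports 46 states with no patients at all and three states supplying 97% of the records: a check this simple would flag the very problem the researchers found.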

India 2.0, reversed

For some reason I can’t quite explain, this finding seems to me the mirror-image, software-powered version of something that really struck me fifteen years ago: discovering that drug companies were very interested in Indians as volunteer testers for new drugs, not only because they were too poor to refuse, but especially because they were “treatment naïve”, that is, “pure” specimens never before “exposed to the wide array of biomedical drugs that most Western patients have”.

What now?

What the Stanford research found is nothing as criminal as what happened in India, of course. Nothing criminal at all. Just an (involuntary!) case of mathwashing. But the practical effects may be very similar: “you could be doing actual harm to the people not included in the sample.”

If you test drugs, or medical treatments and procedures of whatever sort, only on one or a few groups of relatively homogeneous humans, you had better triple-check what you find, and apply it cautiously.

Larger and more diverse datasets are needed to develop innovative AI algorithms. Besides, “we need to understand the impact of these biases and whether considerable investments should be made to remove them,” said one author of the Stanford study.

Indeed.

Image source: Best intentions won’t solve implicit bias in health care

Who writes this, why, and how to help

I am Marco Fioretti, tech writer and aspiring polymath doing human-digital research and popularization.
I do it because YOUR civil rights and the quality of YOUR life depend more every year on how software is used AROUND you.

To this end, I have already shared more than a million words on this blog, without any paywall or user tracking, and am sharing the next million through a newsletter, also without any paywall.

The more direct support I get, the more I can continue to inform, for free, parents, teachers, decision makers, and everybody else who should know about stuff like this. You can support me with paid subscriptions to my newsletter, donations via PayPal (mfioretti@nexaima.net) or LiberaPay, or in any of the other ways listed here. THANKS for your support!