IVOW fights biases in artificial intelligence
A good cause, but perhaps also another road “paved with good intentions”.
This is a picture of a Native American ceremonial bowl filled with sage, cedar and sweetgrass for a spiritual ceremony:
But if you pass it to a popular image recognition app, blessed with “Artificial Intelligence” (AI), it will tell you the image is “more than 95% ice cream and a dessert”.
As you may already know if you follow me, that failure happens because the image collections used to “train” software like this “have been informed by mostly male developers from primarily western backgrounds”.
To fix this bias and ignorance, Davar Ardalan started a project called “Intelligent Voices of Wisdom” (IVOW). Its immediate purpose is to create, as an open source project, a “central repository of culturally relevant narratives”. In this Indigenous Knowledge Graph (IKG), every component will be indexed and tagged in ways that express its actual meaning, origin and connection to indigenous traditions.
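To get a feel for what such tagging could look like, here is a minimal sketch of a culturally annotated entry, using the bowl from the picture above. The field names and the `describe` helper are purely illustrative assumptions, not IVOW’s actual schema:

```python
# Hypothetical sketch of a tagged entry in a culturally aware dataset.
# All field names are illustrative, not IVOW's actual schema.
ikg_entry = {
    "image_id": "bowl-001",
    "label": "ceremonial bowl",
    "culture": "Native American",
    "contents": ["sage", "cedar", "sweetgrass"],
    "context": "spiritual ceremony",
}

def describe(entry):
    """Build a human-readable caption from the tagged metadata."""
    return (f"{entry['label']} ({entry['culture']}), "
            f"holding {', '.join(entry['contents'])}, "
            f"used in a {entry['context']}")

print(describe(ikg_entry))
```

The point of metadata like this is that a model trained on it learns the object’s cultural meaning, instead of guessing “ice cream” from shapes and colors alone.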
At a higher level, IVOW aims to push the worldwide AI community to fix the bias in its products by collecting accurate, diverse training data that makes AI systems more culturally aware.
Will this work? I really hope so.
Ardalan describes IVOW’s goals as “making AI wise [because it] is only going to further make humanity thrive.”
There is no doubt that today’s machine learning algorithms are heavily biased, and that the sooner that bias is minimized, the better for everybody. So I sincerely wish IVOW good luck. But, even if I can’t quite pin it down, I do have a subtle feeling that its first iterations will have some less-than-optimal side effects, just like all the other AI training programs before it.
But I am also an optimist. Even if those unintended side effects do happen, whatever they may be, at least they will carry very different biases than their predecessors did, and that may produce beneficial, maybe overdue, culture shocks. Eventually.