A dangerous way to fix radicalization by video? Suggestions of opposite videos


The internet is radicalizing people (mostly men). An expert suggested a dangerous solution.

Right after the New Zealand mosques massacre, CNN pointed out that:

“…social media, often in combination with other factors, has proven itself an efficient radicalizer… in part because of its algorithms, used to convince people to stay just a little longer, watch one more video, click one more thing, generate a little more advertising revenue.”

Indeed, YouTube (that is, Google, but the same applies to Facebook and others) has been explicitly accused of being the Great Radicalizer that is making tons of money by having built a Radicalization Machine for the Far-Right.

The fix proposed in that CNN article, however, seems to me, in the long run, a cure far more dangerous than the disease.

[Image: radicalization word cloud — /img/radicalization-word-cloud.jpg]

“Can you make sure that information [in some video] is contextualized with videos before and after it on the feed?” The “comprehensive solution” to this problem suggested in the article is a change to the algorithms, so that:

“they could point people to differing views or even in some cases to support such as counseling.”

because “Algorithms can either foster groupthink and reinforcement or they can drive discussion.”

The potentially big problems I see with this approach are simple to describe: who decides, and how, which viewers deserve this “pointing”? And pointing to what, exactly? In practice:

  • should geography or space station videos be balanced with Flat Earth ones?
  • why should people only watch documentaries about evolution? Wouldn’t it be proper to suggest equal amounts of creationist videos?
  • ditto for climate change and global warming
  • should every video about any religion (or atheism) be balanced with videos about other religions, or atheism?
  • topics like sexual education, gay rights and similar are left as an exercise for the reader
  • finally: if a top regulator recommended that documentaries about Barack Obama should prompt suggestions to also watch “birther” conspiracy videos, because he happens to believe them, would that be OK?

The “religion” scenario has already been exploited in that way. Not by governments or corporations, but by individual spouses in the USA.

The last question, of course, is a purely theoretical exercise, but it summarizes the core problem well:

Giving context, offering multiple points of view, stimulating open discussions, suggesting counseling and so on… are all good practices (within limits) that simply cannot scale. They work (again, within limits) in face-to-face interactions, from schools to city hall meetings. But applying them automatically, with rigid rules written and deployed without real accountability by a few handfuls of people, to literally billions of diverse situations?

Please, let’s all think hard about what that means before jumping on it.
