Fear superstupidity, not (artificial) superintelligence

Just a few things already said, but WELL said.

[Image: /img/superstupidity.jpg]

You should fear Super Stupidity, not Super Intelligence, said D. Pereira about a year ago. The paragraphs below explain why, summarizing his own words with some extra comments and links of my own.

Weak, not real Artificial Intelligence (AI)

Today we are still living in the era of weak or narrow AI, very far from general AI, and even further from a potential Super Intelligence. Whatever we call it, this technology is a huge opportunity to put algorithms to work alongside humans on some of our biggest challenges: climate change, poverty, health and well-being, and so on.

The problem is that we are not worrying enough about those challenges, and that may be Super Stupidity, not Super Intelligence (artificial or not). At the very least, the current attitude of most media, lawmakers, and people in general is “a dangerous distraction” from the problems that the rise of computing itself already brings us, from unemployment to digital neocolonialism, to unequal access to quality education or to fair and effective healthcare.

For example, what we call AI or machine/deep learning these days is already forcing us to face big challenges around task (not job) automation: more and more, we will need to see jobs as combinations of tasks, some repeatable and requiring little creativity (and therefore subject to automation), and others that are not (and therefore still for humans to perform).

Superintelligence that does not tackle these problems first is only superstupidity: “Let’s focus on real problems, and on how to use the incredible technology already at our disposal for our own good”. This, however, should happen without “falling into technology overregulation”.

I could not agree more, and these are just a few of the many reasons why I say so: