Where do the dangers of artificial intelligence come from?

(Paywall-free popularization like this is what I do for a living. To support me, see the end of this post)

Is Artificial Intelligence something that should, or could, be responsible?

Last year, a sponsored article by Accenture argued that current tech trends “prove we need to embrace Responsible AI sooner—not later”. Part of this process would consist of:

  1. ensuring that AI in everything from lending to household goods is designed to be fair, transparent, and human centric
  2. making sure AI is socially accountable
  3. [evaluating] the specific types of job training and counseling programs that help people adapt.

“Responsible” Artificial Intelligence? Really?

[Image: "Artificial intelligence is no match for natural stupidity" - Joseph Addison] /img/ai-no-match-for-natural-stupidity-joseph-addison.jpg

No, thanks. I argue that the very concept of “Responsible AI” is dumb. On one hand, the most, if not the only, responsible Artificial Intelligence is the one that exists only where it is needed. The first thing that piece should have proved is that we humans need to behave responsibly. And that includes using as little AI as possible, and as little interconnectivity of AI systems as possible. Oh, and about point 3: helping people to “adapt” is OK… as long as it doesn’t mean turning THEM into robots!

In that same week, another article pointed out the real problem: “Don’t worry about AI going bad - the minds behind it are the danger”.

Even that article, however, does not sufficiently explore or clarify some important points. It says that there are three domains in which we can expect problems from Artificial Intelligence: digital security, physical security and political security.

I agree that the “political security” problems depend, before anything else, on who designs and controls the AI involved. This, by the way, is as true in politics as it is in the workplace. If Amazon’s bracelets were not reporting to a central, Amazon-owned computer, they would not be a workers' rights problem.

Zeynep Tufekci summarized this issue very well when she said “don’t worry about AI. Worry about what power can do with AI”.

Stupidity always comes first

At the same time, it seems to me that the article does not sufficiently clarify how different what it calls “political security” is from the other two domains.

Very often, problems of digital and physical security arise from much more trivial reasons, ranging from sheer stupidity to the induced need to buy useless stuff.

Autonomous cars, for example, do not necessarily have to be connected to the Internet. Certainly their guidance system does not need to be. Just remember that, and remote attacks on autonomous cars almost disappear.

Ditto for “smart” refrigerators. It is the initial idea that a fridge or any other appliance should be connected to the Internet that is stupid, and it would remain so even if all the governments and companies of the world were ruled by saints.

Who writes this, why, and how to help

I am Marco Fioretti, tech writer and aspiring polymath doing human-digital research and popularization.
I do it because YOUR civil rights and the quality of YOUR life depend more every year on how software is used AROUND you.

To this end, I have already shared more than a million words on this blog, without any paywall or user tracking, and am sharing the next million through a newsletter, also without any paywall.

The more direct support I get, the more I can continue to inform, for free, parents, teachers, decision makers, and everybody else who should know more about topics like this. You can support me with paid subscriptions to my newsletter, donations via PayPal (mfioretti@nexaima.net) or LiberaPay, or in any of the other ways listed here. THANKS for your support!