Artificial Intelligence summoning the demons? WHICH demons?
Look in the wrong place, and you’ll get the wrong answers.
Five years ago, Elon Musk said “With Artificial Intelligence (AI) we are summoning the demon”.
A report by the Brookings Institution’s AI and Emerging Tech Initiative notes that fearful warnings like this about new technology are nothing new. When Charles Babbage created the first “computer” in the mid-19th century:
“The idea that God-given human reason could be replaced by a machine was fearfully received by Victorian England in a manner similar to today’s concern about machines being able to think like humans.”
However, while Musk called for (preventive) “[r]egulatory oversight… just to make sure that we don’t do something foolish”, the report warns that regulation of AI would be meaningful only if it “focused on the tangible effects of the technology.” Starting from there, the report makes assertions and offers suggestions about the impacts of AI and how to deal with them. Some of them make lots of sense; others don’t.
What the report got right
The great majority of inventions “helped people at the same time they created harms”, and the solution is, indeed, in “policy and regulatory responses to counteract those actual or potential harms”.
With AI, however, focusing on tangible effects is “a simple goal with complex components”, for two reasons. One is its speed of advancement, which is much faster than either economic or regulatory activity can keep up with:
"[With earlier technologies], time allowed the development of accountability and regulatory systems to identify and correct the mistakes humans make*".
The other problem is that AI, being software, impacts every conceivable human activity. Besides, what the Internet, and now AI, turn into a corporate asset is individuals’ personal information, and a monopoly on such an asset has more far-reaching effects than, say, a monopoly on oil or steel.
So, what should we regulate, asks the report:
- AI weaponry?
- AI impacts on jobs, or privacy?
- AI’s alteration of market competition?
- the machines that drive AI?
- the activities of the people who create the AI algorithms?
This also means that with AI there cannot be one “purpose-built agency or department”, like the Federal Aviation Administration for air travel.
And what it got wrong, or left incomplete
As practical examples of how to “focus on tangible effects”, the report reminds us that:
1. “We didn’t regulate the railroad tracks and switches; we regulated the effects of their usage, such as rates and worker safety”
2. We didn’t regulate the telegraph (and later telephone) wires and switches, but whether access to them was just and reasonable
3. In a time of technological change, it is the innovators who make the [initial] rules… since it is they who see the future
4. In the information era, artificial intelligence and machine learning have similarly eliminated jobs, but they have also created jobs.
5. [In the educational realm], the individual’s right to a future… means adequate training to be meaningfully employed.
Points 1 and 2: as worded, they completely hide that “we” did, indeed, also regulate railroad tracks and telephone wires. Without non-proprietary, interoperable, hugely important technical standards on everything from track gauges to current levels, railroads and the Internet would have had no tangible effects on people at all: they simply would not work.
Point 3: I’d say that many “innovators” do not “see the future”: they try to impose their own, strictly personal idea of what the future should be like on everybody else. A subtle but crucial distinction.
Point 4 is a scam, and point 5 a dystopia
The report itself states that AI is very complex to handle and spreading very fast. But that is exactly why it cannot create enough accessible jobs, soon enough, for the masses hit by technological unemployment. If meaningful employment is part of the right to a future, then as long as we reason this way we must conclude that many people have no future.
That said, the equation “right to a future = meaningful employment” is scary and depressing enough even without AI that everybody should think carefully before fighting to keep it around. For more on all this, see:
- Digital abundance, scarce genius
- Two other blows to the “automation will give you better jobs” myth
- What China AI-based education tells us about work
- Workers, consumers and depressing myths
Last but not least, going back to Musk’s statement:
Who is the demon here?
AI is not summoning demons. It is empowering demons who already exist. As Zeynep Tufekci said, “don’t fear AI, fear the people who control AI”. Otherwise, you will not know where to look when regulating tangible effects.
Who writes this, why, and how to help
I am Marco Fioretti, tech writer and aspiring polymath doing human-digital research and popularization.
I do it because YOUR civil rights and the quality of YOUR life depend more every year on how software is used AROUND you.
To this end, I have already shared more than a million words on this blog, without any paywall or user tracking, and am sharing the next million through a newsletter, also without any paywall.
The more direct support I get, the more I can keep informing parents, teachers, decision makers, and everybody else who should know more about things like this, for free. You can support me with a paid subscription to my newsletter, a donation via PayPal (mfioretti@nexaima.net) or LiberaPay, or in any of the other ways listed here. THANKS for your support!