I just discovered some statements from a former vice chairman of General Motors and from a Bay Area think tank that confirm what I recently proposed about driverless cars. Quoting Bob Lutz from this post at QZ.com:
According to many people, a huge, if not THE main long-term problem with self-driving cars is how to write software that concretely “helps those cars make split-second decisions that raise real ethical questions”. The most common example is the self-driving car variant of the Trolley Problem: “your self-driving car realizes that it can either divert itself in a way that will kill you and save, say, a busload of children; or it can plow on and save you, but the kids all die. What should it be programmed to do?”
In my opinion, this way of looking at the problem is deeply misleading. First of all…
It’s always fun, and useful, when two or more news items that somehow contradict each other are published on the same day. Last Friday we had:
Microelectronics and, to an even greater extent, software are two strategic, immensely powerful technologies. Here I try to explain, as simply as possible, why that is, along with the basic characteristics of some modern integrated circuits.