There are “autonomous vehicles” and then there are “autonomous vehicles”. Some kinds of autonomous vehicles are much dumber than others, and proof of this keeps coming.
Six months ago, I wrote that “the REAL name of self-driving cars must become something like SOMT: Shared, On-Demand, Micro… TRAIN”. Today, I realized I should better explain part of that concept, because I received on Twitter the following, sensible critique: “Well, by definition a train runs on predetermined routes. I find it hard to imagine how it could work in practice to have a transport that is both separated from pedestrians etc. and does not run on predetermined routes”.
There are now twice as many people as 50 years ago, and they are already concentrating in cities anyway. Therefore, they say, doing more of that, until we all live in Caves of Steel and leave the planet “empty”, could be the best thing we may ever do. Maybe, but only if it happens in the right way.
Have you seen that video of a driverless car that killed a cyclist, with the car “passenger” unable to do anything about it? Personally, I find that video perfect proof of something I’ve always thought: the huge, BASIC problem with “driverless cars” that rely on a human driver to step in when needed. The video shows that the driver was playing with a smartphone instead of watching the street. Of course!
Uber believes that Self-Driving Trucks will result in MORE jobs for truck drivers, not fewer. Why, and what does this REALLY mean?
Two recent articles from the top of the car industry confirm my assumptions and positions on driverless cars, and on the right way to name and manage them.
I just discovered some statements from a former vice chairman of General Motors and a Bay Area think tank that confirm what I recently proposed about driverless cars. Quoting Bob Lutz from this post at QZ.com:
According to many people, a huge, if not THE main long-term problem with self-driving cars is how to write software that concretely “helps those cars make split-second decisions that raise real ethical questions”. The most common example is the self-driving car variant of the Trolley Problem: “your self-driving car realizes that it can either divert itself in a way that will kill you and save, say, a busload of children; or it can plow on and save you, but the kids all die. What should it be programmed to do?”
In my opinion, this way of looking at the problem is greatly misleading. First of all…
Somebody says it is “Time to buckle up: Why the possibilities of connected cars are endless”, because: