Have you seen that video of a driverless car that killed a cyclist, with the car “passenger” unable to do anything about it? Personally, I find that video perfect proof of something I’ve always thought: the huge, BASIC problem with “driverless cars” that rely on a human driver to step in when needed. The video shows that the driver was playing with a smartphone instead of watching the street. Of course!
Uber believes that Self-Driving Trucks will result in MORE jobs for truck drivers, not fewer. Why, and what does this REALLY mean?
Two recent articles from the top of the car industry confirm my assumptions and positions on driverless cars, and on the right way to name and manage them.
According to many people, a huge, if not THE main long-term problem with self-driving cars is how to write software that concretely “helps those cars make split-second decisions that raise real ethical questions”. The most common example is the self-driving car variant of the Trolley Problem: “your self-driving car realizes that it can either divert itself in a way that will kill you and save, say, a busload of children; or it can plow on and save you, but the kids all die. What should it be programmed to do?”
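To make concrete what “programming” such a choice would literally mean, here is a deliberately naive sketch in Python. Every class, function name and number in it is invented for illustration only; no real self-driving stack reduces its control loop to a choice like this, which is part of my point.

```python
# Toy illustration only: a naive "trolley problem" decision rule.
# All names and figures are hypothetical, made up for this sketch.

from dataclasses import dataclass


@dataclass
class Outcome:
    description: str
    occupants_at_risk: int   # people inside the car
    bystanders_at_risk: int  # people outside the car


def naive_ethical_choice(options: list[Outcome]) -> Outcome:
    """Pick the outcome that puts the fewest people at risk overall.

    Even this one-line "ethic" hard-codes an answer to whether occupants
    and bystanders count the same: exactly the value judgment the Trolley
    Problem framing asks software to settle in advance.
    """
    return min(options, key=lambda o: o.occupants_at_risk + o.bystanders_at_risk)


if __name__ == "__main__":
    swerve = Outcome("divert into the barrier", occupants_at_risk=1, bystanders_at_risk=0)
    stay = Outcome("plow on toward the bus", occupants_at_risk=0, bystanders_at_risk=20)
    print(naive_ethical_choice([swerve, stay]).description)
```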
In my opinion, this way of looking at the problem is deeply misleading. First of all…