According to many people, a huge, if not THE main, long-term problem with self-driving cars is how to write software that concretely “helps those cars make split-second decisions that raise real ethical questions”. The most common example is the self-driving car variant of the Trolley Problem: “your self-driving car realizes that it can either divert itself in a way that will kill you and save, say, a busload of children; or it can plow on and save you, but the kids all die. What should it be programmed to do?”

In my opinion, this way of looking at the problem is deeply misleading. First of all…