is right behind, or above, urban population growth.
A very interesting article from India says, among other things, something that we in the so-called “first world” already knew, but says it well:
- Six months ago, I wrote that “the REAL name of self-driving cars must become something like SOMT: Shared, On-Demand, Micro… TRAIN”. Today, I realized I should better explain part of that concept, because I received the following sensible critique on Twitter: “Well, by definition a train runs in predetermined courses. I find it hard to imagine how it could work in practice to have a transport that is both separated by pedestrians etc and does not run in predetermined courses”.
There are now twice as many people as there were 50 years ago, and they are already concentrating in cities anyway. Therefore, they say, doing more of that, until we all live in Caves of Steel and leave the planet “empty”, could be the best thing we ever do. Maybe, but only if it happens in the right way.
There is a good article on Medium about “Ethical Electronics”, that is, the need for a MUCH more environmentally responsible design of the Internet of Things (IoT). The most relevant parts, which say everything but one thing, are these:
- “Blockchain-powered smart cities are more attainable than people imagine”, says this article. Cool. But so is… global warming, and many other things, both good and bad. Any answer to a question like the title of that article, that is “Can an entire city run on the blockchain?”, has very little value if it doesn’t come together with serious answers to “Would it be good to run an entire city on the blockchain?”
I just discovered some statements from a former vice chairman of General Motors and from a Bay Area think tank that confirm what I recently proposed about driverless cars. Quoting Bob Lutz from this post at QZ.com:
Five years ago I wrote that all the precarious workers of the Fiumicino airport should partner up and say to the many private bus companies that move tourists in and out of that same airport:
According to many people, a huge, if not THE, long-term problem with self-driving cars is how to write software that concretely “helps those cars make split-second decisions that raise real ethical questions”. The most common example is the self-driving car variant of the Trolley Problem: “your self-driving car realizes that it can either divert itself in a way that will kill you and save, say, a busload of children; or it can plow on and save you, but the kids all die. What should it be programmed to do?”
In my opinion, this way of looking at the problem is greatly misleading. First of all…