Do you remember “Who is Keyser Soze?”? That was the recurring line of a great movie, and it now relates to another question: who, on the staff of the Italian newspaper Repubblica (and of many other newspapers too, of course), still makes, or allows, crap like this?
There is a lot of fear going around about robots, or more exactly about software-powered automation, destroying jobs by the millions. But maybe there is still not enough awareness, and concern, about software creating jobs that only a robot would be happy, and fit, to do.
Whenever the results of a vote would have no substantial impact on the people who did not, or could not, vote, eligible voters are welcome to use whatever suits their fancy: e-voting, tossing dice, goat entrails… We won’t notice, so no problem. For all really important votes, instead, that is (at least) all political and administrative elections, here is why online/e-voting is something that you should just avoid, period.
Five years ago I wrote that all the precarious workers of the Fiumicino airport should join forces and say, to the many private bus companies that move tourists in and out of that same airport:
It looks like Denmark is discussing a proposal “to encourage students to grant schools access to their personal laptops” in order to prevent cheating at exams. I like it because it is so DUMB that it may achieve a very positive effect anyway, albeit one totally unforeseen by its authors.
There’s a Medium post about “The Birth and Death of Privacy” that in my opinion is great… except for its very last paragraph.
The “Law of Unintended Consequences” states that “an intervention in a complex system tends to create unanticipated and often undesirable outcomes”. I wonder if some 100% legitimate aspirations and well-intentioned proposals to make the internet less “white and western” may have just such consequences.
A few weeks ago, one of my Facebook contacts, whom I’ll call “Jane”, complained on her wall that Facebook had blocked one of her posts. Myths about Facebook and free speech ensued.
According to many people, a huge, if not THE main long-term problem with self-driving cars is how to write software that concretely “helps those cars make split-second decisions that raise real ethical questions”. The most common example is the self-driving car variant of the Trolley Problem: “your self-driving car realizes that it can either divert itself in a way that will kill you and save, say, a busload of children; or it can plow on and save you, but the kids all die. What should it be programmed to do?”
In my opinion, this way of looking at the problem is greatly misleading. First of all…