The problem with AI? Sometimes it's humans. Human PARENTS, that is

Here is some food for thought on the consequences of handing smartphones, social networks and profiling algorithms to parents who, in 99.9999% of cases, are thoroughly unable to handle them.

The food consists of a few slightly edited tweets, mostly from DHH, and other quotes that I saw online in November 2018, thanks to this Twitter thread.

The topic? The parenting hell, I mean bliss, coming from the possibility of finding the perfect babysitter thanks to artificial intelligence scans for respect and attitude. We already have moms snitching on nannies who dare to check phones while at the playground. What happens when the same attitude is automatically enforced before even meeting the potential babysitter in person? Here are some questions, mostly from the above-mentioned Twitter thread:

Let’s go find the right babysitter for you, Tommy

/img/shining-parents.jpg

Parents don’t get fired if they lose it for a second or decompress in ways the algorithm would disapprove of. Babysitters need it just as much or more. Denying that is literally inhuman.

How long until these parents demand that nannies install malware on their phones to track their movements, usage, or record them?

What exactly is the lesson that a kid is supposed to take away from her parents subjecting the sitter to this indignity? The same one they always learn from parents like this: the people in your employ are less than human and don’t deserve rights like “privacy.”

Oh, and of course: how about giving the babysitters the option of demanding the reverse? Why not subject parents to the same checks, too?

And the more important question is…

In some extreme cases, babysitters declared 100% perfect by highly invasive, malicious algorithmic interrogations of their entire social life may just be, for the same reasons, serial-killer material. In all the other cases, any decent adult knows that kids have an uncanny ability to detect fakes and poses. When that happens, kids take away from those “models” all kinds of lessons and conclusions, except the ones their parents would like, which brings us to the next question:

The people who can get (or cheat their way to…) the highest scores in such interrogations may often be, almost by definition, not the most authentic, mature, well-balanced human role models. Which sane parent would leave their kids with such “champions”?

All in all, this babysitters-by-AI madness really is just another example of the huge contradiction of these times: algorithms that “prod you to be the worst/fake you can be” and then, when it is hiring time, reject you for what they have pushed you to become.