Two views of privacy, and consent made meaningless
For a great description of the Californian surveillance model, look no further.
This week, Mr. Maciej Cegłowski, founder of Pinboard, spoke before a U.S. Senate Committee on the topic of “Privacy Rights and Data Collection in a Digital Economy”. I had already taken the liberty of summarizing his addendum on Machine Learning here. The main points of his actual speech, which describe the impacts of having “five internet giants (Google, Amazon, Facebook, Apple and Microsoft, or GAFAM for short) operate like sovereign states”, are summarized below. Emphasis (and errors, if any) are mine.
Two Views of Privacy
We use the word “privacy” in two different ways.
[One is] the idea of protecting designated sensitive material from unauthorized access. From this point of view, it is true that the GAFAM companies are the ones working hardest to defend it against unauthorized access.
But there is a second, more fundamental sense of the word privacy, one which until recently was so common and unremarkable that it would have made no sense to try to describe it.
That is the idea that there exists a sphere of life that should remain outside public scrutiny, in which we can be sure that our words, actions, thoughts and feelings are not being indelibly recorded. This includes not only intimate spaces like the home, but also the many semi-private places where people gather and engage with one another in the common activities of daily life - the workplace, church, club or union hall.
As these interactions move online, our privacy in this deeper sense withers away [and] it is no longer possible to opt out of this ambient surveillance.
From this point of view the giant tech companies are not the guardians of privacy, but its gravediggers.
Behavioral Data
Further complicating the debate on privacy is the novel nature of the data being collected.
While the laws around protecting data have always focused on intentional communications, much of what computer systems capture about us is incidental, behavioral data.
All these data are collected, and given away, by objects that we “own” and use every day: computers, cell phones, televisions, cars, security cameras, our children’s toys, home appliances, Wi-Fi access points and, at one point, even trash cans in the street.
Some preliminary conclusions about the GDPR
[Among other things] the GDPR rollout has demonstrated to what extent the European ad market depends on Google, which has assumed the role of de facto technical regulatory authority due to its overwhelming market share.
Overall, the GDPR has significantly strengthened Facebook and Google at the expense of smaller players in the surveillance economy.
A final, and extremely interesting, outcome of the GDPR was that it bolstered the argument that surveillance-based advertising offers no advantage to publishers, and may in fact harm them.
The Limits of (Informed?) Consent
[In the GDPR] there is a tension between its concept of user consent and the reality of a surveillance economy.
A key assumption of the consent model is that any user can choose to withhold consent from online services. But not all services are created equal - there are some that you really can’t say no to.
When landlords, employers and border agents all demand access to your Facebook account before giving you housing, a job or permission to enter a country, you do not have many options.
If you can’t afford to opt out, what does it mean to consent? [And] what does it mean to give informed consent? At no point will internet users have the information they would need to make a truly informed choice.
Consent (and GDPR) in a world of inference
Finally, machine learning and the power of predictive inference may be making the whole idea of consent irrelevant. At this point, companies have collected so much data about entire populations that they can simply make guesses about us, often with astonishing accuracy: a 2017 study showed that a machine learning algorithm examining photos posted to Instagram was able to detect signs of depression before it was diagnosed in the subjects, and outperformed medical doctors on the task.
(Marco’s comment: had you realized that a lot of the inference about your friends is based on data that YOU share, and is therefore YOUR OWN FAULT?)
[This raises] troubling questions about what it means to have any categories of protected data at all. [What good is] automatic ownership of personal data in a world where such data can be independently discovered by an algorithm?
[For these reasons] the consent framework exemplified in the GDPR is simply not adequate to safeguard privacy.
The result of all this
For sixty years, we have called the threat of totalitarian surveillance ‘Orwellian’, but the word no longer fits the threat. The better word now may be ‘Californian’. A truly sophisticated system of social control will not compel obedience, but nudge people towards it. Rather than censoring or punishing those who dissent, it will simply make sure their voices are not heard.
Who writes this, why, and how to help
I am Marco Fioretti, tech writer and aspiring polymath doing human-digital research and popularization.
I do it because YOUR civil rights and the quality of YOUR life depend more every year on how software is used AROUND you.
To this end, I have already shared more than a million words on this blog, without any paywall or user tracking, and am sharing the next million through a newsletter, also without any paywall.
The more direct support I get, the more I can continue to inform, for free, parents, teachers, decision makers and everybody else who should know more about things like this. You can support me with paid subscriptions to my newsletter, donations via PayPal (mfioretti@nexaima.net) or LiberaPay, or in any of the other ways listed here. THANKS for your support!