Two views of privacy, and consent made meaningless
For a great description of the Californian surveillance model, look no further.
This week, Mr. Maciej Cegłowski, Founder of Pinboard, spoke before a U.S. Senate Committee on the topic of “Privacy Rights and Data Collection in a Digital Economy”. I had already taken the liberty of summarizing his addendum on Machine Learning here. The main points of his actual speech, which describe the impacts of having “five internet giants (Google, Amazon, Facebook, Apple and Microsoft, or GAFAM for short) operate like sovereign states”, are summarized below. Emphasis (and errors, if any) are mine.
Two Views of Privacy
We use the word “privacy” in two different ways.
[One is] the idea of protecting designated sensitive material from unauthorized access. From this point of view, it is true that the GAFAM companies are the ones working hardest to defend that material against unauthorized access.
But there is a second, more fundamental sense of the word privacy, one which until recently was so common and unremarkable that it would have made no sense to try to describe it.
That is the idea that there exists a sphere of life that should remain outside public scrutiny, in which we can be sure that our words, actions, thoughts and feelings are not being indelibly recorded. This includes not only intimate spaces like the home, but also the many semi-private places where people gather and engage with one another in the common activities of daily life - the workplace, church, club or union hall.
As these interactions move online, our privacy in this deeper sense withers away [and] it is no longer possible to opt out of this ambient surveillance.
From this point of view the giant tech companies are not the guardians of privacy, but its gravediggers.
Further complicating the debate on privacy is the novel nature of the data being collected.
While the laws around protecting data have always focused on intentional communications, much of what computer systems capture about us is incidental, behavioral data.
All these data are collected, and given away, by objects that we “own”, and use every day: computers, cell phones, televisions, cars, security cameras, our children’s toys, home appliances, wifi access points, even at one point trash cans in the street.
Some preliminary conclusions about GDPR
[Among other things] the GDPR rollout has demonstrated to what extent the European ad market depends on Google, which has assumed the role of de facto technical regulatory authority due to its overwhelming market share.
Overall, the GDPR has significantly strengthened Facebook and Google at the expense of smaller players in the surveillance economy.
A final, and extremely interesting, outcome of the GDPR was that it bolstered the argument that surveillance-based advertising offers no advantage to publishers, and may in fact harm them.
The Limits of (informed?) Consent
[In the GDPR] there is a tension between its concept of user consent and the reality of a surveillance economy.
A key assumption of the consent model is that any user can choose to withhold consent from online services. But not all services are created equal - there are some that you really can’t say no to.
If you can’t afford to opt out, what does it mean to consent? [And] what does it mean to give informed consent? At no point will internet users have the information they would need to make a truly informed choice.
Consent (and GDPR) in a world of inference
Finally, machine learning and the power of predictive inference may be making the whole idea of consent irrelevant. At this point, companies have collected so much data about entire populations that they can simply make guesses about us, often with astonishing accuracy: a 2017 study showed that a machine learning algorithm examining photos posted to Instagram was able to detect signs of depression before it was diagnosed in the subjects, and outperformed medical doctors on the task.
(Marco’s comment: had you realized that a lot of what can be inferred about your friends is YOUR OWN doing?)
[This raises] troubling questions about what it means to have any categories of protected data at all. [What good is] automatic ownership of personal data in a world where such data can be independently discovered by an algorithm?
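To make the inference point concrete, here is a deliberately toy sketch (my own illustration, not the method of the 2017 study, and all signal names are hypothetical): even when a “sensitive” attribute is never collected or consented to, it can often be guessed from innocuous behavioral signals that merely correlate with it.

```python
from collections import defaultdict

# Hypothetical training data: (behavioral signals, sensitive attribute).
# The signals themselves look harmless and fall outside "protected" categories.
training = [
    ({"late_night_posts", "muted_filters"}, "at_risk"),
    ({"late_night_posts", "few_faces"},     "at_risk"),
    ({"daylight_posts", "many_faces"},      "not_at_risk"),
    ({"daylight_posts", "bright_filters"},  "not_at_risk"),
]

# Count how often each signal co-occurs with each label.
counts = defaultdict(lambda: defaultdict(int))
label_totals = defaultdict(int)
for signals, label in training:
    label_totals[label] += 1
    for s in signals:
        counts[label][s] += 1

def guess(signals):
    """Score each label by smoothed signal co-occurrence; return the best."""
    best_label, best_score = None, float("-inf")
    for label, total in label_totals.items():
        score = 0.0
        for s in signals:
            # Laplace-smoothed relative frequency of signal s under this label
            score += (counts[label][s] + 1) / (total + 2)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# A user who never disclosed anything sensitive is still classified:
print(guess({"late_night_posts", "few_faces"}))  # → at_risk
```

The mechanics are trivial, which is the point: no consent checkpoint is ever crossed, because the sensitive fact is computed rather than collected.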
[For these reasons] the consent framework exemplified in the GDPR is simply not adequate to safeguard privacy.
The result of all this
For sixty years, we have called the threat of totalitarian surveillance ‘Orwellian’, but the word no longer fits the threat. The better word now may be ‘Californian’. A truly sophisticated system of social control will not compel obedience, but nudge people towards it. Rather than censoring or punishing those who dissent, it will simply make sure their voices are not heard.