Who deserves welfare? It's all a matter of data
What is poverty? Who is poor? But above all: who REALLY KNOWS who is poor?
Helping the poor is good. Knowing who the poor are is the prerequisite to helping them. The problem is knowing who the poorest (or sickest) people are (that is, defining poverty and illness), and watching the watchmen who claim to know.
Almost 20 years ago, in Mexico, only the analysis of large quantities of data proved that the Seguro Popular was successfully protecting families from catastrophic healthcare expenditures, leading an analyst to comment that “People are literally dying every day, simply because data are not being shared”.
In 2006, an automated welfare eligibility system in Indiana, USA, was reportedly designed on the assumption that most welfare recipients do not deserve or need welfare.
Still in the US, until at least 2015, huge amounts of healthcare data were either missing or misused, which meant that people “cannot judge and weigh and choose our health care in any rational manner”.
In 2018, the Danish municipality of Gladsaxe, near Copenhagen, started experimenting with algorithms to identify children at risk of abuse, that is, the ones who might need to be forcibly removed from their families, for their own good. The only problem is that even the civil servants involved “would be largely unable to understand and explain why the algorithm identified a family or another”.
In the same year, in the UK, changing the parameters that define poverty swapped more than two million officially poor citizens for about the same number of other citizens who, until that moment, had not been officially poor. Those left out are mostly pensioners with assets, and those who replaced them are mostly people with disabilities and families with children. Who deserves help first?
Suicide by Artificial Intelligence?
One of the articles I quote above is titled “The Welfare State Is Committing Suicide by Artificial Intelligence”. But a more appropriate title for all these cases may be “How big data is helping states kick poor people off welfare”.
I see two common traits in all the stories above. One is the idea that one can set up algorithms inside computers and then blindly delegate decisions to them. In this age, algorithms and digitization are unavoidable, in healthcare as in any other public service: they save too much time and money to be ignored. What is wrong is only the blind acceptance of their suggestions. The other common issue is a lack of awareness, and of enough usable information, in both the beneficiaries and the ultimate financial sponsors of those programs: that is, in both cases, citizens. Don’t refuse algorithms. But always demand that they be used only as inputs, not as ultimate decision makers, and always demand to know which data they use, and how.
Image source: “Improving Public Welfare with Big Data”, 2014