Who deserves welfare? It's all a matter of data
What is poverty? Who is poor? But above all: who REALLY KNOWS who is poor?
Helping the poor is good. Knowing who the poor are is the prerequisite to helping them. The problem is knowing who the poorest (or sickest) people really are, that is, defining poverty and illness, and watching the watchmen who know.
Almost 20 years ago, in Mexico, only the analysis of big quantities of data proved that the Seguro Popular was successfully protecting families from catastrophic healthcare expenditures, leading an analyst to comment that “People are literally dying every day, simply because data are not being shared”.
In 2006, an automated welfare eligibility system in Indiana, USA, was reportedly designed on the assumption that most welfare recipients do not deserve or need welfare.
Still in the US, until at least 2015, the lack of information on how huge amounts of healthcare data were either missing or misused meant that people “cannot judge and weigh and choose our health care in any rational manner”.
In 2018, the municipality of Gladsaxe, in the Copenhagen area, started experimenting with algorithms to identify children at risk of abuse, that is, children who may need to be forcibly removed from their families, for their own good. The only problem is that even the civil servants involved “would be largely unable to understand and explain why the algorithm identified a family or another”.
In the same year, in the UK, a change in the parameters that define poverty swapped more than two million officially poor citizens with roughly the same number of other citizens who had not been officially poor until that moment. Those left out are mostly pensioners with assets, while those who replaced them are mostly people with disabilities and families with children. Who deserves help first?
Suicide by Artificial Intelligence?
One of the articles I quote above is titled “The Welfare State Is Committing Suicide by Artificial Intelligence”. But a more appropriate title for all four cases may be “How big data is helping states kick poor people off welfare”.
I see two common traits in all the stories above. One is the idea that one can set up algorithms inside computers and then blindly delegate decisions to them. In this age, algorithms and digitization are unavoidable, in healthcare as in any other public service: they save too much time and money to ignore. What is wrong is only the blind acceptance of their suggestions.

The other common issue is lack of awareness, and of enough usable information, in both the beneficiaries and the ultimate financial sponsors of those programs: that is, in both cases, citizens.

Don’t refuse algorithms. But always demand that they be used only as inputs, not as ultimate decision makers, and always demand to know which data they use, and how.
Image source: “Improving Public Welfare with Big Data”, 2014
Who writes this, why, and how to help
I am Marco Fioretti, tech writer and aspiring polymath doing human-digital research and popularization.
I do it because YOUR civil rights and the quality of YOUR life depend every year more on how software is used AROUND you.
To this end, I have already shared more than a million words on this blog, without any paywall or user tracking, and am sharing the next million through a newsletter, also without any paywall.
The more direct support I get, the more I can continue to inform, for free, parents, teachers, decision makers, and everybody else who should know more about issues like this. You can support me with paid subscriptions to my newsletter, donations via PayPal (mfioretti@nexaima.net) or LiberaPay, or in any of the other ways listed here. THANKS for your support!