Responsibility and artificial intelligence

Where and when should it start? And by whom?

![Responsibility and artificial intelligence](/img/responsibility-and-artificial-intelligence.jpg)

One of the things I am doing in these weeks of nationwide lockdown, besides reporting how it really is in Italy, is finally clearing my huge backlog of “stuff to read and catalog, because it may be useful for my work and talks”. This morning (*) I came across a 2019 report on a Council of Europe study about Responsibility and Artificial Intelligence, whose conclusions deserve wider sharing. Here they are.

Existing human rights institutions may struggle…

… to provide effective and meaningful protection, for several reasons, including this one:

“many of the larger adverse societal concerns cannot be readily expressed in the language and discourse of human rights because they concern collective values and interests, including threats to the broader and more amorphous moral, social and political culture and context in which advanced digital technologies operate. At the same time, the speed and scale at which these technologies now operate poses novel threats, risks and challenges which contemporary societies have not hitherto had to contend with.”

Responsibility has at least two sides

In the regulation of Artificial Intelligence (or anything else, really), it is useful to distinguish explicitly between:

  • Historic (or retrospective) responsibility, which looks backwards, seeking to allocate responsibility for conduct and events that occurred in the past. As we shall see, considerable difficulties are claimed to arise in allocating historic responsibility for harms and wrongs caused by AI systems; and
  • Prospective responsibilities, which establish obligations and duties associated with roles and tasks that look to the future, directed towards the production of good outcomes and the prevention of bad outcomes.

Some of the main findings of the study

It is particularly important to have effective and legitimate mechanisms that will operate to prevent and forestall human rights violations, not least because many human rights violations associated with the operation of advanced digital technologies may not result in tangible harm. The need for a preventative approach is especially important given the speed and scale at which these technologies can operate, and the real risk that such violations may erode the collective socio-technical foundations that are essential for freedom, democracy and human rights to exist at all. This [implies that]:

  1. States have an important responsibility to ensure that they attend to the larger socio-technical environment in which human rights are anchored.
  2. Stronger collective complaints mechanisms may be needed to ameliorate the collective action problem that individuals may encounter in responding to rights violations generated by the operation of AI systems.
  3. Our existing conceptions of human rights may need to be reinvigorated in a networked, data-driven age in order to account for the way in which these technologies may reconfigure our socio-technical environment.

At minimum, responsible development and implementation of AI requires both democratic participation in the setting of the relevant standards and the existence of properly resourced, independent authorities equipped with adequate powers systematically to gather information, to investigate non-compliance and to sanction violations.

Said like this, it seems obvious…

But go read the whole report, and you will find that, in practice, its executive summary may be: “most applications of artificial intelligence to social media and other online services… could not exist, if what we recommend here were already working properly”.

(*) This post was drafted in April 2020, but only put online in August, because… my coronavirus reports, of course.