Danaher2016ThreatAlgocracy-A

There are multiple notes about this text. See also: Danaher2016ThreatAlgocracy-B and Danaher2016ThreatAlgocracy-C

Literature note on John Danaher, “The Threat of Algocracy: Reality, Resistance and Accommodation.”

https://doi.org/10.1007/s13347-015-0211-1

Bibliographic info

John Danaher. “The Threat of Algocracy: Reality, Resistance and Accommodation.” Philosophy & Technology 29, no. 3 (2016): 245–268.

ABSTRACT: "One of the most noticeable trends in recent years has been the increasing reliance of public decision-making processes (bureaucratic, legislative and legal) on algorithms, i.e. computer-programmed step-by-step instructions for taking a given set of inputs and producing an output. The question raised by this article is whether the rise of such algorithmic governance creates problems for the moral or political legitimacy of our public decision-making processes. Ignoring common concerns with data protection and privacy, it is argued that algorithmic governance does pose a significant threat to the legitimacy of such processes. Modelling my argument on Estlund’s threat of epistocracy, I call this the ‘threat of algocracy’. The article clarifies the nature of this threat and addresses two possible solutions (named, respectively, ‘resistance’ and ‘accommodation’). It is argued that neither solution is likely to be successful, at least not without risking many other things we value about social decision-making. The result is a somewhat pessimistic conclusion in which we confront the possibility that we are creating decision-making processes that constrain and limit opportunities for human participation."

Commentary

Danaher reacts to the trend of outsourcing decision-making to algorithms in various domains of society. The central question he explores is whether algorithmic decision-making is politically legitimate and morally unproblematic. First of all, what does Danaher mean by the term ‘algocracy,’ or ‘the rule of the algorithm’? He uses the term neutrally, so as to consider both the positive and the negative qualities of governance systems organized around computer-based algorithms. Such governance systems depend on algorithms for data collection, processing, and decision-making. As (semi-)automatic mediators between institutions and citizens, these algorithmic systems constrain and structure the way humans act and are able to act. To understand human-algorithm interaction, Danaher invokes the famous tricolon of humans in (human input is required), on (human intervention is possible), and out (human involvement is not possible) of the loop of algorithmic processes.

Excerpts & Key Quotes

“Suppose that the creation of new legislation, or the adjudication of a legal trial, or the implementation of a regulatory policy relies heavily on algorithmic assistance. Would the resulting outputs be morally problematic? As public decision-making processes that issue coercive rules and judgments, it is widely agreed that such processes should be morally and politically legitimate (Peter 2014). Could algorithm-based decision-making somehow undermine this legitimacy?”

Comment:

The passage above captures the central question of Danaher’s paper. The notion of legitimacy is aptly applied throughout, but because it is construed narrowly, the paper remains too abstractly focused on the politico-philosophical concept, rather than on the concrete development, deployment, and consequences of algorithms, which are always societal and (alas) corporate.

“After all, the term ‘democracy’ has the same suffix and typically has positive (or at least neutral) connotations. I intend for the bare term ‘algocracy’ to have similarly neutral connotations. As will become clear below […], I think that algocratic systems can have many positive qualities; it is just that they can also have negative qualities, some of which are identified by the argument to the threat of algocracy.”

Comment:

I really endorse this approach to a topic: positing a concept as provisionally neutral, so as to analyze its problems and merits more clearly. Skaug Sætra's paper does the same for the concept of "(AI) technocracy", which is closely connected to Danaher’s algocracy. One central difference between algocracy and the sentiments evoked by democracy is that the Athenian ideal of democracy involves direct participation, whereas in Western societies participation is outsourced to representatives. This supervenient ideal fuzzily dupes us into warm sentiments towards democracy. Algocracy and technocracy, by contrast, have only science-fiction ideals hovering over them, which, given our biased preconceptions, work against them sentimentally speaking.

“A sub-population could have superior epistemic access to legitimizing outcomes for emergent and highly contingent reasons. In other words, the individual members of the sub-population need not have generally superior epistemic abilities. They may have superior access for a limited set of decisions, for a narrowly constrained period of time, or because the sub-population as a whole (and not any individual) emergently satisfies some set of conditions that enables them to have superior epistemic access.”

Comment:

Danaher is right to refine Estlund’s conception of elite privileged epistemic access. He makes the notion more contingent by also involving limited periods of time and attributes that apply only qua membership of a group. But does he socialize the notion enough? For example, the notion of lived experience remains unmentioned here, yet it is of the utmost importance for the development of epistemic expertise. As an example, consider Fiona Woollard’s arguments regarding the epistemic expertise mothers have concerning pregnancy and childbirth. This insight shows that including the right experts in a decision-making process, when the process is, for example, the development of hospital protocols for dealing with women in childbirth, is not a bad thing at all. Epistocracy is a danger when the same set of elites that is in power dominates all other groups in society. When the set of elites is diversely composed, epistocracy adequately channels dispersed knowledge to the right decision-making processes.

“First, it is not clear that resistance of this sort would be practically achievable across the full spectrum of public decision-making processes. Second, and probably more importantly, it is not clear that resistance of this sort is morally preferable: there is a moral case to be made for the use of algocratic systems both on instrumentalist and proceduralist grounds. There is consequently a tradeoff of values involved that may render accommodation more appropriate than resistance.”

Comment:

Thinking about whether the threat of algocracy should be resisted, Danaher expresses doubts. He frames the opposition between the procedural and instrumental value of algorithmic governance and democratic value as a trade-off: the procedural and instrumental benefits gained from algorithmic governance are construed as detractions from democratic values. This is somewhat oversimplified in my opinion. Is it really a trade-off to detract from the robustness of democratic values in order to attain a procedural benefit like efficiency? Maybe from a utilitarian perspective, where total future well-being outweighs the protection of democratic value. But is efficiency more important than democratic rights? I think it is clear that this is not a trade-off but an immoral deference. For instrumental benefits, the case is different: perhaps important societal goals can be attained at the price of an opaque algorithm and a hidden process. There it becomes a genuine trade-off within democracy.