Danaher2016ThreatAlgocracy-A
There are multiple notes about this text. See also: Danaher2016ThreatAlgocracy-B and Danaher2016ThreatAlgocracy-C
Literature note on John Danaher, “The Threat of Algocracy: Reality, Resistance and Accommodation.”
https://doi.org/10.1007/s13347-015-0211-1
Bibliographic info
John Danaher. “The Threat of Algocracy: Reality, Resistance and Accommodation.” Philosophy & Technology 29 (2016): 245–268.
ABSTRACT: "One of the most noticeable trends in recent years has been the increasing reliance of public decision-making processes (bureaucratic, legislative and legal) on algorithms, i.e. computer-programmed step-by-step instructions for taking a given set of inputs and producing an output. The question raised by this article is whether the rise of such algorithmic governance creates problems for the moral or political legitimacy of our public decision-making processes. Ignoring common concerns with data protection and privacy, it is argued that algorithmic governance does pose a significant threat to the legitimacy of such processes. Modelling my argument on Estlund’s threat of epistocracy, I call this the ‘threat of algocracy’. The article clarifies the nature of this threat and addresses two possible solutions (named, respectively, ‘resistance’ and ‘accommodation’). It is argued that neither solution is likely to be successful, at least not without risking many other things we value about social decision-making. The result is a somewhat pessimistic conclusion in which we confront the possibility that we are creating decision-making processes that constrain and limit opportunities for human participation."
Commentary
Danaher reacts to the trend of outsourcing decision-making to algorithms in various domains of society. The central questions he explores are whether algorithmic decision-making is politically legitimate and morally harmless. First of all, what does Danaher mean by the term ‘algocracy,’ or ‘the rule of the algorithm’? Danaher uses the term neutrally, so as to consider both the positive and the negative qualities of governance systems organized around computer-based algorithms. Such governance systems depend on algorithms for data collection, data processing and decision-making. As (semi-)automatic mediators between institutions and citizens, algorithms constrain and structure the way humans act and are able to act. To understand human-algorithm interaction, Danaher invokes the famous tricolon of humans in (human intervention is required), on (human intervention is possible) and out (human intervention is not possible) of the loop of algorithmic processes.
- Danaher recognizes two types of ethical-political threats that algocracy poses to human society, both turning on the interpretability of the algorithm under consideration: (i) hiddenness (secrecy regarding data collection and infringement of consent to data collection) and (ii) opacity (algorithmic systems work in ways that are incomprehensible to the human mind). Analyzing the concerns surrounding hiddenness and opacity, Danaher takes a mixed approach to the way coercive decision-making processes can be legitimized: based both on the construction of the decision-making procedure (proceduralism: the morality of the process’s components is important) and on the virtue of its consequences (instrumentalism: ends justifying means). Danaher’s arguments for the mixed approach are agreeable, for he rightly points out that instrumentalism can warrant immoral components in a procedure, while the problem with proceduralism is that an ethically constructed procedure does not necessarily lead to the desired outcome. Building on Estlund’s analysis of the problems with epistocracy (the rule of elites), of which algocracy could be called a daughter, with technocracy as the other parent, Danaher develops the problem of opacity as follows: on the mixed approach, decision-making processes should be legitimized both in terms of efficiency and in terms of the quality of their outcomes, and if there are elites in society with premium epistemic access to how those processes should be construed, the processes will be more legitimate if constructed and authorized by those elites.
- Danaher is, however, rightly critical of the notion of ‘elite,’ because he emphasizes the contingency of some individuals’ premium epistemic access. The threat of algocracy then boils down to favoring algorithms in decision-making for epistemic reasons, which undermines legitimacy if the algorithm is opaque. The algorithmic ‘invisible barbed wire,’ as Danaher puts it, is this undermining of legitimacy by things that seemingly increase our health, autonomy and well-being. In discussing the validity of this argument, Danaher presents us with two doubts: first, how much participation is required to foster legitimacy, i.e. what is the minimum level of comprehensibility an algorithm must have in order not to undermine legitimacy? Second, whether the threat of algocracy is reducible to the threat of epistocracy. I strongly agree with the second doubt, because algorithms, while operating automatically, remain digital tools, designed for certain purposes by certain experts: technical elites. But I would add one qualification. It is true that the workings of algorithms are often interconnected and, qua machine learning, opaque in their outcomes even to the experts who design them. That said, this need not be the individual citizen’s problem; rather, institutions could be set up to safeguard political participation by providing general explanations to the public and by overseeing and analysing the performance and behaviour of governmental algorithmic decision-making.
- Concerning the question whether the threat of algocracy should be resisted, Danaher concludes with a kind of utilitarian weighing of the costs (bias reproduction, loss of participation, hiddenness of data processes) against the benefits (efficiency, protection of minorities (Zarsky), well-being, health) of algocratic governance.
- But rather than providing us with an economy of algocracy, Danaher comes up with four alternative options that aim to preserve participation while accommodating the threat of algocracy: (i) human review of algorithms (akin to the suggestion of placing protective institutions between individuals and algorithms); (ii) epistemic enhancement of humans; (iii) adopting sousveillance technologies; (iv) partnerships with algorithms. While I clearly endorse option (i), Danaher doesn’t. He argues the reviewability option is not to be preferred over the other, individual-oriented options, because the technology may be too opaque to review and because it would replace algocracy with epistocracy. But if an algorithm is in principle comprehensible to its designer(s), it is also in principle comprehensible to experts employed by the protective institution. Completely opaque algorithms, in my opinion, should not be used at all, for they cannot meet the demand of accountability for decisions. As for the threat of epistocracy: in certain fields this is inevitable. Reliance on doctors, judges, midwives, economists and politicians is required for society as we know it. Outsourcing responsibility and expertise in a given domain is inevitable in any society where specialization is not outlawed.
- To conclude, I will argue why option (iv) is deficient. Danaher himself provides a balanced discussion of the merits and problems of human enhancement and sousveillance technology. Forming a partnership with algorithms, whether integrated into the body or not, is an outrageous way to deal with the problem of algocracy, because this type of transhumanism drives humans right into the arms of capitalism. Danaher is naïve in thinking the worst problem with such extensive individual reliance on algorithms is that epistocracy looms over us since the average individual cannot design her own algorithm. The situation is even worse: partnering with algorithms on a scale that fully substitutes for political participation (or goes even further, e.g. the Metaverse) turns this participation into capital for tech companies to profit from. Far from being a viable wet dream of techno-utopians, partnering with algorithms that only companies have the expertise to develop will only perpetuate the current techno-capitalist nightmare. The only way to adequately deal with algocracy, in my opinion, is through (supranational) governmental reviews, precautionary principles, the exclusive use of explainable AI, and reflection on whether algorithmizing EVERYTHING is the right conception of societal progress.
Excerpts & Key Quotes
- Page 2:
“Suppose that the creation of new legislation, or the adjudication of a legal trial, or the implementation of a regulatory policy relies heavily on algorithmic assistance. Would the resulting outputs be morally problematic? As public decision-making processes that issue coercive rules and judgments, it is widely agreed that such processes should be morally and politically legitimate (Peter 2014). Could algorithm-based decision-making somehow undermine this legitimacy?”
Comment:
The passage above captures the central question of Danaher’s paper. The notion of legitimacy is aptly applied throughout, but, narrowly construed as it is, it gives the paper a focus that is too abstractly politico-philosophical, rather than trained on the concrete development, deployment and consequences of algorithms, which are always societal and (alas) corporate.
- Page 3:
“After all, the term ‘democracy’ has the same suffix and typically has positive (or at least neutral) connotations. I intend for the bare term ‘algocracy’ to have similarly neutral connotations. As will become clear below […], I think that algocratic systems can have many positive qualities; it is just that they can also have negative qualities, some of which are identified by the argument to the threat of algocracy.”
Comment:
I endorse the approach of positing a concept as provisionally neutral, so as to analyze its problems and merits more clearly. Skaug Sætra's paper does the same for the concept of "(AI) technocracy", which is closely connected to Danaher’s algocracy. One central difference between algocracy and the sentiments evoked by democracy is that the Athenian ideal of democracy involves direct participation, while in Western societies participation is outsourced to representatives. We are fuzzily duped into warm sentiments towards democracy by this supervenient ideal. Algocracy and technocracy have only sci-fi supervenient ideals, which, given our biased preconceptions, work against them sentimentally speaking.
- Page 9:
“A sub-population could have superior epistemic access to legitimizing outcomes for emergent and highly contingent reasons. In other words, the individual members of the sub-population need not have generally superior epistemic abilities. They may have superior access for a limited set of decisions, for a narrowly constrained period of time, or because the sub-population as a whole (and not any individual) emergently satisfies some set of conditions that enables them to have superior epistemic access.”
Comment:
Danaher is right to refine Estlund’s conception of elite privileged epistemic access. He makes the notion more contingent by also involving limited periods of time and attributes that apply only qua membership of a group. But does he socialize the notion enough? For example, the notion of lived experience remains unmentioned here, yet it is of the utmost importance for the development of epistemic expertise. As an example, consider Fiona Woollard’s arguments regarding the epistemic expertise mothers have concerning pregnancy and childbirth. This insight shows that including the right experts in a decision-making process, when the process is, for example, the development of hospital protocols for dealing with women in childbirth, is not a bad thing at all. Epistocracy is a danger when the same set of elites that holds power dominates all other groups in society. When the set of elites is diverse, epistocracy adequately channels dispersed knowledge into the right decision-making processes.
- Page 16:
“First, it is not clear that resistance of this sort would be practically achievable across the full spectrum of public decision-making processes. Second, and probably more importantly, it is not clear that resistance of this sort is morally preferable: there is a moral case to be made for the use of algocratic systems both on instrumentalist and proceduralist grounds. There is consequently a tradeoff of values involved that may render accommodation more appropriate than resistance.”
Comment:
Thinking about whether the threat of algocracy should be resisted, Danaher expresses doubts. He frames the opposition between procedural and instrumental value on the one hand and democratic value on the other as a trade-off. The notion of a trade-off construes the procedural and instrumental benefits gained from algorithmic governance as detractions from democratic values. This is somewhat oversimplified, in my opinion. Is it really a trade-off to detract from the robustness of democratic values to attain a procedural benefit like efficiency? Maybe from a utilitarian perspective, where total future well-being outweighs the protection of democratic value. But is efficiency more important than democratic rights? I think it is clear that this is no trade-off, but rather an immoral deference. For instrumental benefits, the case is different: perhaps important societal goals can be attained at the price of using an opaque algorithm and a hidden process. There it does become a trade-off within democracy.