Chomanski2022LimitsAlgogracy

Bartek Chomanski, "Legitimacy and automated decisions: the moral limits of algocracy"

Bibliographic info

⇒ Chomanski, B. (2022). Legitimacy and automated decisions: the moral limits of algocracy. Ethics and Information Technology, 24(3), 34.

Commentary

⇒ What makes this article particularly interesting is its perspective on algocracy. Following Danaher's influential paper 'The Threat of Algocracy' Danaher2016ThreatAlgocracy-A, Chomanski builds on the idea that an algocracy has the potential to undermine democratic legitimacy. On the other hand, he also discusses reasons why governments seem to have proper motives for relying on algorithmic decision-making. Where he adds to the existing literature is in his attempt to resolve the apparent tension between these two observations. However, I want to emphasize that this is merely an attempt. His conclusions are not significantly different from Danaher's in the original paper, as Chomanski concludes that we should look towards optimizing social outcomes when thinking about algocracy. His main reason is that citizens are largely (rationally) ignorant of the mechanisms behind government decisions, and hence the threat of an algocracy does not justify writing it off completely. Even though I can follow his line of reasoning, I find that it lacks originality and is, albeit sensible, barely different from Danaher's original point of view.

Excerpts & Key Quotes

Opacity undermining legitimacy

Consequently, opacity due to illiteracy is in principle possible to overcome by learning the relevant skills ("writing (and reading) code"). Opacity due to the black box nature of some algorithms, in contrast, may make these algorithms forever inscrutable.

Comment:

The two definitions of opacity used throughout this paper illustrate why opacity is such a central issue when discussing an algocracy. Currently, writing and reading code is considered a specialist skill, and hence largely inaccessible to the general public. As the quote suggests, it would in theory be possible to overcome this, but it is hard to imagine how it could be widely achieved. Furthermore, a possibly larger contributor to the opacity of algocratic decision-making is the inherent black-box nature of many algorithmic models. When the workings of machine learning algorithms escape human understanding and interpretation, many decisions will be extremely hard to justify. Even if an algorithmic model outperforms human judgment, the lack of feasible explanations may form a central barrier to an algocracy, which makes this passage central to the entire paper.

Conditional democratic participation

Having to bear substantial costs as a condition on democratic participation is prima facie unjustified.

Comment:

This passage in itself is not ground-breaking. However, I wanted to highlight it because it captures one of the main issues surrounding an algocracy: namely, that democratic participation under an algocracy comes with two serious conditions. One is the presence of an incentive to participate; the other is possession of the resources to act on that incentive once it is present. These are two rather strong conditions, and empirical research shows that it is generally rational to remain ignorant, given the substantial amount of costly study required to make an informed decision.

Inherent randomness in decision-making

studies demonstrate that reliance on the articulated reasons for sentencing decisions can be misleading as these reasons are not reliable indicators of the considerations and factors that influence judicial sentences

Comment:

This is a finding from a previous study that advocates in favour of algorithmic decision-making. It questions whether, in a classical setting, we have access to the real reasons behind judicial decisions. The study indicates that even highly respected human decision-makers are frequently unaware of the mechanisms underlying their own decisions. This quote is particularly interesting as it sheds light on the inherent fluctuations in decision-making, regardless of whether the decision is made by a human or a machine. The difference in transparency of explanations may therefore not be as significant as often imagined.

Algocracy as an aid

under some circumstances, procedural scruples notwithstanding, algocracy could be morally acceptable because of the goods it can deliver

Comment:

This quote adequately summarizes the findings of the paper. Even though an algocracy would have to be implemented with great care, and in the earlier phases algorithms should be seen as an assistive tool rather than as indisputable decision-makers, its possible benefits should not be dismissed solely because of the lack of transparency.

#comment/Anderson Interesting: the radical embrace of consequentialism!

Commentary

This text focuses on the dangers and consequences of relying on algorithms to make decisions. Specifically, it addresses algorithmic decision-making in public decision-making processes. The text brings up two notions that are especially interesting: first, the introduction of the term 'algocracy', and second, the societal power imbalance that occurs in an algocracy.

To start, the term algocracy is defined as a governance system that uses algorithms to make decisions. These algorithm-driven decisions affect society's structure and the limitations imposed on citizens. The author identifies two main dangers that arise in an algocracy: the hiddenness concern and the opacity concern. The hiddenness concern addresses the process of data collection required to feed algorithms enough data to extract valuable conclusions. The concern is that this process happens in an obscure manner and that citizens' information is being gathered without their explicit consent.

The second notion worth mentioning, brought up by the author, is the opacity concern. This concern revolves around humans' inability to understand the decisions made by algorithms. What is most interesting about this argument is that the author splits the notion of opacity into two categories. The first is the most familiar sense of opacity: algorithms that perform well but are in principle uninterpretable by humans.

However, the most interesting interpretation of opacity is the one brought up next. There are algorithms that are in principle interpretable by humans but, due to their complexity and the background knowledge required to understand them, must still be considered opaque for the majority of individuals. The author argues that this turns the threat of algocracy into a threat of epistocracy. The conclusion remains the same whether labelled algocracy or epistocracy: algorithm-driven decision-making obscures the process of decision-making. This infringes on the political legitimacy of a society (as defined by Estlund), as it becomes harder or even impossible for most individuals to understand the procedures that affect them.
These risks together form what the author refers to as the 'threat of algocracy'.

The text also aims to provide solutions to the dangers that come along with an algocracy. However, one of the suggested solutions seems unreasonable and lacks convincing arguments. The author suggests human enhancement as a possible antidote to the opacity caused by complex decision-making algorithms and to the epistemic imbalance these algorithms create. However, current science is nowhere near enhancement of the human brain on that scale. Bringing the notion of human enhancement into the discussion seems unreasonable, as there is no evidence that such enhancements could ever be physically attainable. Furthermore, from a more abstract perspective, introducing an unrealistic solution to a problem does not solve the problem. In this specific scenario, the only conclusion that can be drawn is that if such human enhancements ever became available, they would level the playing field, and understanding of decisions made by algorithms would be equal among humans. Again, such enhancements are very far from our current reality and do not solve the problems that an algocracy causes today.