Peters2022ExplainableAI-A

There are multiple notes about this text. See also: Peters2022ExplainableAI-B

Uwe Peters - Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque

Bibliographic info

Peters, U. (2022). Explainable AI lacks regulative reasons: why AI and human decision-making are not equally opaque. AI and Ethics, 1-12.

Commentary

This text critically assesses the argument that human decision-making (HDM) and algorithmic decision-making (ADM) are equally opaque. Peters argues that HDM is sometimes more transparent and trustworthy than ADM because when people explain their decisions, this often prompts them to regulate themselves so as to think and act in ways that confirm their reason reports. AI explanation systems lack this self-regulative feature. However, I would say that although this self-regulation can make HDM more trustworthy because it becomes more consistent and predictable, it does not make the decision-making itself more transparent, especially not to outsiders. One thing I miss in this text is a thorough description of what transparent HDM would look like (and whether fully transparent HDM is even possible); only factors that influence opacity are mentioned. Even with this self-regulative feature, it still seems impossible to predict exactly what humans will decide, do, or say, because humans and their surroundings are very complex.

One other thought I had: since self-regulation takes place after the decision has been made and affects the transparency of future decisions rather than of the current one, can HDM be more transparent than ADM for the current decision? I think not, because this regulation is not an explanation of the current decision. An explanation can make a decision transparent, and explanations of decisions always come afterwards, but self-regulation is about consistency in future behaviour. Overall, this paper is thorough and well-written.

Excerpts & Key Quotes

Mindshaping

" ... in ascribing a mental state to oneself, one often does not just describe what one detects in one’s own mind (e.g., through introspection or interpretation), but one commits oneself to thinking and acting in ways that confirm the ascriptions and make oneself more predictable by other people"

Comment:

I find this a very strong quote about the mindshaping that people can do and that machines cannot. Mindshaping can provide forward-looking insight into one's own mental states, and that is something AI systems do not have yet. I can imagine a world where AI has a built-in feature that does this, and I wonder whether Peters would then still claim that HDM can be more transparent than ADM. Additionally, I agree with Peters that self-regulation can make HDM more consistent than it would be without this feature, but I do not think it can make HDM more transparent than ADM, because there are still many other factors that influence HDM. Peters also addresses this concern of mine in the next quote.

Intuition about accuracy

"one might have the intuition that AI rationalizing explanations are more predictively accurate because black-box systems cannot deceive and are not as complex as human brains. But it seems equally intuitively plausible to hold that exactly because human brains are more complex and human cognition is more sophisticated, human explanations of HDM are much more accurate"

Comment:

When I read this passage, I found myself having the first intuition as well. Because humans are influenced by infinitely many variables, I believe they are highly complex and unpredictable, and that AI models are easier to understand. However, Peters notes that no empirical study has pitted the two kinds of explanations against each other to compare their predictive accuracy. So who knows, maybe human explanations are in fact more reliable and predictively accurate...

More transparent HDM?

" ... their commitment has causal force that can merge with (or contradict) that of the original, causally determinative decision factor(s) and ensures that, moving forward, the individuals’ thinking and acting is consistent with the reason self-report [13]. This can turn the self-ascribed reasons into determinative, accurately reported decision factors, and so, moving forward, make the reason-reporter’s HDM more transparent to other people."

Comment:

Peters does not address the fact that although it becomes easier for other people to predict what this person will decide later, the decision-making itself has not become more transparent: the exact reasons why we decide something may still be influenced by the factors Peters mentioned earlier, such as unarticulated hunches, mistakes in processing logic, and intuition (and, I would add, mood, environment, and hormones). Here I assume that transparent HDM means HDM that is not influenced by (unknown) external variables that make it less predictable. In conclusion, I believe this self-regulative component of HDM can make decisions more predictable, but I do not agree with Peters that this amounts to a higher degree of transparency than ADM.