Jacovi2021FormalizingTrust
See also: trust
Alon Jacovi, Ana Marasović, Tim Miller, and Yoav Goldberg - Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI
Bibliographic info
Jacovi, A., Marasović, A., Miller, T., & Goldberg, Y. (2021, March). Formalizing trust in artificial intelligence: Prerequisites, causes and goals of human trust in AI. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 624-635).
Commentary
This text is very thorough in systematically defining concepts related to trust. Jacovi et al. explore the concept of trust in human-AI interactions, discuss the prerequisites and goals of trust, formalize trust transactions through contracts, and introduce the concepts of trustworthiness and of warranted and unwarranted (dis)trust. They also discuss the causes of intrinsic and extrinsic trust in AI models. The sheer number of formal definitions makes the text quite difficult to read.
One weakness is that the paper focuses on the technical aspects of trust in AI models: there is no extensive discussion of the broader societal and ethical implications of trust in AI, which would have been an interesting addition.
Overall, the text presents a good framework for understanding and designing trustworthy AI models.
Excerpts & Key Quotes
Warranted distrust
- Page 633:
"Just as trustworthiness and warranted trust must both manifest for the AI to be useful in practice, so too must warranted distrust follow non-trustworthiness for contracts that are relevant
to the application."
Comment:
This quote acknowledges the importance of warranted distrust as a mechanism for ensuring the usefulness of AI models. However, the text could have explored the challenges and risks of unwarranted trust further, such as the possibility of AI models being exploited or misused because of misplaced trust.
Explanation for trust
- Page 631:
"Explanation seems to be uniquely positioned for Human-AI trust as a method for causing intrinsic trust for general users. Other causes of trust, such as empirical evaluation and authority, are extrinsic."
Comment:
This quote points out one of the strengths of the text: recognizing the role of explanation in fostering trust in AI models. Explaining the decision-making process of an AI model makes the model more transparent and understandable to users, which can increase trust. This recognition highlights the significance of explainability in building trust between humans and AI.
Formal
- Page 625:
"We provide a formal perspective of Human-AI trust that is rooted in, but nevertheless not the same as, interpersonal trust as defined by sociologists. We use this formalization to inform notions of the causes behind Human-AI trust, the connection between trust and XAI, and the evaluation of trust"
Comment:
This quote shows that the authors systematically lay out a notion of trust for AI, grounded in sociological definitions of interpersonal trust, and relate it to XAI. It also highlights a strength of the text: recognizing the complexity of trust and its significance in human-AI interaction. By acknowledging this complexity, the text sets the stage for a nuanced and comprehensive exploration of the topic.