Glavaničová2022VicariousLiability
Glavaničová & Pascucci 2022 Vicarious liability: a solution to a problem of AI responsibility?
Bibliographic info
Glavaničová, D., & Pascucci, M. (2022). Vicarious liability: a solution to a problem of AI responsibility? Ethics and Information Technology, 24(3), 28.
Commentary
This article explores the issue of responsibility when AI machines cause harm. It discusses different perspectives on the existence and nature of the responsibility gap in AI. Identifying a suitable bearer of responsibility for the actions of AI machines is difficult in general; the article focuses on scenarios involving design defects and argues that in such cases there is currently no appropriate holder of responsibility. The proposed solution is to establish vicarious liability of AI manufacturers, in two variants: one easily integrated into existing legal frameworks but with a narrower scope, and another requiring revisions to legal frameworks but with broader application. A weakness of the text is its narrow focus on the legal aspect of AI responsibility. Although it acknowledges a moral responsibility gap in its concluding sentence, it does not explore this aspect in depth. A more thorough treatment of the moral and ethical considerations surrounding AI responsibility would strengthen the article's argument.
- See also: reliabilism
Excerpts & Key Quotes
Challenges in applying vicarious liability to owners/users in an AI context
- Page 6:
If the proposals are to be understood in terms of the vicarious liability of owners/users, even more problems arise. The modern doctrine of vicarious liability (...) does not require control as a necessary condition, nor does it assume that the contributing party is obeying the commands of the vicariously liable party. The prevailing enterprise risk justification of vicarious liability is based on entirely different, modern ideas: an enterprise both creates risk and comes with benefits, and for this reason the imposition of liability (when the created risk materialises) is justified (Brodie, 2007).
Comment:
The quote raises concerns about applying the concept of vicarious liability to owners/users of AI machines. If the proposals treat owners/users as the vicariously liable party, additional challenges arise: the modern doctrine of vicarious liability requires neither control as a necessary condition nor obedience to the liable party's commands. The prevailing justification is instead that an enterprise creates both risks and benefits, so liability is justified when the created risk materializes.
- #comment/Anderson What does it mean for a risk to "materialize"?
The enterprise-risk justification questions the idea that control or obedience is the decisive factor in determining liability. It instead grounds liability in the risks and benefits of an enterprise, acknowledging that enterprises, including those deploying AI, carry inherent risks that should be weighed when assigning liability.
Implementing this approach, however, raises practical and conceptual challenges. Determining the extent of liability and assessing an enterprise's risks and benefits can be complex and context-dependent, and a fair balance must be struck between accountability and incentives for innovation and technological advancement.
Schema for ascriptions of vicarious liability
- Page 8:
"We propose below a first schema for ascriptions of vicarious liability, which relies on the idea that there is a set of deontically relevant actions in the scope of an employment relation. The schema leaves open how the scope of an employment relation ought to be interpreted; however, we assume that an action is deontically relevant in this sense only if it is connected to a legally protected interest."
Comment:
This quote highlights the importance of a framework for determining the boundaries of an employment relation and their implications for assigning responsibility. By restricting attention to deontically relevant actions connected to legally protected interests, the schema establishes a direct link between liability and impacts on protected rights or interests.
The challenge, however, lies in defining precise criteria for deontically relevant actions and legally protected interests; the interpretation and application of these concepts may vary across contexts and legal systems, and the framework must strike a fair balance between accountability and the protection of individual rights.
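A minimal sketch of how such a schema might be rendered in a deontic-logic style notation; the predicate names and symbols below are illustrative assumptions, not the authors' own formalization:
$$
\mathrm{VL}(e, w, \alpha) \;\leftarrow\; \mathrm{Emp}(w, e) \,\wedge\, \alpha \in \mathcal{D}_{\langle w, e \rangle} \,\wedge\, \big(\mathrm{does}(w, \alpha) \vee \mathrm{omits}(w, \alpha)\big)
$$
Read: employer $e$ is vicariously liable for action $\alpha$ of worker $w$ if $w$ is employed by $e$, $\alpha$ belongs to the set $\mathcal{D}_{\langle w, e \rangle}$ of deontically relevant actions in the scope of their employment relation, and $w$ performed or omitted $\alpha$. Per the quote, $\alpha \in \mathcal{D}_{\langle w, e \rangle}$ only if $\alpha$ is connected to a legally protected interest.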
Proposing a sensible approach
- Page 9:
"We believe it is nevertheless a sensible avenue to pursue if the scope of employment is traded for the purpose of manufacturing (i.e., what the machine is designed for) and if we grant the AI machine a form of agency sufficient for featuring as the party who performed or omitted a deontically relevant action."
Comment:
The quote suggests that a sensible way to address the responsibility gap in AI is to replace the scope of employment with the purpose of manufacturing. Shifting the focus to what the AI machine is specifically designed for allows a clearer attribution of responsibility, and granting the machine a form of agency lets it feature as the party who performed or omitted a deontically relevant action.
This matches liability to the machine's intended purpose and design: the manufacturer answers for harms arising within the machine's designed function, much as an employer answers for an employee's acts within the scope of employment, while the machine's form of agency acknowledges its capacity to make decisions that affect outcomes.
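Continuing the illustrative notation from the sketch above (again an assumption, not the authors' formalization), the manufacturer variant might read:
$$
\mathrm{VL}(m, x, \alpha) \;\leftarrow\; \mathrm{Manuf}(m, x) \,\wedge\, \alpha \in \mathcal{P}_{x} \,\wedge\, \big(\mathrm{does}(x, \alpha) \vee \mathrm{omits}(x, \alpha)\big)
$$
Here $m$ is the manufacturer, $x$ the AI machine, and $\mathcal{P}_{x}$ the set of deontically relevant actions within the purpose $x$ was designed for; $\mathrm{does}$ and $\mathrm{omits}$ presuppose the minimal form of agency the authors grant the machine.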