Champagne2015BridgingResponsibility
Marc Champagne & Ryan Tonkens, "Bridging the Responsibility Gap in Automated Warfare"
Bibliographic info
Champagne, M., & Tonkens, R. (2015). Bridging the responsibility gap in automated warfare. Philosophy & Technology, 28(1), 125–137. (APA)
Commentary
⇒ In general, what about this text makes it particularly interesting or thought-provoking? What do you see as weaknesses?
This text proposes a solution to the 'responsibility gap' problem as it arises in warfare. Under the international laws of war, one or more individuals must be held responsible for every action undertaken during a war. With the introduction of AI-driven autonomous weaponry (such as drones), however, there appears to be no direct link between the actions undertaken by the AI and the responsibility that comes with those actions. The reason is that AIs cannot meaningfully be held responsible: they are unable to experience pain or repercussions of any kind. This is what the literature refers to as the responsibility gap. What is most interesting about this text is that the authors propose a new manner of 'filling' this gap: humans should be able to formally agree, through a contract, to be held responsible for an AI's actions. In exchange, the individual would receive social prestige (e.g., prestigious positions in the military or in politics). The authors argue that individuals in high decision-making positions should be able to decide whether to deploy autonomous AI weaponry and to accept the positive (or negative) consequences of the actions these systems undertake. This arrangement also leaves such highly placed individuals the option of not deploying the autonomous AIs if they are not confident that the resulting actions would align with the international laws of war.
I do think that this text offers an interesting and novel solution to the responsibility gap problem. However, by letting individuals decide whether or not to deploy these autonomous AIs, the authors do not consider the overall increase in utility (in this case, less harm) that deploying them may produce. For instance, if a military commander does not want to take responsibility for the actions of autonomously operating drones, those drones will not be deployed. Yet, as has been the case for many applications of AI, using AI often yields an increase in efficiency, so in this scenario deploying the drones might ultimately lead to less harm being inflicted on society as a whole. Filling the responsibility gap by assigning the entire responsibility to a single person therefore fails to weigh the overall gains and losses that come with the use of autonomously operating weaponry.
Excerpts & Key Quotes
⇒ For 3-5 key passages, include the following: a descriptive heading, the page number, the verbatim quote, and your brief commentary on this
The impossibility of assigning responsibility to autonomous machines
- Page 126:
"It is natural to want to carry this principle over to automated warfare. However, since the hypothetical robots that interest Sparrow would be midway between us and plain machines, they would slip by this requirement of moral responsibility: They would be independent enough to allow humans plausible denial for their actions—yet too unlike us to be the targets of meaningful blame or praise."
Comment:
One of the notions at the foundation of the responsibility gap is that, for an agent to be held responsible, some form of moral worth must be at stake that can be leveraged if the agent's actions are deemed unacceptable. Today, however, we can create AI-based agents that are able to complete certain tasks autonomously but lack moral worth: they cannot experience concepts such as freedom, pain, or welfare. It therefore becomes impossible to hold these agents responsible for their actions, as no consequence can be imposed that would cause the agent to suffer. Yet, as explained above, the international laws of war require that some agent or individual be responsible for every action undertaken during a war. When autonomous AIs are used in war, they cannot be held responsible for their actions, and a responsibility gap thus arises in which the international laws of war are not abided by.
Exchanging prestige for responsibility
- Page 132:
"Should fully autonomous robotic agents be available to a military, the decision to deploy these robots could fall on the highest person in command, say, the president (or general, etc.). The president would be fully aware that the autonomous robots are capable of making their own decisions. Yet, a nonnegotiable condition for accepting the position of authority would be to accept blame (or praise) for whatever robotic acts transpire in war.
Admittedly, the theater of war is foggy. Yet, with the rigid imputation of blame secured, if the deployment of these machines renders the resulting war “foggy” to a degree which makes the authority figure uncomfortable with accepting her surrogate responsibility for the robots' actions, then it follows that these robots should not be deployed"
Comment:
This passage captures the core idea of the authors' proposal for filling the responsibility gap. By enabling individuals to claim ownership of, and responsibility for, the actions undertaken by autonomously operating AI agents, the gap is filled: every action undertaken in a war can once again be attributed to one or more human individuals. Furthermore, this forces those individuals to consider the possible consequences of deploying autonomous weaponry. If they judge the risks to be too high, they can simply choose not to deploy the systems. The incentive is that the responsible individual will be more cautious, since undesirable behavior by the AI systems will lead to punishment for that human individual.
The absence of causal authorship between actions and consequences
- Page 136:
"Although it may not be possible to trace a tenable physical link between the negative (or positive) actions of a fully autonomous robot and its human commander, our suggestion has been that this transitive chain is inessential to the ethical question and can be bypassed by using a more explicit social contract or “blank check.”"
Comment:
In the conclusion of the text, the authors point out a fundamental assumption behind their proposal to hold human individuals responsible for the actions of autonomously operating AI systems. They suggest that, for someone to be responsible for a given action, there need not be physical causal authorship linking that person to the action and its consequences. In other words, a person can be held responsible for something he or she has not physically done, as long as that person consents to being responsible for it. Here, this takes the form of the human individual formally consenting to be held responsible for the (unforeseeable) physical actions undertaken by an AI agent.