Nyholm2022NewControlProblem
Sven Nyholm, "A new control problem? Humanoid robots, artificial intelligence, and the value of control"
Bibliographic info
Nyholm, Sven. "A new control problem? Humanoid robots, artificial intelligence, and the value of control." AI and Ethics (2022): 1-11.
Commentary
What makes this paper particularly interesting is the different perspective the author takes on the control problem. More specifically, he interrogates the common assumption that exerting extensive control over AI systems is inherently good and that losing control is inherently bad. Nyholm argues that the more closely robots resemble humans, the more problematic it becomes to exert complete control over them. This 'new control problem', as he calls it, addresses the question of under what circumstances exercising complete control over robots is unambiguously ethically good.
He distinguishes between two types of control: self-control and control over other persons. Self-control is often valued as good in itself, which means that if control over AI can be seen as a form of self-control, control over AI is instrumentally good or even good as an end in itself. Conversely, he argues that if AI systems can properly be regarded as moral persons, it would be ethically problematic to want to exert full control over them. The latter claim is where I partly disagree: there is an inherent difference between an AI system approaching the status of a moral agent and an actual human being. I therefore do not share his view on how problematic it can become to exert control over an AI system. It is unlikely that AI systems will match the level of agency of human beings anytime soon. Until that point is reached, being able to exert control over an AI system should be seen as a responsibility that comes with creating the system.
Excerpts & Key Quotes
⇒ For 3-5 key passages, include the following: a descriptive heading, the page number, the verbatim quote, and your brief commentary on this
Multidimensionality of control
- Page 1233:
When all of these—and any other—aspects or dimensions of control are all in the same hands, so to speak, and the person has a full measure of all of them, that person can be said to have a maximum amount of control over the thing in question, especially if their control is also very robust across a maximal range of different circumstances.
Comment:
This quote relates back to five different aspects of control:
- whether something aligns with one's values or instructions;
- whether one understands a thing, and if so to what extent and in what detail;
- whether one is able to monitor what one is controlling;
- whether one can intervene, and how often and how easily this can be done;
- whether one is able to change or stop what one is controlling.
It goes to show that serious criteria need to be met in order for a person to have maximum control over the thing in question. The reason this quote is particularly interesting is that it highlights the complexity of the control problem, given how many facets it has.
Excessive control?
- Page 1234:
Control—e.g., self-control—can be a good thing and even good in itself, but it also seems possible to be too obsessed with control to a point where one can appropriately be labeled a "control freak."
Comment:
Even though I see where the author is coming from, I find this an unconvincing illustration of why control can be excessive. It assumes that a social construct such as the 'control freak', which indeed has a negative connotation socially, is sufficient to demonstrate that one can exert too much control and that exerting too much control is inherently a bad thing.
Control over robots symbolizing human status
- Page 1237:
The reason for this would not be that it would be immoral towards these robots themselves, but rather that it would symbolize or represent something that is deeply morally problematic: namely, persons trying to control other persons.
Comment:
Just before this, Nyholm acknowledges that it is not very realistic to assume that robots will achieve full moral status soon. He argues, however, that a robot merely symbolizing or representing such a moral status is enough for controlling it to become morally problematic. Even though 'morally problematic' leaves some room for interpretation, I struggle to see why this is the case. I would find it more problematic if we lost control over humanoid robots, potentially leading to unsafe situations, than if we exerted a little too much control over them.