Nyholm2022NewControlProblem

Sven Nyholm, "A new control problem? Humanoid robots, artificial intelligence, and the value of control"

Bibliographic info

Nyholm, Sven. "A new control problem? Humanoid robots, artificial intelligence, and the value of control." AI and Ethics (2022): 1-11.

Commentary

What makes this paper particularly interesting is the different perspective the author takes on the control problem. More specifically, he investigates the common assumption that exerting extensive control over AI systems is inherently good and that losing control is inherently bad. Nyholm argues that the more a robot resembles a human, the more problematic it becomes to exert complete control over it. This 'new control problem', as he calls it, addresses the question of under what circumstances exercising complete control over robots is unambiguously ethically good.
He distinguishes between two types of control: self-control and control over other persons. Self-control is often valued as good in itself, which means that if control over AI can be seen as a form of self-control, control over AI is instrumentally good or even good as an end in itself. Similarly, he argues that if AI systems can properly be regarded as moral persons, it would be ethically problematic to want to exert full control over them. The latter statement is where I partly disagree: I feel there is an inherent difference between an AI system approaching the status of a moral agent and a true human being. I therefore do not share his view on how problematic it can become to exert control over an AI system. It is unlikely that AI systems will match the level of agency of human beings anytime soon. Until that point is reached, being able to exert control over an AI system should be seen as a responsibility that comes with the creation of the system.

Excerpts & Key Quotes

⇒ For 3-5 key passages, include the following: a descriptive heading, the page number, the verbatim quote, and your brief commentary on this

Multidimensionality of control

Comment:

This quote relates back to 5 different aspects of control: whether something aligns with one's values or instructions; whether one understands the thing, and if so, to what extent and in what detail; whether one is able to monitor what one is controlling; whether one can intervene, and how often and how easily this can be done; and whether one is able to change or stop what one is controlling. It goes to show that serious criteria need to be met in order for a person to have maximum control over the thing in question. The reason this quote is particularly interesting is that it highlights the complexity of the control problem, given there are so many facets to it.

Excessive control?

Comment:

Even though I see where the author is coming from, I find this an unconvincing illustration of why control can be excessive. It relies on a social construct, the 'control freak', which indeed has a negative connotation socially, as evidence that one can exert too much control and that exerting too much control is inherently a bad thing.

Control over robots symbolizing human status

Comment:

Just before this, Nyholm acknowledges that it is not very realistic to assume that robots will achieve full moral status soon. He argues, however, that a robot merely symbolizing or representing such a moral status is enough for exerting full control over it to become morally problematic. Even though 'morally problematic' leaves some room for interpretation, I struggle to see why this is so problematic. I would find it more problematic if we lost control over humanoid robots, potentially leading to unsafe situations, than if we exerted a little too much control over them.