Weber-Guskar2021EmotionalizedAI

Eva Weber‑Guskar, "How to feel about emotionalized artificial intelligence? When robot pets, holograms, and chatbots become affective partners"

Bibliographic info

Weber-Guskar, E. (2021). How to feel about emotionalized artificial intelligence? When robot pets, holograms, and chatbots become affective partners. Ethics and Information Technology, 23(4), 601-610.

Commentary

In this text, Weber-Guskar examines the ethical implications of introducing AI systems that simulate emotions and with which affective relationships can be formed. Such relationships can be beneficial, for example in tackling loneliness. However, there is some intuition that such an affective relationship is morally wrong, and in this paper the author discusses in what sense it could be problematic. I chose this paper because it does not deal with the ethical implications of technical aspects of AI (such as explainability, bias, etc.), which are already widely discussed, but with the ethical implications of AI's social impact on our relationships and emotions.

Weber-Guskar identifies three main arguments that seem to oppose affective human–AI relationships. The first is the self-deception argument: a relationship with a robot created to form affective bonds rests on deception, because these emotionalized AIs are only able to recognise emotions and act accordingly; a relationship is formed, but the system itself has no emotions whatsoever. The second is the lack-of-mutuality argument, which presumes that a good relationship requires emotional mutuality. The third is the argument that a person in a relationship with an AI morally neglects the other people around them. I will discuss the premises of each argument, and which ones do and do not hold according to Weber-Guskar, using the excerpts and key quotes discussed below.

Generally, I think the author does a good job of laying out the potential arguments and the premises they rest on. By carefully examining these premises, she then weighs the ethical implications of relationships with EAIs. I think the second argument, although important, is mostly a specific version of the first and thus does not add much. Furthermore, instead of mainly discussing why the justifications for these worries are weaker than they seem, I would have found it interesting if the author had weighed both the benefits and the challenges of such relationships.

Excerpts & Key Quotes

The EAI

The EAI is, to put it in the most neutral terms possible, an entity that is able to interact in an individualized way (it “learns” during the interactions, i.e., it saves information about the person and can build on that in later interactions), including in the emotional dimension (i.e., recognizing emotions, eliciting emotions and simulating emotions itself) so that it becomes emotionally important for the person using it.

Comment:

Before discussing the arguments that could oppose affective relationships between humans and robots, it is important to establish what is meant by an emotionalized artificial intelligence (EAI). The author defines it in this quote. Another important point to note here is that these EAI systems do not have to be seen as complete alternatives to, or substitutes for, human-to-human relationships. From this definition it becomes clear that such a system is simply an entity that interacts with humans who have chosen to interact with it, so the initiative lies on the human side. Furthermore, such a system uses past interactions to shape the current relationship.

Self-deception

Comment:

According to the author, the self-deception argument could be presented as follows. Here the author tries to refute P3 by comparing a human–robot relationship to the imaginative engagement we have with art, movies, books, etc. However, I think there are important differences between these two scenarios. Firstly, we cannot directly interact with fictional characters; secondly, we ourselves choose what these fictional characters say to us. Neither holds in the case of robots.

Lack of mutuality

Here I think P5 is mostly a specified version of P1. In the text, the author discusses examples showing that affective relationships do not presuppose emotional mutuality, as is the case with animals, for example. The author makes a distinction between relationships with animals or EAIs (affective relationships) and with mere goods such as a piece of clothing (affective relations). I am not sure I agree with this comparison, since animals are able to feel emotions, for example pain and happiness. So although the animal example might show that emotional mutuality is not presupposed, the comparison between animals and EAI does not hold for the self-deception argument.