Andreotta2020HardProblem
Adam J. Andreotta, "The hard problem of AI rights"
Bibliographic info
⇒ Andreotta, A. J. (2021). The hard problem of AI rights. AI & Society, 36(1), 19–32. https://doi.org/10.1007/s00146-020-00997-x
Commentary
Adam Andreotta argues that intelligence and empathy are poor criteria for deciding whether AI systems should have rights, and that the capacity for consciousness should instead determine whether AI systems have (direct) rights. He then contrasts the case for attributing consciousness to non-human animals with the case for attributing it to AI systems: non-human animals share an evolutionary history, similar brain functions, and similar reactions and emotions with humans, whereas AI systems do not. Andreotta concludes that solving the Hard Problem of consciousness would let us determine whether AI systems are conscious and ensure they are not suffering. As long as the problem remains unsolved, he takes AI systems that closely resemble humans to be more likely to be conscious than AI designs that differ greatly from humans.
While I found this argumentation very interesting, the paper seems to address two topics: how to determine whether AI systems should have rights, and how to determine whether AI systems are conscious (arguing that the latter question cannot easily be settled). Combining both topics in one paper leaves less space for each argument. In particular, the case for consciousness as the criterion for AI rights is very short: although I agree with it, devoting only two paragraphs to such a central point of the paper seems quite brief, and I would have been interested in further arguments for it.
Excerpts & Key Quotes
Against intelligence as a criterion for AI rights
- Page 23:
If an incredibly sophisticated AI communicated a preference not to be destroyed, or protested that its rights were being infringed upon, and did not actually undergo any experiences, then such utterances would not be representing any internally felt experiences. Their utterances would be analogous to the one a robot vacuum cleaner makes when its batteries are low—namely, ‘Please charge Roomba’. Such a sound does not represent a conscious preference not to be harmed. It is merely an audio file uploaded by a programmer, to inform users about the artefact’s status. If these sounds do not have moral significance, then I do not think that a system which has the capacity to play complex audio files in response to human questioning is automatically deserving of rights.
Comment:
I chose this quote as it presents the argument against choosing intelligence as a criterion for AI rights. I find this argument quite convincing: even if an AI system can imitate emotions and appeal to the user to treat it differently, as long as it does not actually experience those emotions there seems to be no need for direct rights protecting it from harm. Any rights it has should thus be indirect and serve humans rather than the system itself (e.g. a superintelligent AI could have the right not to be destroyed if its destruction would have severe negative consequences for society).
Against empathy as a criterion for AI rights
- Page 24:
The central problem with grounding AI rights in empathy, then, is that empathy can easily push one’s moral decisions in the direction of injustice. It is not only harder for us to empathise with people who are different from us, but it is easy to make mistakes when we attempt to put ourselves in the shoes of another. For example, if a zombie AI robot looked and acted like a human, and it was given a back story, it would be very natural for us to have sympathy for it if it was deprived of rights.
Comment:
This quote represents the argument against empathy as a criterion for AI rights. Again, it seems reasonable to say that, given how easily our empathy is swayed by how similar someone is to us or by whether a machine has been given a backstory, empathy is not a suitable criterion for judging whether an AI system should have rights. However, feeling empathy for an AI system might be unavoidable, as its behavior and communication could resemble those of humans. So even though empathy is not a fitting criterion for determining whether an AI system should have rights, it is important to recognize that it could still influence this decision.
Consciousness as a criterion for AI rights
- Page 24:
The aim should be to improve the experiences of AIs—meaning that unnecessary suffering of AIs should be eliminated. It is for this reason that a capacity for consciousness or experience is necessary.
Comment:
This quote presents the argument for consciousness as a criterion for AI rights. It is specifically intended as a criterion for direct AI rights, i.e. rights that protect the AI system for its own sake. As with animal rights, this is tied to reducing suffering. It therefore seems reasonable to me to grant AI systems direct rights only if they have phenomenal consciousness and are able to experience suffering.
Why it is difficult to determine whether AIs are conscious
- Page 28:
It would be humancentric to suppose that only close functional isomorphs can be conscious. There could be multiple ways in which consciousness could arise in a system. Neurons interacting in the ways that they do in the human brain may be one way, but there could be countless others.
Comment:
This quote gives one of the arguments for why we cannot easily determine whether an AI is conscious. I found it especially interesting, and I think it is a good point: we assume that non-human animals are conscious because of their similarity to humans, but there is no reason to believe that only systems similar to us can be conscious. This makes solving the hard problem of consciousness all the more important, as it would tell us how exactly consciousness arises and thereby give us a better understanding of whether AIs can actually be conscious.