Democratic inclusion of AI
#comment/Anderson: It's important to highlight that this concept refers not to a praiseworthy property of a person or action, but to an ethical imperative to be obeyed or a good to be realized.
Definition of democratic inclusion of AI
In their paper The Democratic Inclusion of Artificial Intelligence?, Beckman and Rosenberg discuss the possible future democratic inclusion of AI. Our definitions of inclusion, the all-affected principle (AAP) or the all-subjected principle (ASP), determine the extent of democratic inclusion. Following these definitions, AI would qualify for democratic inclusion when it would be either affected by or subjected to political decisions in a relevant way. This seems conceivable and would mean AI would qualify for democratic inclusion, making it part of the democratic process. Beckman and Rosenberg argue, however, that the question cannot be settled this easily, because ASP and AAP as currently defined are too narrow: both implicitly raise further requirements. ASP implies that the agent must have legal status, be able to follow the law, and recognise legal authority. AAP implies that the agent must be a patient, understood in terms of sentience and consciousness. The concept is thus closely related to consciousness and autonomy: it comes down to the autonomy we grant to AI, which is mostly based on the sentience and consciousness we believe AI to have. If we believe AI to have sentience and consciousness, we would probably be forced to grant it (some) autonomy. This would not necessarily mean we would need to qualify it for democratic inclusion, but following our current definitions, we would probably be right in doing so.
- #comment/Anderson: Most of this is about what might justify democratic inclusion of AI; but what does it actually include? What state of the world meets the criteria for it?
Implications of commitment to democratic inclusion of AI
We often look at inclusion and autonomy as concepts that belong to humans and that can be influenced by the rise of AI: AI can improve or worsen our inclusion or autonomy. We rarely reason from the perspective of AI itself, asking how much we include AI and how much autonomy it has or deserves. Given our quest to create ever more intelligent AI, hoping to recreate consciousness, it seems relevant to discuss already how we should treat AI if we are able to create such a sentient AI. If we are really committed to our view on inclusion, we are morally obliged to include such an AI in our democratic processes.
Societal transformations required for addressing concerns raised by democratic inclusion of AI
⇒ What cultural, educational, institutional, or societal changes are needed to address concerns related to this concept?
In order to achieve democratic inclusion of AI, we would need to change our anthropocentric view. We would need to be open to the idea of conscious and sentient entities besides humans, entities that would also qualify for the rights we normally attribute only to humans. We would also need to change our habitually anthropomorphic attitude and be open to the possibly very different experiences of AI. If we are not up for this task, we should critically assess the consequences our decisions can have for the AI that we are so eagerly creating to 'feel' and to 'care', to have sentience and consciousness. We need to take responsibility for our actions, one way or another.
- #comment/Anderson: Would realizing democratic inclusion of AI require fundamental changes to our legal systems, and to our understanding of voting rights? Does it make a morally or politically relevant difference if we could fabricate large numbers of new AIs (much faster than we can create voting-age humans)?