García-MarzaCalvo2024VirtualPolitician
Domingo García-Marzá and Patrici Calvo (2024) The Virtual Politician: On Algorithm-Based Political Decision-Making
Bibliographic info
⇒ Insert full bibliographic information here, formatted for a bibliography (your choice of style) #comment/Anderson : Missing bibliographic info
Commentary
The Virtual Politician describes the good and the bad of letting AI govern us. A weakness I see, however, is that the text only gives examples of AI acting as a virtual politician. AI algorithms also affect the democratic process in other ways and are far from neutral. The text briefly discusses the weaknesses of including AI in the democratic process, and one of the issues mentioned is bias. While bias would certainly be a problem if AI were to run in elections, it is already causing problems by influencing the democratic process in other ways, without any virtual politicians being involved. The text would have been stronger if it had addressed that aspect as well. #comment/Anderson : It would be interesting to have heard them discuss this, indeed. But there's only so much you can do in a chapter. Is there something about their analysis that depends on them suppressing this aspect? That would be more of a criticism of their argument.
Algorithms don’t absorb feelings
- Page 44:
On the other hand, algorithms participate in politics primarily by absorbing the voice, behaviour and opinions, but not the feelings, of the people, making fair, objective decisions based on the common good.
Comment:
I would argue that absorbing the voice, behaviour and opinions of people does not necessarily lead an algorithm to make fair and objective decisions. Many people hold opinions and beliefs that serve their individual interests rather than the common good. I would also add that feelings are not a bad thing. Although feelings can get in the way of rational thought, they are often the result of one's environment, and participating in politics is also about responding to your environment and seeing what change you would like to bring about. To feel something is not a bad thing.
Absorbing the voice of the citizens
- Page 52:
Supporters of algorithmic democracy therefore propose that technological criteria should replace ideological ones when electing political representatives. In this way, voters will not decide which virtual politician best represents their interests, but which of them is best able to “absorb the voice of citizens broadly and decide policy priorities through dialogue” (Johnston 2018).
Comment:
This echoes the idea behind the Habermas machine. “Absorbing the voice of the citizens broadly” does not equate to having a good democratic debate. It sounds as though the algorithm averages out the general opinion, resulting in a middle ground that avoids the more divisive topics.
Algorithmic responsibility
- Page 57:
As algorithms are mathematical models based on machine learning, which learn through analysis and trial-and-error processes, both the designers and those responsible for their application and use are exempt from any liability. Any responsibility rests solely upon the mathematical model, not the person who designed it nor the institution, organization or company that applied and used it.
Comment:
I disagree with the claim that neither the person who designed the model nor the institution, organization or company that applied and used it is responsible. When using such models, everyone in the pipeline should be aware of how the model is designed as well as how it is used or could possibly be used. The claim is like saying that a parent is not responsible for how a child behaves. Even if there were no direct input from the parents and we viewed the child as some kind of biocomputer, the child would still pick up things from its environment. I would argue that the parent is to some extent responsible for what the child experiences.