Awad2018MoralMachine-B
There are multiple notes about this text. See also: Awad2018MoralMachine-A
Awad, E. - The moral machine experiment
Bibliographic info
Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., ... & Rahwan, I. (2018). The moral machine experiment. Nature, 563(7729), 59-64.
Commentary
This article describes a unique dataset generated by users worldwide who were asked to make moral choices for self-driving cars. On the Moral Machine website, you are shown a car carrying people and/or animals approaching a barrier: either the car crashes into the barrier and its occupants do not survive, or it swerves into a group of pedestrians and/or animals, in which case the occupants survive but those who are hit do not. I agree that the size of the dataset, some 40 million decisions, is exceptional and can paint a picture of moral values worldwide, but I still have some doubts. Firstly, there is a strong self-selection bias: you will only make these choices if you have an interest in the topic, for example because you own a self-driving car. In addition, gamification is a problem that should not be underestimated. The moral dilemma is presented to the participant playfully, sometimes with absurd situations such as dogs driving the car, and I think this does not reach the seriousness these moral choices deserve. I am also concerned about the extent to which policymakers take the Moral Machine seriously. These concerns, and more that I have, are well known, but above all it is the sheer size of the dataset that overshadows them. I am afraid policymakers will always keep these results in mind and therefore base policy on them, while I do not think the results are sound, precisely because of the gamification. Moral choices for self-driving cars are always tricky, but we should not let a game dictate them.
Excerpts & Key Quotes
Most notable effects: gender and religiosity
- Page 60:
the most notable effects are driven by gender and religiosity of respondents. For example, male respondents are 0.06% less inclined to spare females, whereas one increase in standard deviation of religiosity of the respondent is associated with 0.09% more inclination to spare human
Comment:
Although the effects are small percentages, with 40 million respondents even such small effects are estimated reliably, so I think the results are statistically solid (see the rough sketch after this comment block). I think the gender distinction is very stupid anyway, but again I am afraid that policymakers will still do something with it. I also do not see any practical implementation: it would require computer vision to look at who is walking by and identify that person's age and gender, which seems impossible to me.
I believe that, at a higher level, there should be no distinction in who does or does not get saved based on who they are. Instead, we could decide that whoever chooses to get into a self-driving car bears the responsibility and therefore always ends up being the victim, because the pedestrian never chose for you to be in that car.
#comment/Anderson : not sure I understand this.
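To check the "small effect, huge sample" intuition, a back-of-the-envelope sketch (my own simplification, not an analysis from the paper): if the quoted 0.06% gender effect is treated as a difference between two proportions estimated from roughly 40 million decisions, it still ends up several standard errors away from zero. The 50% baseline sparing probability and the two-proportion approximation are assumptions of mine; the paper itself uses conjoint analysis.

```python
import math

# Back-of-the-envelope check (my own, not from the paper): with ~40 million
# recorded decisions, even a tiny shift in sparing probability is estimated
# very precisely, which is why "small but reliable" effects are expected.
n_decisions = 40_000_000   # approximate size of the Moral Machine dataset
p_baseline = 0.5           # assumed baseline probability of sparing a character
effect = 0.0006            # the quoted 0.06 percentage-point gender effect

# Standard error of a difference between two proportions, each estimated from
# roughly half of the decisions (a deliberate simplification).
se = math.sqrt(2 * p_baseline * (1 - p_baseline) / (n_decisions / 2))
print(f"standard error ~ {se:.6f}; effect is ~{effect / se:.1f} standard errors from zero")
```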
Policymakers
- Page 61:
Policymakers should be, if not responsive, at least cognizant of moral preferences in the countries in which they design artificial intelligence systems and policies. Whereas the ethical preferences of the public should not necessarily be the primary arbiter of ethical policy, the people’s willingness to buy autonomous vehicles and tolerate them on the roads will depend on the palatability of the ethical rules that are adopted.
Comment:
Two points are raised here that I disagree with. Firstly, the involvement of policymakers with the ethical outcomes of the Moral Machine, as mentioned in my general commentary.
Secondly, the idea that national borders mark different ethical views, mainly because this is practically unworkable. Would the settings of your self-driving car change the moment you cross the border into France? The location labelling itself can also be questioned.
I do agree with the clusters of ethical differences between countries/regions/cultures; I think there is a genuine difference there.
Macro-economic variables and ethics
- Page 63:
The fact that the cross-societal variation we observed aligns with previously established cultural clusters, as well as the fact that macro-economic variables are predictive of Moral Machine responses, are good signals about the reliability of our data, as is our post-stratification analysis
Comment:
I find it difficult to connect macro-economic variables to this research. Living in an economically weak country is a significant variable that is often intangible and beyond an individual's control; it feels unfair to tie a country's ethical principles to it. And what if a country experiences an economic boom? Would its ethical views shift along with it? It does not feel right to associate something very objective, such as a country's financial status, with something subjective such as an ethical view.