Ajunwa2016HiringAlgorithm

Ifeoma Ajunwa, Sorelle Friedler, Carlos Scheidegger, Suresh Venkatasubramanian, "Hiring by algorithm: Predicting and preventing disparate impact"

Bibliographic info

Ajunwa, I., Friedler, S., Scheidegger, C. E., & Venkatasubramanian, S. (2016). Hiring by algorithm: predicting and preventing disparate impact. Available at SSRN.

Commentary

This article addresses the use of artificial intelligence systems to decide whether or not to hire someone. The authors note that discrimination arises from the training data and from the use of "sensitive attributes"[1]. Among other things, they propose removing these attributes from the database and making predictions without them. They also recommend improving transparency and explainability to prevent these situations from occurring. From my point of view, I agree that these systems have to be as transparent and explainable as possible, so that bias and discrimination can be detected and corrected, as the authors mention. However, although I understand their point about using these systems without sensitive attributes, I think they make this suggestion too lightly, without addressing its possible consequences. Sensitive attributes are often correlated with other attributes (which need not be sensitive themselves), so removing them is not sufficient: the system can find another way to establish those relationships and continue to produce biased outcomes. An example is the relationship between race and level of education. Removing the race attribute may not be enough if the system continues to establish relationships based on education level, as the sketch below illustrates.
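A minimal sketch of this proxy effect, using synthetic data and variable names (`race`, `education`) that are my own illustration rather than anything from the paper: the model never sees the sensitive attribute, yet the disparity persists because the proxy carries the same signal.

```python
# Minimal sketch (hypothetical synthetic data): removing the sensitive attribute
# "race" does not remove bias when a correlated proxy ("education") remains.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Sensitive attribute (0 = group A, 1 = group B), never shown to the model.
race = rng.integers(0, 2, n)

# Education is a proxy: correlated with race in this toy data.
education = rng.normal(loc=1.0 + 1.5 * race, scale=1.0, size=n)

# Historical hiring decisions were themselves biased against group A.
hired = (education + 0.5 * race + rng.normal(0, 1, n)) > 2.0

# Train WITHOUT the race column -- only the proxy feature is used.
X = education.reshape(-1, 1)
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

# The disparity survives even though race was removed from the inputs.
print("Predicted hire rate, group A:", pred[race == 0].mean())
print("Predicted hire rate, group B:", pred[race == 1].mean())
```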

Excerpts & Key Quotes

Role of humans in the system

"An important feature of algorithms is that they tend to obscure the role of the human, the final result is attributed solely to the machine."

Comment:

This passage is interesting because it opens up the debate about how much "culpability" the systems themselves bear for these biases. I agree with the authors that when a system is found to have biases, it is often treated as if the system itself were the bad actor that caused the discrimination. However, we cannot forget that it is ultimately humans who built that system, and there are by now enough examples to know that the available data, if not processed correctly, will almost certainly be biased in some way. Although, as I said, I agree with the authors and think this is a good point, the article lacks depth here. This would be a good moment to talk about traceability and responsibility. Beyond the fact that these systems can generate some kind of bias, in sensitive cases like this it is essential that they are transparent and explainable, and that some entity can take responsibility for the results. Although it is well known that assigning responsibility is not an easy task[2], I believe it is an issue that should be addressed before putting a system into use.

Avoiding biases by anticipating them

"Yet, try as they might to disavow their creation, the creators of algorithms owe a duty to create situations in which the good behavior of their algorithms are reasonably assured."

Comment:

This passage is interesting because it opens up the discussion of how one can try to avoid these biases. While I agree with the authors that systems should be thoroughly tested before they are put into use, I think they treat what that entails rather superficially. By this I mean that it is very difficult to predict and test every scenario in which a system might discriminate. In fact, one study revealed[3] that women were receiving fewer advertisements for STEM (science, technology, engineering, math) job opportunities even though the system was intended to be gender neutral. The reason is that younger women were more expensive to show ads to; in order to optimize costs, fewer ads were shown to women, and a bias was generated (a toy version of this effect is sketched below). This is a good example of the unpredictable situations in which systems can generate biases that go beyond the system itself. It is certainly something to be aware of, but in my view it is one of the risks associated with these systems and, to some extent, "unavoidable". However, I agree with the authors that for this very reason the systems should be subject to constant review and correction.
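A toy simulation of that mechanism, with numbers and distributions that are purely my own assumptions (not taken from the cited study): a cost-conscious advertiser with a fixed, gender-neutral bid ends up showing fewer ads to women simply because competing advertisers bid more for female audiences.

```python
# Minimal sketch (hypothetical numbers) of the effect described above: a
# gender-neutral job-ad campaign with a fixed bid wins fewer impressions
# for women because impressions for women are more expensive.
import random

random.seed(0)
OUR_BID = 0.15  # what the job-ad campaign is willing to pay per impression

def competing_bid(gender):
    """Assumed distribution of competing bids: higher on average for women."""
    mean = 0.20 if gender == "women" else 0.10
    return random.gauss(mean, 0.05)

shown = {"women": 0, "men": 0}
for _ in range(10_000):
    gender = random.choice(["women", "men"])
    if OUR_BID > competing_bid(gender):   # we win the auction only if we outbid
        shown[gender] += 1

print(shown)  # far fewer ads shown to women, without any explicit gender rule
```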

"Corporations and organizations bear a legal duty to correct algorithmic bias; and this duty is not mitigated by a lack of intent to discriminate or even a lack of awareness that an algorithm is producing biased results."

Comment:

This passage is interesting because it opens up the debate about the role companies should take in correcting these biases. On the one hand, I agree that once a bias is detected it should be corrected; however, deciding what counts as bias can be difficult to establish, because there is more than one way to measure fairness and to set the criteria for a fairness metric. It should also be noted that "guaranteeing" group fairness does not guarantee fairness at the individual level. For example, with two groups (A and B), one criterion could be that the system hires the same number of people from both groups, regardless of whether they are actually qualified. It might then happen that the people hired from group B are qualified while those hired from group A are not, and that qualified people from group B are left out just to maintain parity between the groups. In this case the outcome is fair at the group level but not at the individual level. The opposite case would be hiring only qualified people when all the qualified candidates happen to be from group B and none from group A; this would be fair at the individual level but not at the group level. With these cases I want to show that recognizing these biases is not so simple, and that companies will of course be able to find arguments to justify that their systems are free of bias. This does not change the fact that biases do exist, that they should continue to be studied, and that we should try to make the systems as fair as possible. The sketch below makes the two criteria concrete.
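A small worked example of the tension just described, on toy data I made up for illustration (it is not from the paper): one policy equalizes hires per group, the other hires exactly the qualified candidates, and neither satisfies both notions of fairness at once.

```python
# Minimal sketch (hypothetical toy data) contrasting the two criteria discussed
# above: equal hiring counts per group (a group-fairness criterion) versus
# "hire exactly the qualified people" (a crude individual-level criterion).
candidates = [
    # (group, qualified) -- in this toy data no one in group A is qualified
    ("A", False), ("A", False), ("A", False), ("A", False),
    ("B", True),  ("B", True),  ("B", True),  ("B", False),
]

def hire_equal_counts(cands, per_group=2):
    """Hire the same number from each group, regardless of qualification."""
    hired, counts = [], {"A": 0, "B": 0}
    for group, qualified in cands:
        if counts[group] < per_group:
            hired.append((group, qualified))
            counts[group] += 1
    return hired

def hire_only_qualified(cands):
    """Hire exactly the qualified candidates, ignoring group membership."""
    return [c for c in cands if c[1]]

for policy in (hire_equal_counts, hire_only_qualified):
    hired = policy(candidates)
    per_group = {g: sum(1 for h in hired if h[0] == g) for g in ("A", "B")}
    unqualified = sum(1 for h in hired if not h[1])
    print(policy.__name__, "-> hires per group:", per_group,
          "| unqualified hires:", unqualified)

# The first policy equalizes the groups but hires unqualified people from A;
# the second hires only qualified people, but all of them come from group B.
```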

discrimination
fairness
TestGorilla


  1. Ministerie van Justitie en Veiligheid, "Prohibition of discrimination," Government.nl, Oct. 05, 2016. https://www.government.nl/topics/discrimination/prohibition-of-discrimination ↩︎

  2. A. Matthias, "The responsibility gap: Ascribing responsibility for the actions of learning automata," Ethics and Information Technology, vol. 6, no. 3, pp. 175-183, 2004, doi: 10.1007/s10676-004-3422-1. ↩︎

  3. A. Lambrecht and C. Tucker, "Algorithmic bias? An Empirical Study of Apparent Gender-Based Discrimination in the Display of STEM Career Ads," Management Science, vol. 65, no. 7, pp. 2966-2981, Jul. 2019, doi: 10.1287/mnsc.2018.3093. ↩︎