Sina Fazelpour and David Danks (2021) Algorithmic bias: Senses, sources, solutions

Bibliographic info

Fazelpour, S., & Danks, D. (2021). Algorithmic bias: Senses, sources, solutions. Philosophy Compass, 16(8), e12760.

Commentary

The text gives a good overview of what algorithmic bias is and how it can arise in various ways. However, it does not really offer a clear solution: the solutions section mostly describes why certain approaches do not work. The solution the authors do give (fix the underlying real-world problem) seems a bit short-sighted. The paper also does not go in depth on the debate about what counts as bias or discrimination. Overall, the authors do not elaborate much on the positive side of using algorithms, while still claiming that we should not stop using algorithms altogether.

Normative standards

An algorithm can be morally, statistically, or socially biased (or other), depending on the normative standard used.

Comment:

I think this quote is nice, especially because of the second part of the sentence. I wish the authors had elaborated more on this. Still, the key point stands: whether something counts as bias depends on the normative standard used, and that is an important thing to be aware of.
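To make this concrete for myself, here is a minimal sketch (hypothetical numbers, not from the paper) of how the same predictions can count as unbiased under one statistical standard and biased under another:

```python
# Sketch: the "bias" verdict depends on the normative standard chosen.
# Toy labels/predictions for two groups; all numbers are made up.
import numpy as np

y_a     = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])  # group A, base rate 20%
y_hat_a = np.array([1, 0, 1, 0, 0, 0, 0, 0, 0, 0])
y_b     = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])  # group B, base rate 60%
y_hat_b = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])

def positive_rate(y_hat):
    return y_hat.mean()                 # demographic parity compares these

def true_positive_rate(y, y_hat):
    return y_hat[y == 1].mean()         # equal opportunity compares these

print("Positive rate  A:", positive_rate(y_hat_a), " B:", positive_rate(y_hat_b))
print("TPR            A:", true_positive_rate(y_a, y_hat_a),
      " B:", true_positive_rate(y_b, y_hat_b))
# Both groups receive positive predictions at the same rate (demographic
# parity holds), yet group B's actual positives are detected far less often
# (equal opportunity is violated): "biased" or not depends on the standard.
```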

Fixing real-world problems

In particular, when biases are deeply entrenched in a particular organizational or societal setting, then the proper response should often be to try to fix the underlying real‐world problem (Antony, 2016; Barabas et al., 2018; Mayson, 2018), rather than turning to the technical methods of fair ML.

Comment:

I think this is a bit short-sighted. Although fixing the underlying real-world problem is very important, algorithms are going to be created nonetheless, and this gives no guidance on how to navigate that. While I agree that the technical methods of fair ML are not the only solution, they do give a measurable baseline of what fair ML should look like. Critical reasoning is still needed to employ these methods, and further assessment is probably also necessary in other parts of the process.

Value-laden vs. Value-free

Algorithms are not objective, but rather embody the value‐laden view that some performance is better or more important than others

Comment:

This is a very extensive debate captured in one sentence, and I very much agree with it. Algorithms cannot be truly objective when both their creation and their use are carried out by subjective beings.

#noteType/litnote

Sina Fazelpour and David Danks (2021) Algorithmic bias: Senses, sources, solutions

Bibliographic info

Fazelpour, S., & Danks, D. (2021). Algorithmic bias: Senses, sources, solutions. Philosophy Compass, 16(8), e12760.

Maheshwari, G., Bellet, A., Denis, P., & Keller, M. (2023). Fair without leveling down: A new intersectional fairness definition. arXiv preprint arXiv:2305.12495.

Commentary

The aim of this article is to provide a thorough analysis of algorithmic bias in machine learning. It does this by naming sources of bias, reporting potential dangers and providing some solutions. The paper explains that bias can enter a model at different stages of the algorithmic pipeline: during problem specification, data collection, modelling and deployment. However, the authors specifically argue that bias in machine learning is not just a technical problem, but also an ethical one, and that the help of philosophers is therefore necessary to move forward. This is because value judgements are required at every step of creating a model. For example, choosing the target variables and the performance metrics involves human judgements that can shape model outcomes.
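A minimal sketch of that last point, using invented numbers rather than anything from the paper: the developer's choice of performance metric already decides which of two candidate models counts as "better".

```python
# Sketch: metric choice is itself a value judgement (hypothetical data).
import numpy as np

y_true  = np.array([1]*10 + [0]*90)          # 10% positives (e.g. people in need)
model_a = np.zeros(100, dtype=int)           # model A: always predicts negative
model_b = np.array([1]*8 + [0]*2 + [1]*10 + [0]*80)  # model B: catches most positives

def accuracy(y, y_hat):
    return (y == y_hat).mean()

def recall(y, y_hat):
    return y_hat[y == 1].mean()

for name, y_hat in [("A", model_a), ("B", model_b)]:
    print(name, "accuracy:", accuracy(y_true, y_hat), "recall:", recall(y_true, y_hat))
# Accuracy favours model A (0.90 vs 0.88); recall favours model B (0.80 vs 0.00).
# Which model is "better" is decided by the chosen metric, not by the data alone.
```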

While the article is quite critical of machine learning and the deployment of models, it is not completely against them. The authors highlight that the best response to bias and unequal outcomes is not to stop using machine learning altogether, but to find ways to make these models fairer.

I think that the paper provides a great overview of algorithmic bias, and it argues convincingly why philosophy and ethics are highly important in the field of AI: they can help the field by clarifying what bias is, how to identify it, and how to respond to it. However, in my opinion the paper neglects the aspect of explainability in AI. Explainability is essential to understanding how a model behaves and shows us how biased behaviour arises. To me, this is a weakness of the paper.

Excerpts & Key Quotes

Representing society vs. achieving fairness

More generally, specificity about the normative standard enables more nuanced evaluations: not all statistically (or legally) biased behaviors are ethically or morally problematic, while not all statistically fair or unbiased predictions are ethically or morally acceptable.

Comment:

The point the authors make here is very significant. Models often behave in a biased way because they were trained on biased data, and this data is itself biased because it is created by humans. Humans and our society are biased in many ways, and not all bias is harmful or unwanted solely because it is bias. In each situation, therefore, we need to decide whether we want a model to represent society or to be completely fair. Creating these models requires many decisions at many different steps, which is very much in line with the spirit of this paper.

Performance metrics

The algorithm developer's selection of a particular performance metric is thus key in determining these implemented values. This issue clearly connects with debates in the philosophy of science about the value‐ladenness of science.

Comment:

It is interesting and essential that the authors mention the value-ladenness of choosing which performance metrics or fairness methods to use when creating a model. Not only is this a subjective choice placed in the hands of a human, but performance metrics can also create a false impression of how the model performs: specific fairness metrics could be hand-picked to, for example, make the model appear fairer than it really is. Furthermore, new performance metrics are still being developed to capture a deeper understanding of a model's behaviour. For example, Maheshwari et al. (2023) proposed a novel way of comparing a model's performance across subgroups without promoting a "levelling down" effect, which happens when the performance of the best group is dramatically lowered solely to make the model more fair.
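A minimal sketch of the levelling-down worry, with hypothetical accuracies rather than numbers from Maheshwari et al.: a model can look fairer on a gap metric purely because the best-off group was dragged down.

```python
# Sketch: "levelling down" (invented subgroup accuracies).
subgroup_accuracy = {
    "model_1": {"group_A": 0.92, "group_B": 0.70},  # accurate but unequal
    "model_2": {"group_A": 0.72, "group_B": 0.70},  # smaller gap, A levelled down
}

for name, acc in subgroup_accuracy.items():
    gap = abs(acc["group_A"] - acc["group_B"])
    print(f"{name}: accuracies={acc}, gap={gap:.2f}")
# Model 2 minimizes the accuracy gap (0.02 vs 0.22), yet group B is no better
# off and group A is much worse off: hand-picking the gap metric alone would
# make model 2 look like the fairer choice.
```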

Explainability

Nonetheless, we suggest that the proper response is to find ways to use algorithms to reduce inequities and injustices, not to stop using computational algorithms altogether.

Comment:

While the authors are very critical of the use of algorithms, I think it is important that they clarify their standing on the use of computational algorithms. In the article, while they point out many ways in which bias can be introduced into a model, they leave out another important factor in countering bias and unwanted behaviour: explainability. Using models that are explainable in and of themselves, or employing methods that help us understand models that are more like a black box, is in my opinion essential to this as well.
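To illustrate what I mean by an explainability check (my own sketch, not something from the paper; synthetic data and scikit-learn assumed): inspecting an interpretable model can surface whether it leans on a sensitive feature that biased historical labels rewarded.

```python
# Sketch: a simple explainability check on synthetic, biased training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
sensitive = rng.integers(0, 2, n)            # a protected attribute (hypothetical)
merit = rng.normal(0, 1, n)                  # a legitimate predictor
# Historical labels that were partly decided by group membership (biased data).
y = (merit + 1.5 * sensitive + rng.normal(0, 1, n) > 1.0).astype(int)

X = np.column_stack([merit, sensitive])
model = LogisticRegression().fit(X, y)
for name, coef in zip(["merit", "sensitive"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
# A large coefficient on the sensitive feature is an explainable red flag that
# the model is reproducing the bias baked into the historical labels.
```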