VealeBorgesius2021DemystifyingtheDraft

#digitalethics2022, #noteType/litnote

Michael Veale and Frederik Zuiderveen Borgesius, "Demystifying the Draft EU Artificial Intelligence Act"

Bibliographic info

Veale, M., & Zuiderveen Borgesius, F. (2021, July 6). Demystifying the Draft EU Artificial Intelligence Act. https://doi.org/10.9785/cri-2021-220402

Commentary

This is a commentary on the EU Artificial Intelligence Act, and that framing alone is worth noting. Commentaries aim to critique a given text, so the authors approach the Act with a deliberately critical mindset. Their goal is to point out weaknesses, and that is exactly what they do. This critical attitude is what makes a commentary interesting, including this one, but at the same time it may limit what we can take away about the original text.
This commentary in particular does a good job of identifying weaknesses in the Act: the authors work through the whole text and flag its shortcomings. For the most part they do this well, but their treatment is very brief and very technical, and as a result they sometimes seem to lack grounding for the claims they make.

Unacceptable Risks: The Harm Requirement

Manipulative AI systems appear permitted insofar as they are unlikely to cause an individual (not a collective) ‘harm’. This harm requirement entails a range of problematic loopholes. A cynic might feel the Commission is more interested in prohibitions’ rhetorical value than practical effect.

Comment:

This quotation supports the argument that the prohibitions in the Draft AI Act are not watertight. Manipulative AI systems remain permitted as long as they harm groups rather than individuals; think of racism, sexism, and similar collective harms. The quotation also notes that a cynic might argue the rules serve a merely rhetorical function rather than having practical effect. I think this is a genuinely valuable point. The rules the Act presents are obviously not completely watertight, as all these commentaries point out, but there is value in at least trying to create a system that addresses the concerns around a fast-growing and very prominent technology. The world is changing and AI will certainly be prominent in the future. Rhetorical value may not be a bad thing, even though practical value must always be the aim.

Emotion Recognition and Biometric Categorisation Disclosure

Either way, arguing the main issue with emotional or biometric categorisation is a lack of transparency risks legitimising a practice with little-to-no scientific basis and potentially unjust societal consequences.

Comment:

In this quotation, taken from a larger examination of specific transparency obligations, the authors look at emotion recognition and biometric categorisation. They express concern that these practices raise far more issues than transparency alone: the main problem is not the lack of transparency surrounding them but the practices themselves. Framing the issue as one of transparency may make the practice appear legitimate despite its having little or no scientific basis, with the result that such categorisation methods could have unjust consequences for society. This quotation highlights the importance of understanding a particular AI practice before arguing that a lack of transparency is the problem at hand.

Fundamental problem with the Act may lead to failure of a promising attempt

However, the Draft AI Act also has severe weaknesses. It is stitched together from 1980s product safety regulation, fundamental rights protection, surveillance and consumer protection law. We have sought to illustrate how this patchwork does not make the Draft AI Act comprehensive and watertight. Indeed, these pieces and their interaction may leave the instrument making little sense and impact.

Comment:

In this quotation, the authors state their central critique of the Draft AI Act. They acknowledge that while the Act attempts to address important issues such as product safety, fundamental rights protection, surveillance, and consumer protection, it is stitched together from regulations and bodies of law dating back to the 1980s. This very much limits how the Act approaches a complex and very new problem. The authors have argued throughout their commentary that this patchwork approach produces severe weaknesses in the Act: combining disparate elements from different areas of law may not effectively address the complexities of AI technology. They also express concern that the interaction between these pieces of legislation may be confusing or contradictory, ultimately rendering the instrument ineffective or of little impact.