Smuha2021SocietalHarm

Nathalie A. Smuha, "Beyond the individual: governing AI's societal harm"

Bibliographic info

Smuha, N. A. (2021). Beyond the individual: governing AI’s societal harm.
Internet Policy Review, 10(3). https://doi.org/10.14763/2021.3.1574

Commentary

⇒ In general, what about this text makes it particularly interesting or thought-provoking? What do you see as weaknesses?

Smuha presents an easily overlooked dimension of harm within AI regulation, namely societal harm. She gives examples illustrating that legal mechanisms aimed at addressing individual and collective harm are insufficient to address societal harm.
Using environmental law as a case study, Smuha identifies three main mechanisms to counter societal harm: public oversight, public monitoring, and procedural rights.
A key takeaway is that legal mechanisms change once the societal dimension is taken into account, shifting the burden from the individual towards supervisory authorities. Finally, Smuha highlights areas where the AI Act falls short in providing legal mechanisms against societal harms.

There could have been more focus on uniquely AI-driven societal harms, on concrete examples of how the proposed mechanisms would work in practice, and on the cost of implementing the enforcement mechanisms.

Excerpts & Key Quotes

Challenge: Identifying the Causes of AI Societal Harm

"Fourth, societal harm typically does not arise from a single occurrence of the problematic AI practice. Instead, it is often the widespread, repetitive or accumulative character of the practice that can render it harmful from a societal perspective (Kernohan, 1993).
This raises further obstacles. Thus, it may be challenging to instil a sense of responsibility in those who instigate a certain practice, if it is only the accumulative effect of their action together with the action of other actors that causes the harm (Kutz, 2000).
Moreover, this phenomenon also gives rise to the difficulty of the many hands-problem (Thompson, 1980; van de Poel et al., 2012; Yeung, 2019a). Besides the accumulative effects raised by a certain type of conduct, also the opacity of different types of (interacting) conduct by many hands can contribute to the harm and to the difficulty of identifying and addressing it (Nissenbaum, 1996).
(...)
A broadened perspective of AI-enabled harm is hence not only required at the suffering side, but also at the causing side of the harm."

Comment:

This quote highlights the often cumulative character of societal harm and the many-hands problem, both of which pose obstacles to creating accountability mechanisms.

Societal Impact Assessment of AI Systems

"An additional element to consider concerns the introduction of mandatory ex ante impact assessments —modelled to the existing environmental impact assessments or data protection impact assessments—for AI applications that can affect societal interests.
Importantly, this assessment should not only focus on the impact of the AI system’s use on human rights, but also on broader societal interests such as democracy and the rule of law (CAHAI, 2020).
AI developers and deployers would hence need to anticipate the potential societal harm their systems can generate as well as rationalise and justify their system’s design choices in light of those risks.
For transparency and accountability purposes, the impact assessments should be rendered public."

Comment:

Drawing a lesson from existing frameworks, namely environmental impact assessments, Smuha highlights the importance of including a societal dimension in AI impact assessments, which I believe is an important step towards shared ethical reflection and responsibility. However, one could question the effectiveness of such a requirement, since the societal impact of a given AI model is very difficult to measure or anticipate.

Lack of Procedural Rights for Societal Dimension

"Third, as regards the introduction of procedural rights with a societal dimension, the proposed regulation is entirely silent.
In fact, the drafters of the proposal seem to have been very careful not to establish any new rights in the regulation, and only impose obligations.
This means that, if an individual wishes to challenge the deployment of an AI system that breaches the requirements of the regulation or adversely affects a societal interest, she will still need to prove individual harm.
As already mentioned above, she will also not be able to lodge a complaint with the national supervisory authority to investigate the potentially problematic practice—unless the national authority decides to provide this possibility on its own motion"

Comment:

This quote presents a limitation of the AI Act, namely that it does not provide a direct complaint mechanism for individuals on the grounds of infringement of a societal interest, since no new societal rights are introduced in the legislation.

The AI Act's Exclusive Focus on Individual Harm

"Finally, and more generally, it is clear that the proposal remains imbued by concerns relating almost exclusively to individual harm, and seems to overlook the need for protection against AI’s societal harms.
This manifests itself in at least four ways: (1) the prohibition of certain manipulative AI practices in Title II are carefully delineated so as to require individual harm—and in particular, “physical or psychological harm”—thus excluding the prohibition of AI systems that cause societal harm;
(2) as regards the high-risk systems of Title III, their risk is only assessed by looking at the system in an isolated manner, rather than taking into account the cumulative or long term effects that its widespread, repetitive or systemic use can have;
(3) the Commission can only add high-risk AI systems to the list of Annex III in case these systems pose “a risk of harm to the health and safety, or a risk of adverse impact on fundamental rights”, thus excluding risks to societal interests;
(4) certain AI practices that are highly problematic from a societal point of view—such as AI-enabled emotion recognition or the AI-enabled biometric categorisation of persons—are currently not even considered as high-risk."

Comment:

This quote summarizes four ways in which the AI Act is focused on protection against individual harm and overlooks protection against societal harm.