Floridi2018AI4People

Floridi et al. "AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations."

Bibliographic info

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707.

Commentary

The article introduces a framework for a "Good AI society". It does so by first outlining the core opportunities and risks of AI for society, after which it explicates the ethical principles by which a good AI society should abide. In this sense, it takes a more practical, societal approach to the ethics discussion within AI. I believe this is a valid and important conversation to have, since it addresses the applied side of AI ethics, which is necessary to make the notion of ethical AI concrete in society.

Excerpts & Key Quotes

Enabling Human Self-Realisation, Without Devaluing Human Abilities

Furthermore, at the level of society, the deskilling in sensitive, skill-intensive domains, such as health care diagnosis or aviation, may create dangerous vulnerabilities in the event of AI malfunction or an adversarial attack. Fostering the development of AI in support of new abilities and skills, while anticipating and mitigating its impact on old ones will require both close study and potentially radical ideas, such as the proposal for some form of “universal basic income”, which is growing in popularity and experimental use.

Comment:

The point made in this excerpt is that while the process of human self-realisation, enabled by AI taking over less fulfilling tasks, should be applauded, we must not lose the ability to perform such tasks ourselves. Even though AI would take a lot of work out of people's hands, it is important to keep in mind that we should not rely fully on machines for everything. Especially with respect to autonomy, I personally feel it is important to preserve basic abilities in critical fields such as healthcare, in case we are faced with a malfunctioning system.

The power to decide

With AI, the situation becomes rather more complex: when we adopt AI and its smart agency, we willingly cede some of our decision-making power to machines. Thus, affirming the principle of autonomy in the context of AI means striking a balance between the decision-making power we retain for ourselves and that which we delegate to artificial agents.

Comment:

In line with the first excerpt, this quotation focuses on ceding decision-making power to machines. Doing so may reduce a person's autonomy, and the authors argue that a balance should be struck between the power we retain as humans and the amount we delegate to AI. Personally, I think this is one of the most important aspects to keep in mind when automating decisions that may have an impact on human life. To what extent is it actually desirable to cede such important decisions to AI? And do we not surrender part of what it means to be an autonomous human when we give such decisions out of our hands?

Making explainability central

The addition of this principle, which we synthesise as “explicability” both in the epistemological sense of “intelligibility” (as an answer to the question “how does it work?”) and in the ethical sense of “accountability” (as an answer to the question: “who is responsible for the way it works?”), is therefore the crucial missing piece of the jigsaw when we seek to apply the framework of bioethics to the ethics of AI. It complements the other four principles: for AI to be beneficent and non-maleficent, we must be able to understand the good or harm it is actually doing to society, and in which ways; for AI to promote and not constrain human autonomy, our “decision about who should decide” must be informed by knowledge of how AI would act instead of us; and for AI to be just, we must ensure that the technology—or, more accurately, the people and organisations developing and deploying it—are held accountable in the event of a negative outcome, which would require in turn some understanding of why this outcome arose. More broadly, we must negotiate the terms of the relationship between ourselves and this transformative technology, on grounds that are readily understandable to the proverbial person “on the street”.

Comment:

The authors use the concept of explainability to underline and strengthen their line of reasoning. They explain that designing an ethical framework by which AI should abide depends on understanding how the AI works, how its decisions are made, and what the underlying reasoning behind those decisions is. This touches upon one of the most important parts of designing ethical AI, and I completely agree with the authors that explainability should be the baseline when designing such a framework. After all, if we do not understand the inner workings of the AI we are trying to control and make behave ethically, we will never be able to do so in a satisfying way.