Porsdam2023Credit-BlameAsymmetry

Literature note on "Generative AI entails a credit–blame asymmetry"

bibliographic info

Porsdam Mann, S., Earp, B. D., Nyholm, S., Danaher, J., Møller, N., Bowman-Smart, H., Hatherley, J., Koplin, J., Plozza, M., Rodger, D., Treit, P. V., Renard, G., McMillan, J., & Savulescu, J. (2023). Generative AI entails a credit–blame asymmetry. Nature Machine Intelligence, 5(5), 472–475. https://doi.org/10.1038/s42256-023-00653-1

Commentary

The article by Porsdam Mann et al. (2023) discusses the notion of a "credit–blame asymmetry". The term is first defined in the context of humans, using traditional theories of responsibility. These theories roughly state that in order to receive credit for a positive outcome, one must show effort or sacrifice, while blame for a negative outcome can already follow from a mere lack of effort. The same pattern can be found with AI, and specifically with generative AI. Take ChatGPT as an example: a user who copies wrongful output from the LLM would be blamed for it, but someone who uses ChatGPT's output to reach a positive outcome would not receive a comparable amount of credit. The paper develops these concepts through definitions of authorship, requirements for the use of LLMs and other generative AI systems, and general recommendations.

The authors consider a future scenario in which the credit that human workers can obtain changes due to the increased use of generative AI tools in the workplace. That jobs are likely to change because of this new technology is not a new insight, but the way this affects the credit human workers receive is an interesting perspective to bring to the topic. For a person in a creative job, it could be harmful if their creativity is appreciated less because of strong competition from generative AI tools.

However, I think that with this view the authors do not consider the full range of scenarios that might come about once generative AI becomes more common in the workplace. As can already be seen, many artists use generative AI tools to produce new, creative forms of art. They are not praised any less for using these tools, but rather applauded for engaging creatively with new technology. It is reasonable to expect a similar outcome in the workplace, where workers are appreciated for using generative AI tools to their benefit. Since these tools can increase efficiency, help work out creative endeavours, or prompt new ideas, they could well help people in creative jobs receive more praise. This point of view is absent from the article, whose main focus is on jobs being replaced by generative AI tools. If these tools are seen more as an aid, the amount of praise a worker receives might not decrease. Despite missing this particular scenario, the article provides a good overview of the options within the scenario it sketches, and it includes some recommendations for policy makers.

Excerpts & Key Quotes

Brainstorm with generative AI

Page 2:

"Note: a brainstorming document for this Comment was produced using ChatGPT. The authors then drafted their own manuscript incorporating some of the themes from the synthetically produced material. "

Comment:

Since this text considers the use of generative AI and the way credit or blame can be ascribed to the users of such tools or to the tools themselves, it is interesting that the authors decided to use generative AI for their initial brainstorming session. It should be noted, however, that they did not use it to write the draft itself, but only to come up with some ideas.

Criteria for authorship

Page 3:

"The remaining criterion requires an author to be accountable for all aspects of the work and for any questions regarding its accuracy or integrity. LLMs cannot currently meet this criterion. Nature and Science prohibit assigning authorship to LLMs for this reason."

Comment:

In this section the authors briefly discuss the criteria they consider important for authorship. The fourth criterion requires an author who is accountable for the work and who can answer questions regarding its accuracy or integrity. It is clearly stated that LLMs cannot do this, but the statement is not supported by any arguments. Do the authors consider an AI-generated explanation of the accuracy or integrity of a statement untrustworthy by definition? There exist LLMs that are grounded in, for example, linked-data structures and can answer using only the facts contained in those structures. If such LLMs were used, would the authors not agree that this provides some assurance about the accuracy of the answers? And if not, what would they consider an acceptable way to meet this criterion? As it stands, the criterion seems defined in such a way that LLMs cannot meet it, while there might be ways in which LLMs could, to some extent, answer questions about the accuracy or integrity of their outputs, for example by providing a probability estimate for the accuracy of a response.

IP protection

Page 4:

"Whether humans generating output via LLMs can obtain IP protection depends on the extent to which they demonstrate labour or creativity"

Comment:

In this quote the authors state the requirement for human authors to obtain IP protection: such rights depend on the extent to which the author demonstrates labour or creativity. By defining it in this manner, however, they automatically exclude generative AI systems from ever receiving IP protection. Later in the text, they add the following:

“Where human input does not reach a threshold of significance, the work remains authorless”

This indicates that either a human functions as the author, or, if the human did not provide enough creativity or labour, the work has no author at all. The result is a situation in which a work would rather remain authorless than have a generative AI system assigned as its author. The question is whether this is actually preferable.

#comment/Anderson : Good question. What would the "costs" be of being unable to assign authorship or, more specifically, of characterizing text as having no author? Does it become something like tea leaves or coffee grounds (koffiedik), a natural occurrence to which people attach meaning, although none was intended? But is it really realistic to expect people to withhold the attribution of intentionality when something is so convincing?