CoeckelberghGunkel2024ChatGPTDeconstructing

M. Coeckelbergh and D. J. Gunkel

ChatGPT: deconstructing the debate and moving it forward

Bibliographic information

Coeckelbergh, M., Gunkel, D.J. ChatGPT: deconstructing the debate and moving it forward. AI
& Soc 39, 2221–2231 (2024). https://doi.org/10.1007/s00146-023-01710-4

Commentary

This dense paper presents several perspectives on ChatGPT, authorship, written text, and its meaning. In my opinion, it is sometimes hard to keep track of which side the authors are defending. #comment/Anderson [1] The inclusion of so many perspectives comes at the expense of clarifying each of them: at times, I felt that a statement could have been elaborated on further. Moreover, the authors tend to be repetitive in describing certain standpoints, though this may have been done on purpose to make the long text more comprehensible. Overall, the article presents a novel perspective on why ChatGPT as an author need not be a negative thing.

### “Writing, by contrast, is a derived image and mere appearance of the spoken word.”

I tend to disagree with this demeaning tone towards writing. The way I look at the act of writing is that more thought, effort, and intention go into written words than into spoken ones. The priority that logocentrism gives to the spoken word over the written word is something I struggle to wrap my head around. I consider written and spoken words to be two distinct approaches to language, without necessarily valuing one over the other. The paper states that "speech has a direct and intimate connection to the real", but I personally don't see how words lose value once they are written down. The argumentation for this perspective could have been developed further, in my opinion.

### “Meaning is not a matter of finding the source of meaning in the original thoughts or intentions of an author. Instead, meaning is produced, made in the process and in the performance.”

This statement, within the context of ChatGPT and authorship, does not consider the harmful consequences of involving AI, such as hallucinations. If ChatGPT produces statements that are untrue due to hallucinations, can one still find meaning? Or does the responsibility for recognizing the risk of hallucinations lie with the performance?

### “There is no absolute moral truth and no ultimate source of meaning that authorizes what comes to be said.”

This statement takes a very broad, philosophical approach to the subject. To me, it is unclear what it adds in the context of ChatGPT as a potential author. I can agree with the ethical pluralism in the sense that there can be multiple conflicting moral truths. However, in the discussion of including ChatGPT in the writing of scientific papers, for example, I don't think I agree with this statement. Scientific research relies on rules that ensure the objectivity of a study: facts or observations that can be verified through replicable methods. So, in my view, whether there is a source of meaning depends on the context.


  1. This is generally a difficult thing to track, but really important. Often philosophers will discuss so many sides of an argument that I have to use different colored highlighting to keep track of them. But there's a point to it: considering the opposing position is the best way to clarify your own. #comment/Anderson ↩︎