CoeckelberghGunkel2023ChatGPT

#noteType/litnote

Mark Coeckelbergh and David J. Gunkel (2023) ChatGPT: deconstructing the debate and moving it forward

Bibliographic info

Coeckelbergh, M., & Gunkel, D. J. (2024). ChatGPT: deconstructing the debate and moving it forward. AI & Society, 39(5), 2221-2231.

Commentary

In this article, the authors take a philosophical approach to rising questions about Large Language Models (LLMs). What makes it interesting is that they approach these questions differently from the three most common stances on LLMs: the good (LLMs are a big step towards Artificial General Intelligence (AGI)), the bad (LLMs are dangerous), and the uninteresting (nothing LLMs do is new). The authors do not try to frame LLMs as something good or bad; instead, they develop a deeper philosophical stance that offers new perspectives.

The authors go back to one of the most influential philosophers of all time, Plato, and his framework of "appearance versus reality", and argue that we should stop approaching LLMs in this way if we want to move forward. They argue that this binary stance does not help us understand LLMs, while a more relational approach between humans and LLMs would.

The great strength of this paper is, in my opinion, that it approaches LLMs in a way I have not read about before. It proposes deeper, new ways to look at LLMs and their connection with humans. The questions raised about these models should not be approached in binary terms, since the reality is far too complex to be reduced to "simple" oppositions like real vs. fake or intelligent vs. not intelligent.

However, I think the paper's strength is also its weakness. It stays very abstract in these deeper philosophical arguments, and a lot of work remains to be done to get from these ideas to concrete, practical implications.

Excerpts & Key Quotes

Language also creates meaning

Second, language is seen in an anthropocentric and instrumental way: it is used by humans or by LLMs like ChatGPT but does not itself influence the outcome of the process and the humans (or the LLMs) have (or are assumed to have) control over language as authors.

Comment:

The authors name this stance because it is taken by almost everyone who writes about LLMs: namely, that language is just a tool that humans use. However, they provide a different way of viewing it, which I find very interesting. When you think about it, the text that an LLM generates is actually based on its training data: language itself. We could then see language not just as a tool, but as part of the authorship of meaning. This is another example of the authors broadening something binary into a more complex way of looking at a problem.

Absolute truth

An alternative—one that learns from and follows the example of feminist STS approaches—is to drop the Platonic appearance versus real distinction and see what is going on as one process and performance in which realities and meanings are produced and performed.

Comment:

I find this perspective quite interesting, because it challenges the usual idea that there is a fixed "real" behind what we see or read. We could view authorship not as a solely human trait, but see the production of meaning as a kind of cooperation between humans and LLMs. However, blurring the distinction between what is real and what is appearance could also create problems and potential harms. When a human and an LLM work together to create a text about how to take a certain medicine, it matters what is real and what is not. Somewhere in this process, we need accountability, reliability, and, in many practical cases, absolute truth.

Where is meaning located?

Second, once a written text is cut-loose from its anchoring authority in the figure of the (presumed) living/speaking author, the question of the significance of the written text and its truthfulness shifts from what the author seeks to say to what the reader discovers in the material of the text.

Comment:

This approach to thinking about authorship, meaning, and intention is really interesting to me. Instead of placing the creation of meaning in the hands of the author, we can place it in the hands of the person reading the text. Essentially, this already happens a lot: there can be a huge difference between the reader's interpretation and the writer's intention. If this already happens, why not cut the writer's intention out of the equation? In the context of LLMs, this means the person reading the output can decide whether the generated text is meaningful.