ChatGPT is incredible (at being average) - Ethics and Information Technology


In this article, we examine a peculiar issue apropos large language models (LLMs) and generative AI more broadly: the frequently overlooked phenomenon of output homogenization. The term describes the tendency of chatbots to structure their outputs in a highly recognizable manner, one that often amounts to an aggregation of verbal, visual, and narrative clichés, trivialities, truisms, predictable argumentation, and the like. We argue that the most appropriate conceptual lens through which this phenomenon can be framed is that of Frankfurtian bullshit. In this respect, existing attempts at applying the BS framework to LLMs are insufficient, as they are chiefly framed in opposition to so-called algorithmic hallucinations. Here, we contend that a further conceptual rupture from Frankfurt's (1986) original metaphor is needed, distinguishing between the what-BS, which manifests in the falsehoods and factual inconsistencies of LLMs, and the how-BS, which reifies in the dynamics of output homogenization. We also discuss how the issues of algorithmic bias and model collapse can be framed as critical instances of the how-BS. The homogenization problem, then, is more significant than it initially appears, potentially exerting a powerful structuring effect on individuals, organizations, institutions, and society at large. We discuss this in the concluding section of the article.

Accordingly, by framing all LLM outputs as bullshit, we risk deemphasizing the importance of critical real-life examples of factually inconsistent synthetic texts, such as models generating false court citations (Milmo, 2023) or non-existent academic references (Gravel et al., 2023). In other words, what is the effect of overrepresentation in the training set not only on the manifest semantic content of the text but, first and foremost, on the more nuanced and subtle modes of linguistic structuring that can be tentatively described as the "feel," "tone," "register," "writing style," or "manner of speaking" of the chatbot? In a world where reactionary politics are on the rise, and the discourses of those in power are frequently centered on AI as an instrument for maximizing efficiency, exerting control, transcending human "irrationality," tempering our "flaws" and biases, and the like (McQuillan, 2022), these are good questions to ask oneself.
