LLMorphism: When humans come to see themselves as language models


LLMorphism is the biased belief that human cognition works like a large language model. I argue that the rise of conversational LLMs may make this bias increasingly psychologically available. When artificial systems produce human-like language, people may draw a reverse inference: if LLMs can speak like humans, perhaps humans think like LLMs. This inference is biased because similarity at the level of linguistic output does not imply similarity in cognitive architecture. Yet LLMorphism may spread through two mechanisms: analogical transfer, whereby features of LLMs are projected onto humans, and metaphorical availability, whereby LLM terminology becomes a culturally salient vocabulary for describing thought. I distinguish LLMorphism from mechanomorphism, anthropomorphism, computationalism, dehumanization, objectification, and predictive-processing theories of mind. I outline its implications for work, education, responsibility, healthcare, communication, creativity, and human dignity, while also discussing boundary conditions and forms of resistance. I conclude that the public debate may be missing half of the problem: the issue is not only whether we are attributing too much mind to machines, but also whether we are beginning to attribute too little mind to humans.
