
Deciphering language processing in the human brain through LLM representations


21, 2025
Mariano Schain, Software Engineer, and Ariel Goldstein, Visiting Researcher, Google Research

Large Language Models (LLMs) optimized to predict subsequent utterances and to adapt to tasks via contextual embeddings can process natural language at a level approaching human proficiency. This study shows that neural activity in the human brain aligns linearly with the internal contextual embeddings of speech and language within LLMs as they process everyday conversations.

This training allows LLMs to produce context-specific linguistic outputs drawn from real-world text corpora, effectively encoding the statistical structure of natural speech (sounds) and language (words) into a multidimensional embedding space. A few hundred milliseconds after word onset, as the listener begins to decode the meaning of the words, language embeddings predict cortical activity in Broca's area (located in the inferior frontal gyrus; IFG). These findings provide compelling new evidence for fundamental computational principles of pre-onset prediction, post-onset surprise, and embedding-based contextual representation shared by autoregressive LLMs and the human brain.
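The "linear alignment" the study describes is typically tested with an encoding model: a ridge regression that maps a word's contextual embedding to the recorded neural signal at a given electrode and time lag, evaluated by correlating predictions with held-out activity. The sketch below illustrates that analysis pattern on synthetic data; the array shapes, the regularization value, and the simulated signal are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

# Hedged sketch of a linear encoding model: predict neural activity
# (one value per word, e.g. one electrode at one time lag) from LLM
# contextual embeddings (one vector per word) via ridge regression.
# All data here are synthetic and purely illustrative.

rng = np.random.default_rng(0)

n_words, emb_dim = 500, 64                      # words in a conversation, embedding size
embeddings = rng.normal(size=(n_words, emb_dim))

# Simulate neural activity as a noisy linear readout of the embeddings,
# i.e. the hypothesis the encoding model is designed to test.
true_weights = rng.normal(size=emb_dim)
neural = embeddings @ true_weights + 0.5 * rng.normal(size=n_words)

# Split words into train/test sets.
train, test = slice(0, 400), slice(400, 500)
X, y = embeddings[train], neural[train]

# Closed-form ridge regression: w = (X^T X + lam I)^{-1} X^T y
lam = 1.0  # assumed regularization strength
w = np.linalg.solve(X.T @ X + lam * np.eye(emb_dim), X.T @ y)

# Encoding models are usually scored by the correlation between
# predicted and actual held-out activity.
pred = embeddings[test] @ w
r = np.corrcoef(pred, neural[test])[0, 1]
print(f"held-out correlation: {r:.2f}")
```

In practice this regression is fit separately per electrode and per time lag relative to word onset, which is how the study can localize pre-onset predictive signals and post-onset responses in areas such as the IFG.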

