
Why do LLMs have emergent properties?


Large language models show unexpected "emergent" behaviors as they are scaled up. This should not be surprising. Here we give a possible explanation.

Large language models display emergent behaviors: when the parameter count is scaled past a certain value, the LLM suddenly becomes capable of a new task that was not possible at a smaller size. A simple analogy comes from polynomial regression. Given N data points, a polynomial basis of degree less than N-1 will, for most data sets (excluding "special" cases such as collinear points), leave a non-zero regression error, and correspondingly the accuracy stays at some finite value; only once the degree reaches N-1 does the fit suddenly become exact. Consider, for example, a task X = "Write a short story that resonates with the social mood of the present time and is a runaway hit" (and do the same thing again once a year based on new data, indefinitely into the future).
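The regression analogy above can be demonstrated numerically. The sketch below (with arbitrary random data points, chosen only for illustration) fits polynomials of increasing degree to N generic points and prints the squared fitting error: it stays non-zero at every degree below N-1, then collapses to (numerically) zero at degree N-1, the "emergence" threshold of this toy model.

```python
import numpy as np

# Toy illustration of the polynomial-regression analogy for emergence.
# The data here are arbitrary random values with no special structure.
rng = np.random.default_rng(0)
N = 6                        # number of data points
x = np.arange(N, dtype=float)
y = rng.normal(size=N)       # "generic" targets

for degree in range(N):
    coeffs = np.polyfit(x, y, degree)          # least-squares fit
    error = np.sum((np.polyval(coeffs, x) - y) ** 2)
    print(f"degree {degree}: squared error = {error:.6g}")

# The error is finite for every degree < N-1, then drops to ~0 at
# degree N-1, where the polynomial interpolates the points exactly.
```

The jump is abrupt rather than gradual: adding one more basis function takes the model from a finite error floor to an exact fit, mirroring how a capability can appear "suddenly" at a particular scale.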
