
Scaling up test-time compute with latent reasoning: A recurrent depth approach

By Jonas Geiping and 8 other authors

We study a novel language model architecture that is capable of scaling test-time computation by implicitly reasoning in latent space. Our model works by iterating a recurrent block, thereby unrolling to arbitrary depth at test-time. This stands in contrast to mainstream reasoning models that scale up compute by producing more tokens. Unlike approaches based on chain-of-thought, our approach does not require any specialized training data, can work with small context windows, and can capture types of reasoning that are not easily represented in words. We scale a proof-of-concept model to 3.5 billion parameters and 800 billion tokens. We show that the resulting model can improve its performance on reasoning benchmarks, sometimes dramatically, up to a computation load equivalent to 50 billion parameters.
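To make the idea concrete, here is a minimal sketch of a recurrent-depth model as described in the abstract: a shared block is iterated in latent space, and the number of iterations can be raised at inference time to spend more compute without emitting extra tokens. This is an illustrative assumption-laden example, not the authors' implementation; the class and parameter names (RecurrentDepthLM, num_recurrences), the use of a single TransformerEncoderLayer as the recurrent block, the random latent initialization, and all hyperparameters are placeholders.

```python
# Hedged sketch of recurrent-depth latent reasoning (assumptions, not the paper's code).
import torch
import torch.nn as nn

class RecurrentDepthLM(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, n_heads=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)       # tokens -> latent space
        self.recurrent_block = nn.TransformerEncoderLayer(   # shared block, iterated at test time
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.norm = nn.LayerNorm(d_model)
        self.lm_head = nn.Linear(d_model, vocab_size)        # latent state -> token logits

    def forward(self, input_ids, num_recurrences: int = 4):
        x = self.embed(input_ids)
        state = torch.randn_like(x) * 0.02                   # latent state refined by iteration (assumed init)
        for _ in range(num_recurrences):                      # more iterations = more test-time compute
            state = self.recurrent_block(state + x)          # re-inject token embeddings each step
        return self.lm_head(self.norm(state))

# Usage: the same weights can be unrolled to different depths at inference time.
model = RecurrentDepthLM()
tokens = torch.randint(0, 32000, (1, 16))
with torch.no_grad():
    logits_shallow = model(tokens, num_recurrences=2)
    logits_deep = model(tokens, num_recurrences=32)           # deeper unrolling, same parameter count
```

The point of the sketch is the control knob: unlike chain-of-thought scaling, the extra computation happens inside the latent loop rather than in generated tokens, so context length stays fixed while depth grows.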

