
New AI architecture delivers 100x faster reasoning than LLMs with just 1,000 training examples


Hierarchical Reasoning Models (HRM) tackle complex reasoning tasks while being smaller, faster, and more data-efficient than large AI models.

This effectively resets the L-module, preventing it from getting stuck (early convergence) and allowing the entire system to perform a long sequence of reasoning steps with a lean model architecture that doesn’t suffer from vanishing gradients.

[Figure: HRM (left) smoothly converges on the solution across computation cycles, avoiding the early convergence of RNNs (center) and the vanishing gradients of classic deep neural networks (right). Source: arXiv]

According to the paper, “This process allows the HRM to perform a sequence of distinct, stable, nested computations, where the H-module directs the overall problem-solving strategy and the L-module executes the intensive search or refinement required for each step.” This nested-loop design allows the model to reason deeply in its latent space without needing long chain-of-thought (CoT) prompts or huge amounts of data.

Instead of the serial, token-by-token generation of CoT, HRM’s parallel processing allows for what Wang estimates could be a “100x speedup in task completion time.” This means lower inference latency and the ability to run powerful reasoning on edge devices.
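To make the nested-loop structure concrete, here is a minimal PyTorch sketch of the idea: a slow high-level loop wrapping a fast low-level loop, with the low-level state reset at the start of every high-level cycle. This is an illustration under stated assumptions, not the paper’s implementation; the class and all names (HRMSketch, h_cell, l_cell, n_cycles, l_steps), and the choice of GRU cells as stand-ins for the H- and L-modules, are hypothetical.

```python
import torch
import torch.nn as nn

class HRMSketch(nn.Module):
    """Illustrative sketch of HRM-style nested-loop latent reasoning.

    Assumptions (not from the paper): GRU cells stand in for the
    H- and L-modules, and cycle/step counts are arbitrary.
    """

    def __init__(self, dim: int, n_cycles: int = 4, l_steps: int = 8):
        super().__init__()
        self.h_cell = nn.GRUCell(dim, dim)   # slow, high-level (H) module
        self.l_cell = nn.GRUCell(dim, dim)   # fast, low-level (L) module
        self.l_init = nn.Parameter(torch.zeros(dim))  # learned fresh L-state
        self.n_cycles = n_cycles
        self.l_steps = l_steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim) encoding of the problem input
        h_state = torch.zeros_like(x)
        for _ in range(self.n_cycles):        # high-level strategy loop
            # Reset the L-module each cycle so it cannot stay stuck
            # at a stale fixed point (the "early convergence" problem).
            l_state = self.l_init.unsqueeze(0).repeat(x.size(0), 1)
            for _ in range(self.l_steps):     # low-level refinement loop
                l_state = self.l_cell(x + h_state, l_state)
            # The H-module updates once per cycle from the L-module's result,
            # steering the next round of low-level search.
            h_state = self.h_cell(l_state, h_state)
        return h_state

# Usage: one forward pass performs n_cycles * l_steps latent reasoning
# steps, with no token-by-token CoT generation in between.
model = HRMSketch(dim=128)
out = model(torch.randn(2, 128))
print(out.shape)  # torch.Size([2, 128])
```

The key move mirrors the article’s description: re-initializing the L-module’s state at the top of every high-level cycle gives each cycle a fresh, stable low-level computation, while the H-module carries the overall problem-solving strategy across cycles.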

