
Hierarchical Reasoning Model – SoTA reasoning with 1k training samples vs. CoT


Official release of the Hierarchical Reasoning Model: sapientinc/HRM on GitHub.

Current large language models (LLMs) primarily rely on Chain-of-Thought (CoT) techniques, which suffer from brittle task decomposition, extensive data requirements, and high latency. The Hierarchical Reasoning Model (HRM) takes a different approach: trained from scratch on roughly 1,000 examples, it operates without pre-training or CoT data, yet achieves nearly perfect performance on challenging tasks including complex Sudoku puzzles and optimal pathfinding in large mazes. Furthermore, HRM outperforms much larger models with significantly longer context windows on the Abstraction and Reasoning Corpus (ARC), a key benchmark for measuring artificial general intelligence capabilities.
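The summary above does not spell out the architecture, but the name points to a two-timescale recurrent design: a slow high-level module that maintains an abstract plan, and a fast low-level module that does detailed computation under that plan, iterated within a single forward pass. The sketch below is a minimal illustration of that idea only, not the official sapientinc/HRM code; the GRU cells, dimensions, and loop counts are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class TwoTimescaleReasoner(nn.Module):
    """Illustrative two-timescale recurrent loop, NOT the official HRM code.

    A fast low-level cell runs several steps per cycle, conditioned on the
    state of a slow high-level cell, which then updates once per cycle.
    Module choices and sizes are assumptions for illustration.
    """

    def __init__(self, input_dim=32, hidden_dim=64, low_steps=4, cycles=8):
        super().__init__()
        self.low_steps = low_steps
        self.cycles = cycles
        # GRU cells stand in for the recurrent modules (hypothetical choice).
        self.low = nn.GRUCell(input_dim + hidden_dim, hidden_dim)
        self.high = nn.GRUCell(hidden_dim, hidden_dim)
        self.readout = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        z_low = x.new_zeros(x.size(0), self.low.hidden_size)
        z_high = x.new_zeros(x.size(0), self.high.hidden_size)
        for _ in range(self.cycles):           # slow timescale: abstract plan
            for _ in range(self.low_steps):    # fast timescale: detailed work
                z_low = self.low(torch.cat([x, z_high], dim=-1), z_low)
            z_high = self.high(z_low, z_high)  # one high-level update per cycle
        return self.readout(z_high)

model = TwoTimescaleReasoner()
print(model(torch.randn(2, 32)).shape)  # torch.Size([2, 32])
```

The nesting is the point of the design: each high-level update is informed by several low-level computation steps, so depth of reasoning comes from iteration in the loop rather than from generating long CoT token sequences.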


Read more on: SoTA, reasoning model, CoT

Related news:

Extend (YC W23) is hiring engineers to build SOTA document processing

Mistral's first reasoning model, Magistral, launches with large and small Apache 2.0 versions

Magistral – the first reasoning model by Mistral AI