Hierarchical Reasoning Model – SoTA reasoning from 1k training samples vs. CoT
Hierarchical Reasoning Model official release: the sapientinc/HRM repository on GitHub.
Current large language models (LLMs) primarily employ Chain-of-Thought (CoT) techniques, which suffer from brittle task decomposition, extensive data requirements, and high latency. The Hierarchical Reasoning Model (HRM) operates without pre-training or CoT data, yet achieves nearly perfect performance on challenging tasks including complex Sudoku puzzles and optimal pathfinding in large mazes. Furthermore, HRM outperforms much larger models with significantly longer context windows on the Abstraction and Reasoning Corpus (ARC), a key benchmark for measuring artificial general intelligence capabilities.
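The "hierarchical" structure the model's name refers to can be pictured as two coupled recurrent modules running at different timescales: a fast low-level module iterates several steps, then a slow high-level module updates once per cycle. The snippet below is a minimal numpy sketch of that control flow only; the dimensions, weight shapes, tanh cells, and step counts are illustrative assumptions, not taken from the HRM release.

```python
import numpy as np

# Hedged sketch of a two-timescale hierarchical recurrence.
# All names and sizes are illustrative, not from sapientinc/HRM.

rng = np.random.default_rng(0)
D = 16          # hidden size (illustrative)
T = 4           # low-level steps per high-level update (illustrative)
N_CYCLES = 3    # number of high-level cycles (illustrative)

# Random weights standing in for trained parameters.
W_L = rng.normal(0, 0.1, (D, 3 * D))   # low-level cell: [z_L, z_H, x] -> z_L
W_H = rng.normal(0, 0.1, (D, 2 * D))   # high-level cell: [z_H, z_L] -> z_H

def step(W, *states):
    """One recurrent update: concatenate inputs, apply a linear map + tanh."""
    return np.tanh(W @ np.concatenate(states))

x = rng.normal(size=D)                 # input embedding (illustrative)
z_L = np.zeros(D)                      # fast, low-level state
z_H = np.zeros(D)                      # slow, high-level state

for _ in range(N_CYCLES):
    for _ in range(T):                 # fast module iterates T times...
        z_L = step(W_L, z_L, z_H, x)
    z_H = step(W_H, z_H, z_L)          # ...then the slow module updates once

print(z_H.shape)  # (16,)
```

Because the high-level state changes only once per cycle, it conditions many low-level steps at a time, which is one way a small recurrent network can express multi-step reasoning without emitting an explicit chain of thought.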