M1: Towards Scalable Test-Time Compute with Mamba Reasoning Models

By Junxiong Wang and 5 other authors


Effective reasoning is crucial to solving complex mathematical problems. Recent large language models (LLMs) have boosted performance by scaling test-time computation through long chain-of-thought reasoning. However, transformer-based models are inherently limited in extending context length due to their quadratic computational complexity and linear memory requirements. In this paper, we introduce M1, a novel hybrid linear RNN reasoning model built on the Mamba architecture, which allows memory-efficient inference. Our approach distills from existing reasoning models and is further enhanced through RL training. Experimental results on the AIME and MATH benchmarks show that M1 not only outperforms previous linear RNN models but also matches the performance of state-of-the-art DeepSeek R1 distilled reasoning models at a similar scale. We also benchmark generation speed against a same-size transformer served with vLLM, a high-performance general-purpose inference engine, and observe more than a 3x speedup. This throughput advantage lets M1 reach higher accuracy than DeepSeek R1 distilled transformer reasoning models under a fixed generation-time budget using self-consistency voting. Overall, we introduce a hybrid Mamba reasoning model and provide a more effective approach to scaling test-time generation via self-consistency or long chain-of-thought reasoning.
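The memory claim follows from the state-space recurrence at the heart of Mamba-style layers: decoding carries a fixed-size hidden state forward instead of an ever-growing key-value cache, so per-token memory stays constant as the chain of thought lengthens. A toy sketch of a diagonal linear recurrence in Python (illustrative only, not the paper's actual kernel; the function and variable names are hypothetical):

    import numpy as np

    def linear_rnn_decode(xs, A, B, C):
        # Toy diagonal linear RNN: h_t = A * h_{t-1} + B * x_t, y_t = C @ h_t.
        # The state h has fixed size d, so decode-time memory is O(d) no
        # matter how long the generated sequence grows, unlike a KV cache,
        # which grows linearly with sequence length.
        h = np.zeros_like(A)
        ys = []
        for x_t in xs:               # process one scalar input at a time
            h = A * h + B * x_t      # constant-size state update
            ys.append(C @ h)         # linear readout
        return np.array(ys)

The fixed-budget result then rests on self-consistency voting: sample several reasoning chains independently and return the majority final answer, so a 3x decoding speedup buys roughly 3x more votes in the same wall-clock time. A minimal sketch, assuming a hypothetical generate_answer callable that runs one sampled decode and extracts its final answer:

    from collections import Counter

    def self_consistency_vote(generate_answer, prompt, num_samples=8):
        # Draw independent sampled reasoning chains and majority-vote the
        # final answers; ties resolve in first-seen order.
        answers = [generate_answer(prompt) for _ in range(num_samples)]
        return Counter(answers).most_common(1)[0][0]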

Related news:

Tao: Using test-time compute to train efficient LLMs without labeled data

Scaling up test-time compute with latent reasoning: A recurrent depth approach

DeepMind and UC Berkeley shows how to make the most of LLM inference-time compute