
New open-source math model Light-R1-32B surpasses equivalent DeepSeek performance with only $1000 in training costs



Developed by Liang Wen, Fenrui Xiao, Xin He, Yunke Cai, Qi An, Zhenyu Duan, Yimin Du, Junchen Liu, Lifu Tang, Xiaowei Lv, Haosheng Zou, Yongchao Deng, Shousheng Jia, and Xiangzheng Zhang, the model surpasses previous open-source alternatives on competitive math benchmarks. Because it is released under the Apache 2.0 license, companies can freely deploy Light-R1-32B in commercial products, retaining full control over their innovations while benefiting from an open and transparent AI ecosystem. For CEOs, CTOs, and IT leaders, the permissive license means cost efficiency and vendor independence: no licensing fees and no restrictive dependencies on proprietary AI solutions.
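Because the weights are openly released, adopting the model follows the usual open-model workflow. Below is a minimal sketch of local inference with Hugging Face Transformers; the repository id "qihoo360/Light-R1-32B", the chat-template usage, and the hardware notes are assumptions to verify against the official release, not instructions from it.

```python
# Minimal sketch: local inference with Light-R1-32B via Hugging Face Transformers.
# The repo id below is an assumption; check the official release for the exact
# identifier and the recommended generation settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "qihoo360/Light-R1-32B"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # a 32B model needs roughly 64 GB of weights in bf16
    device_map="auto",           # requires `accelerate`; shards layers across available GPUs
)

# Ask a math question in chat format and decode only the newly generated tokens.
messages = [{"role": "user", "content": "Prove that the sum of the first n odd numbers is n^2."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

In practice, a model of this size would more likely sit behind a dedicated inference server, but the licensing point is the same either way: no per-seat or per-call fees.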


Read the original story on VentureBeat.

Read more on:



Light-R1-32B


DeepSeek performance

Related news:


DeepSeek-R1-Distill-Qwen-1.5B Surpasses GPT-4o in certain benchmarks


Apple’s new AI model ReALM ‘surpasses GPT-4’


AI Surpasses Doctors In Spotting Early Breast Cancer Signs In NHS Trial