
INTELLECT-2 Release: The First 32B Model Trained Through Globally Distributed RL


We're excited to release INTELLECT-2, the first 32B-parameter model trained via globally distributed reinforcement learning. Unlike traditional centralized training efforts, INTELLECT-2 is a reasoning language model trained with fully asynchronous RL across a dynamic, heterogeneous swarm of permissionless compute contributors.
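As a rough illustration of what "fully asynchronous" means here, the sketch below decouples rollout generation from training: workers keep sampling with whatever (possibly stale) policy version they currently hold, while the trainer consumes finished rollouts and advances the weight version without waiting for any worker. The names (Rollout, rollout_worker, trainer) and the in-process queue are illustrative stand-ins under our own assumptions, not Prime Intellect's actual infrastructure.

```python
# Minimal sketch of an asynchronous RL loop: rollout workers and the trainer
# never block on each other. All names and parameters here are hypothetical.
import queue
import random
import threading
import time
from dataclasses import dataclass

@dataclass
class Rollout:
    policy_version: int   # version of the weights the worker used
    reward: float

rollouts: "queue.Queue[Rollout]" = queue.Queue()
latest_version = 0
stop = threading.Event()

def rollout_worker() -> None:
    # Workers may lag behind: they keep generating with a stale policy version.
    while not stop.is_set():
        version_seen = latest_version
        time.sleep(0.01)                      # stand-in for model inference
        rollouts.put(Rollout(version_seen, random.random()))

def trainer(total_steps: int = 20, batch_size: int = 8) -> None:
    global latest_version
    for step in range(total_steps):
        batch = [rollouts.get() for _ in range(batch_size)]
        staleness = latest_version - min(r.policy_version for r in batch)
        # A real trainer would compute policy gradients here; we just log.
        mean_reward = sum(r.reward for r in batch) / batch_size
        print(f"step={step} mean_reward={mean_reward:.3f} max_staleness={staleness}")
        latest_version += 1                   # "broadcast" new weights to workers

threads = [threading.Thread(target=rollout_worker, daemon=True) for _ in range(4)]
for t in threads:
    t.start()
trainer()
stop.set()
```

The key property the sketch tries to convey is that slow or flaky contributors only increase staleness; they never stall the training step itself.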

SHARDCAST: a library for distributing large files over an HTTP-based tree-topology network, used to efficiently propagate updated model weights to the decentralized inference workers (a rough sketch of this propagation pattern follows after the team credits below).

Next, we're focusing on tool-assisted reasoning, crowdsourcing higher-quality data, and optimizing our infrastructure and training recipe to build frontier open models.

Prime Intellect Research Team: Sami Jaghouar, Justus Mattern, Jack Min Ong, Jannik Straube, Manveer Basra, Aaron Pazdera, Matthew Di Ferrante, Kushal Thaman, Felix Gabriel, Fares Obeid, Kemal Erdem, Michael Keiblinger, Johannes Hagemann
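The sketch referenced above gives a rough picture of how an HTTP tree topology can fan out a new checkpoint: the origin serves the file to a small number of relays, each relay re-serves it to its own children, and so on, so bandwidth out of the origin stays flat as the swarm grows. The fan-out, host names, and helper functions are assumptions made for illustration; this is not the SHARDCAST implementation itself.

```python
# Hypothetical sketch of tree-topology weight propagation: each node downloads
# a checkpoint shard from its parent over plain HTTP and re-serves it to its
# children. Names, fan-out, and URLs are illustrative only.
import urllib.request
from pathlib import Path

FANOUT = 2  # each node serves at most two children

def parent_of(rank: int):
    """Index of the node this rank downloads from (None for the origin)."""
    return None if rank == 0 else (rank - 1) // FANOUT

def fetch_shard(parent_url: str, shard_name: str, out_dir: Path) -> Path:
    """Stream one weight shard from the parent into the local serving directory."""
    out_dir.mkdir(parents=True, exist_ok=True)
    dest = out_dir / shard_name
    with urllib.request.urlopen(f"{parent_url}/{shard_name}") as resp, open(dest, "wb") as f:
        while chunk := resp.read(1 << 20):   # 1 MiB chunks
            f.write(chunk)
    return dest  # child nodes would now fetch this file from us over HTTP

# Example: which node does each of 7 workers pull weights from?
hosts = [f"http://node{i}.example:8000" for i in range(7)]
for rank, host in enumerate(hosts):
    p = parent_of(rank)
    source = "origin checkpoint store" if p is None else hosts[p]
    print(f"{host} <- {source}")
```

With a fan-out of 2, a swarm of N workers needs only log2(N) relay hops to receive a new checkpoint, and no single node ever uploads to more than two peers.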
