Reinforcement Pre-Training


In this work, we introduce Reinforcement Pre-Training (RPT), a new scaling paradigm for large language models and reinforcement learning (RL). Specifically, we reframe next-token prediction as a reasoning task trained with RL, where the model receives verifiable rewards for correctly predicting the next token of a given context. RPT offers a scalable way to leverage vast amounts of text data for general-purpose RL rather than relying on domain-specific annotated answers. By incentivizing next-token reasoning, RPT significantly improves language modeling accuracy on next-token prediction. Moreover, RPT provides a strong pre-trained foundation for subsequent reinforcement fine-tuning. The scaling curves show that next-token prediction accuracy improves consistently with increased training compute. These results position RPT as an effective and promising scaling paradigm for advancing language model pre-training.
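
To make the reward concrete: a minimal sketch of the verifiable reward the abstract describes, assuming a policy that emits free-form reasoning followed by a delimited next-token guess. The delimiter format and helper names (`<ans>...</ans>`, `extract_prediction`, `next_token_reward`) are hypothetical illustrations, not the paper's actual API.

```python
# Hypothetical sketch of RPT's verifiable next-token reward: the rollout
# is the model's full generation (reasoning text plus a final answer),
# and the reward is 1.0 iff the answer matches the true corpus token.

def extract_prediction(rollout: str) -> str:
    """Pull the final answer out of assumed <ans>...</ans> delimiters."""
    start = rollout.rfind("<ans>")
    end = rollout.rfind("</ans>")
    if start == -1 or end == -1 or end <= start:
        return ""  # malformed rollout earns zero reward
    return rollout[start + len("<ans>"):end].strip()


def next_token_reward(rollout: str, ground_truth_token: str) -> float:
    """Binary verifiable reward: exact match against the corpus token."""
    return 1.0 if extract_prediction(rollout) == ground_truth_token else 0.0


# Usage: score sampled rollouts against the true next token, then feed
# the rewards to any policy-gradient RL algorithm.
rollouts = [
    "The sentence is mid-clause, so a noun fits: <ans>fox</ans>",
    "Likely a verb here: <ans>jumps</ans>",
]
rewards = [next_token_reward(r, "fox") for r in rollouts]
print(rewards)  # [1.0, 0.0]
```

Because the reward is checked against the corpus itself, any text corpus supplies supervision, which is what lets RPT scale general-purpose RL without domain-specific annotated answers.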
