Byte Latent Transformer: Patches Scale Better Than Tokens (2024)

We introduce the Byte Latent Transformer (BLT), a new byte-level LLM architecture that, for the first time, matches tokenization-based LLM performance at scale with significant improvements in inference efficiency and robustness. BLT encodes bytes into dynamically sized patches, which serve as the primary units of computation. Patches are segmented based on the entropy of the next byte, allocating more compute and model capacity where increased data complexity demands it. We present the first FLOP controlled scaling study of byte-level models up to 8B parameters and 4T training bytes. Our results demonstrate the feasibility of scaling models trained on raw bytes without a fixed vocabulary. Both training and inference efficiency improve due to dynamically selecting long patches when data is predictable, along with qualitative improvements on reasoning and long tail generalization. Overall, for fixed inference costs, BLT shows significantly better scaling than tokenization-based models, by simultaneously growing both patch and model size.
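
As a rough illustration of the entropy-based patching the abstract describes, here is a minimal Python sketch (not the authors' implementation): it assumes a hypothetical helper next_byte_probs(prefix) that returns a next-byte distribution from a small byte-level language model, and it opens a new patch wherever the model's next-byte entropy crosses a threshold, so that hard-to-predict regions get more patches and thus more compute. The threshold value is purely illustrative.

```python
import math

ENTROPY_THRESHOLD = 2.0  # illustrative value; the paper tunes this


def entropy(probs):
    """Shannon entropy (in bits) of a next-byte distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)


def segment_into_patches(data: bytes, next_byte_probs):
    """Split a byte sequence into patches, starting a new patch whenever
    the model's next-byte entropy exceeds the threshold, i.e. wherever
    the data becomes hard to predict.

    `next_byte_probs(prefix)` is a hypothetical stand-in for a small
    byte-level language model returning 256 probabilities.
    """
    patches, current = [], bytearray()
    for i, b in enumerate(data):
        if current and entropy(next_byte_probs(data[:i])) > ENTROPY_THRESHOLD:
            patches.append(bytes(current))
            current = bytearray()
        current.append(b)
    if current:
        patches.append(bytes(current))
    return patches


# Toy stand-in model: uniform over 256 bytes (entropy = 8 bits), so every
# byte exceeds the threshold and becomes its own patch. A real byte-level
# LM would assign low entropy to predictable text, yielding long patches.
uniform = lambda prefix: [1.0 / 256] * 256
print(segment_into_patches(b"hello world", uniform))
```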

Authors: Artidoro Pagnoni, Ram Pasunuru, Pedro Rodriguez, John Nguyen, Benjamin Muller, Margaret Li, Chunting Zhou, Lili Yu, Jason Weston, Luke Zettlemoyer, Gargi Ghosh, Mike Lewis, Ari Holtzman, Srinivasan Iyer

Related news:

Andes Voyager RISC-V Micro-ATX Board Seeing Patches For Mainline Linux Support

NVIDIA Posts 60 Patches For Open-Source Hopper & Blackwell GPU Support On Nouveau

Meta unleashes Llama API running 18x faster than OpenAI: Cerebras partnership delivers 2,600 tokens per second