Hierarchical Autoregressive Modeling for Memory-Efficient Language Generation


Transformers operate as horizontal, token-by-token scanners: at each generation step, the model attends to an ever-growing sequence of token-level states. This access pattern increases prefill latency and makes long-context decoding increasingly memory-bound, since KV-cache reads and writes, rather than arithmetic computation, dominate inference throughput. We propose Parallel Hierarchical Operation for Top-down Networks (PHOTON), a hierarchical autoregressive model that replaces flat scanning with vertical, multi-resolution context access. PHOTON maintains a hierarchy of latent streams: a bottom-up encoder progressively compresses tokens into low-rate contextual states, while lightweight top-down decoders reconstruct fine-grained token representations. This design reduces decode-time KV-cache traffic, yielding up to $10^{3}\times$ higher throughput per unit memory. Experimental results show that PHOTON achieves a better throughput-quality trade-off than competitive Transformer-based language models, with significant advantages in long-context and multi-query tasks.
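
To make the bottom-up/top-down structure concrete, below is a minimal PyTorch sketch of one hierarchy level. The module names (BottomUpEncoder, TopDownDecoder, PhotonBlock), the pooling-based compression, and the compression ratio are illustrative assumptions, not the paper's reference implementation; the abstract only specifies that a bottom-up encoder compresses tokens into low-rate states and lightweight top-down decoders reconstruct token-level representations.

```python
# Minimal sketch of a PHOTON-style hierarchy level (assumed structure,
# not the authors' implementation).
import torch
import torch.nn as nn


class BottomUpEncoder(nn.Module):
    """Compresses a token-level sequence into a low-rate contextual stream
    by merging every `ratio` consecutive token states into one latent state."""

    def __init__(self, d_model: int, ratio: int):
        super().__init__()
        self.ratio = ratio
        self.proj = nn.Linear(d_model * ratio, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); seq_len assumed divisible by ratio
        b, t, d = x.shape
        chunks = x.reshape(b, t // self.ratio, self.ratio * d)
        return self.proj(chunks)  # (batch, seq_len // ratio, d_model)


class TopDownDecoder(nn.Module):
    """Lightweight decoder that reconstructs fine-grained token states from
    the compressed stream by expanding each latent back over its chunk."""

    def __init__(self, d_model: int, ratio: int):
        super().__init__()
        self.ratio = ratio
        self.expand = nn.Linear(d_model, d_model * ratio)

    def forward(self, z: torch.Tensor, residual: torch.Tensor) -> torch.Tensor:
        # z: (batch, seq_len // ratio, d_model) -> (batch, seq_len, d_model)
        b, s, d = z.shape
        up = self.expand(z).reshape(b, s * self.ratio, d)
        return up + residual  # skip connection to the token-level features


class PhotonBlock(nn.Module):
    """One hierarchy level: causal attention runs over the coarse stream only,
    so the KV cache grows at 1/ratio of the token rate during decoding."""

    def __init__(self, d_model: int = 256, n_heads: int = 4, ratio: int = 4):
        super().__init__()
        self.encoder = BottomUpEncoder(d_model, ratio)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.decoder = TopDownDecoder(d_model, ratio)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)  # low-rate contextual states
        mask = torch.triu(   # causal mask on the coarse stream
            torch.ones(z.size(1), z.size(1), dtype=torch.bool), diagonal=1
        )
        z_ctx, _ = self.attn(z, z, z, attn_mask=mask)
        return self.decoder(z_ctx, x)  # reconstruct token-level states


if __name__ == "__main__":
    block = PhotonBlock(d_model=256, n_heads=4, ratio=4)
    tokens = torch.randn(2, 64, 256)   # (batch, seq_len, d_model)
    out = block(tokens)
    print(out.shape)                   # torch.Size([2, 64, 256])
```

In this sketch the memory saving comes from the attention (and hence the KV cache) being restricted to the compressed stream, while the top-down path restores token resolution cheaply; the exact layer types and compression schedule in PHOTON may differ.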
