
Expected Attention: KV Cache Compression by Estimating Attention


Memory consumption of the Key-Value (KV) cache represents a major bottleneck for efficient large language model inference. While attention-score-based KV cache pruning shows promise, it faces critical practical limitations: attention scores from future tokens are unavailable during compression, and modern implementations like Flash Attention do not materialize the full attention matrix, making past scores inaccessible. To overcome these challenges, we introduce $\textbf{Expected Attention, a training-free compression method}$ that estimates the importance of KV pairs by predicting how future queries will attend to them. Our approach leverages the distributional properties of LLM activations to compute expected attention scores in closed form for each KV pair. These scores enable principled ranking and pruning of KV pairs with minimal impact on the residual stream, achieving effective compression without performance degradation. Importantly, our method operates seamlessly across both prefilling and decoding phases, consistently outperforming state-of-the-art baselines in both scenarios. Finally, $\textbf{we release KVPress, a comprehensive library to enable researchers to implement and benchmark KV cache compression methods, already including more than 20 techniques}$.
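
To make the general idea concrete, the sketch below scores cached KV pairs by their expected attention under a Gaussian assumption on future queries: the query mean and covariance are estimated from recent activations, the expected unnormalized attention weight of each cached key follows in closed form from the Gaussian moment-generating function, and the lowest-scoring pairs are dropped. The function names, the recent-query proxy, and the exact scoring formula are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (NOT the paper's implementation): rank cached KV pairs by an
# expected attention score computed in closed form under a Gaussian model of
# future queries, then prune the lowest-scoring pairs.
import torch


def estimate_query_stats(recent_queries: torch.Tensor):
    """Estimate mean and covariance of future queries from recent ones.

    recent_queries: (num_queries, head_dim). Using recent queries as a proxy
    for future ones is an assumption of this sketch.
    """
    mu = recent_queries.mean(dim=0)                       # (d,)
    centered = recent_queries - mu
    cov = centered.T @ centered / max(len(recent_queries) - 1, 1)
    return mu, cov                                        # (d,), (d, d)


def expected_attention_scores(keys: torch.Tensor, mu: torch.Tensor, cov: torch.Tensor):
    """Closed-form E[exp(q @ k / sqrt(d))] for q ~ N(mu, cov).

    Gaussian moment-generating function:
        E[exp(t @ q)] = exp(t @ mu + 0.5 * t @ cov @ t),  with t = k / sqrt(d).
    keys: (seq_len, head_dim) -> (seq_len,) unnormalized expected scores.
    """
    d = keys.shape[-1]
    t = keys / d ** 0.5
    mean_term = t @ mu                                    # (seq_len,)
    var_term = 0.5 * torch.einsum("sd,de,se->s", t, cov, t)
    return torch.exp(mean_term + var_term)


def prune_kv_cache(keys, values, mu, cov, keep_ratio=0.5):
    """Keep only the KV pairs with the highest expected attention scores."""
    scores = expected_attention_scores(keys, mu, cov)
    k = max(1, int(keep_ratio * keys.shape[0]))
    idx = scores.topk(k).indices.sort().values            # keep original order
    return keys[idx], values[idx]


if __name__ == "__main__":
    torch.manual_seed(0)
    d, seq_len = 64, 128
    keys, values = torch.randn(seq_len, d), torch.randn(seq_len, d)
    recent_q = torch.randn(32, d)                         # stand-in activations
    mu, cov = estimate_query_stats(recent_q)
    k_small, v_small = prune_kv_cache(keys, values, mu, cov, keep_ratio=0.25)
    print(k_small.shape, v_small.shape)                   # (32, 64) each
```

In practice this kind of scoring would be applied per attention head on the real query/key activations of a model; the released KVPress library is the place to look for the authors' actual implementation and baselines.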
