
26× Faster Inference with Layer-Condensed KV Cache for Large Language Models


Huge memory consumption has been a major bottleneck for deploying high-throughput large language models in real-world applications. In addition to the large number of parameters, the key-value (KV) cache for the attention mechanism in the transformer architecture consumes a significant amount of memory, especially for deep language models with many layers. In this paper, we propose a novel method that computes and caches the KVs of only a small number of layers, significantly reducing memory consumption and improving inference throughput. Our experiments on large language models show that our method achieves up to 26× higher throughput than standard transformers while remaining competitive in language modeling and downstream tasks. In addition, our method is orthogonal to existing transformer memory-saving techniques, so it can be combined with them directly for further gains in inference efficiency. Our code is available at https://github.com/whyNLP/LCKV.
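To make the memory claim concrete, here is a minimal back-of-the-envelope sketch (not the authors' implementation; all model dimensions below are hypothetical examples) comparing the KV-cache size of a standard transformer, which caches keys and values for every layer, against one that caches them for only a small number of layers:

```python
# Hypothetical illustration of KV-cache memory: a standard transformer caches
# keys and values for every layer, while the layer-condensed approach caches
# them for only a few layers. Model dimensions here are made-up examples.

def kv_cache_bytes(num_layers, num_cached_layers, batch, seq_len,
                   num_heads, head_dim, bytes_per_elem=2):
    """Return (condensed, standard) KV-cache sizes in bytes.

    The factor of 2 accounts for storing both keys and values;
    bytes_per_elem=2 assumes fp16/bf16 storage.
    """
    per_layer = 2 * batch * seq_len * num_heads * head_dim * bytes_per_elem
    return num_cached_layers * per_layer, num_layers * per_layer

# Example: a 32-layer model, caching KVs for only 2 layers.
condensed, standard = kv_cache_bytes(
    num_layers=32, num_cached_layers=2,
    batch=8, seq_len=4096, num_heads=32, head_dim=128)
print(f"standard:  {standard / 2**30:.1f} GiB")   # 16.0 GiB
print(f"condensed: {condensed / 2**30:.1f} GiB")  # 1.0 GiB
```

Under these assumed dimensions, caching 2 of 32 layers shrinks the KV cache 16-fold; the freed memory can hold larger batches, which is what drives the throughput improvement.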

The paper, titled "Layer-Condensed KV Cache for Efficient Inference of Large Language Models," is by Haoyi Wu and 1 other authors.

