26× Faster Inference with Layer-Condensed KV Cache for Large Language Models
Huge memory consumption has been a major bottleneck for deploying high-throughput large language models in real-world applications. In addition to the large number of parameters, the key-value (KV) cache for the attention mechanism in the transformer architecture consumes a significant amount of memory, especially when the number of layers is large for deep language models. In this paper, we propose a novel method that only computes and caches the KVs of a small number of layers, thus significantly reducing memory consumption and improving inference throughput. Our experiments on large language models show that our method achieves up to 26× higher throughput than standard transformers and competitive performance in language modeling and downstream tasks. In addition, our method is orthogonal to existing transformer memory-saving techniques, so it is straightforward to integrate it with them, achieving further improvement in inference efficiency. Our code is available at https://github.com/whyNLP/LCKV.
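To make the scale of the savings concrete, here is a back-of-the-envelope sketch (not taken from the paper or the LCKV repository) of KV-cache memory when only a few layers keep their KVs. The model shape, batch size, and the `kv_cache_bytes` helper are illustrative assumptions for a hypothetical 7B-class decoder, not measurements from the paper.

```python
# Rough KV-cache memory estimate: 2 tensors (K and V) per cached layer,
# each of shape [batch, seq_len, num_kv_heads, head_dim].
# All parameter values below are hypothetical; adjust to your own model.

def kv_cache_bytes(num_cached_layers: int,
                   batch_size: int = 8,
                   seq_len: int = 4096,
                   num_kv_heads: int = 32,
                   head_dim: int = 128,
                   bytes_per_value: int = 2) -> int:  # fp16/bf16
    """Total bytes held by the KV cache for the given number of cached layers."""
    return (2 * num_cached_layers * batch_size * seq_len
            * num_kv_heads * head_dim * bytes_per_value)

standard = kv_cache_bytes(num_cached_layers=32)   # cache KVs for every layer
condensed = kv_cache_bytes(num_cached_layers=2)   # cache KVs for only a couple of layers

print(f"standard : {standard / 2**30:.1f} GiB")
print(f"condensed: {condensed / 2**30:.1f} GiB")
print(f"reduction: {standard / condensed:.0f}x less KV-cache memory")
```

Under these assumed shapes, the standard cache takes about 16 GiB while the condensed one takes about 1 GiB; the freed memory is what allows larger batch sizes and hence higher throughput.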
Paper: "Layer-Condensed KV Cache for Efficient Inference of Large Language Models," by Haoyi Wu and one other author.