
Meta’s new multi-token prediction makes AI models up to 3X faster


Multi-token prediction trains the LLM to predict several future tokens from each position in the training corpus simultaneously.

In a recent study, researchers at Meta, Ecole des Ponts ParisTech and Université Paris-Saclay propose improving the accuracy and speed of large language models (LLMs) by having them predict multiple tokens simultaneously. This research and its future iterations could prove useful for enterprise applications, as the technique promises faster inference and higher accuracy at little or no extra cost for generative tasks such as code completion.
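The idea can be sketched in a few lines: instead of a single next-token head, the model attaches several output heads to a shared trunk, one per future offset. The sketch below is illustrative only; the names, shapes, and the use of a random linear trunk are assumptions, not Meta's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab_size, d_model, n_future = 50, 16, 4  # predict 4 future tokens per position

# Shared trunk (a stand-in for the transformer body): one embedding per token.
trunk = rng.normal(size=(vocab_size, d_model))

# One independent output head per future offset (t+1, t+2, ..., t+n_future).
heads = [rng.normal(size=(d_model, vocab_size)) for _ in range(n_future)]

def predict_future_tokens(token_id):
    """Return one predicted token id for each of the n_future offsets."""
    h = trunk[token_id]                       # shared hidden state for this position
    logits = [h @ W for W in heads]           # each head scores the full vocabulary
    return [int(np.argmax(l)) for l in logits]

preds = predict_future_tokens(7)
print(len(preds))  # one prediction per future position
```

During training, each head would get its own cross-entropy loss against the token at its offset, and the losses are summed; at inference time the extra heads can be dropped, or kept to speculatively draft several tokens per forward pass, which is where the speedup comes from.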


Or read this on Venture Beat

