Toward Inference-Optimal Mixture-of-Expert Large Language Models


Mixture-of-Expert (MoE) based large language models (LLMs), such as the recent Mixtral and DeepSeek-MoE, have shown great promise in scaling model size without suffering from the quadratic growth of training cost of dense transformers. Like dense models, training MoEs requires answering the same question: given a training budget, what is the optimal allocation between model size and the number of training tokens? We study the scaling law of MoE-based LLMs, relating model performance to model size, dataset size, and the expert degree. Echoing previous research on MoE in different contexts, we observe diminishing returns from increasing the number of experts. Taken alone, this would suggest scaling the number of experts until saturation, since the training cost stays roughly constant; however, such models are problematic to serve at inference time. We propose to amend the scaling law of MoE by introducing inference efficiency as another metric besides the validation loss. We find that MoEs with a few (4/8) experts are the most serving-efficient solution at the same performance level, but cost 2.5-3.5x more to train. On the other hand, training a (16/32)-expert MoE that is 70-85% smaller than the loss-optimal model, but on a larger training dataset, is a promising setup under a fixed training budget.
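To make the trade-off concrete, here is a minimal sketch of how one might compare expert counts under a fixed training budget. The parametric loss form and its constants, the 6*N*D training-FLOPs rule of thumb, and the per-expert serving-overhead term are all illustrative assumptions for this sketch, not the fitted scaling law or cost model from the paper.

```python
import math

def moe_loss(n_params: float, n_tokens: float, n_experts: int) -> float:
    """Hypothetical validation-loss model L(N, D, E): N = active (per-token)
    parameters, D = training tokens, E = number of experts. The 1/(1+log E)
    factor encodes the diminishing return of adding more experts."""
    A, B, C, alpha, beta = 400.0, 2000.0, 1.7, 0.34, 0.28  # made-up constants
    expert_factor = 1.0 / (1.0 + math.log(n_experts))
    return A * expert_factor / n_params**alpha + B / n_tokens**beta + C

def serve_cost_per_token(n_params: float, n_experts: int) -> float:
    """Crude serving-cost proxy: active-parameter FLOPs scaled by an assumed
    per-expert overhead for the extra weights held and moved at serving time."""
    return 2.0 * n_params * (1.0 + 0.15 * n_experts)

if __name__ == "__main__":
    budget = 1e22  # fixed training-FLOPs budget
    candidate_sizes = [1e8 * 2**k for k in range(10)]  # active params to sweep
    for n_experts in (4, 8, 16, 32):
        # Using the ~6*N*D estimate, the token count follows from the budget.
        loss, n_params = min(
            (moe_loss(n, budget / (6.0 * n), n_experts), n)
            for n in candidate_sizes
        )
        print(f"E={n_experts:>2}  active params={n_params:.2e}  "
              f"loss={loss:.3f}  "
              f"serve cost/token={serve_cost_per_token(n_params, n_experts):.2e}")
```

Under these assumed forms, the sweep illustrates the tension the paper formalizes: adding experts improves loss at a fixed training budget, while the per-expert overhead raises the per-token serving cost.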
