H2O AI releases Danube, a super-tiny LLM for mobile applications


On HellaSwag, an LLM benchmark that evaluates commonsense natural language inference, Danube scored 69.58% accuracy, placing it just behind Stability AI’s 1.6-billion-parameter Stable LM 2 model.
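As a rough illustration of how a HellaSwag score like this can be measured, the sketch below uses EleutherAI’s lm-evaluation-harness (pip install lm-eval). The Hugging Face repo id and the zero-shot setting are assumptions; the article does not describe H2O’s exact evaluation setup, so the resulting numbers may not match the reported 69.58%.

```python
# Hedged sketch: evaluating a small causal LM on HellaSwag with
# EleutherAI's lm-evaluation-harness.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=h2oai/h2o-danube-1.8b",  # assumed Hugging Face repo id
    tasks=["hellaswag"],
    num_fewshot=0,   # assumed; leaderboards often report few-shot scores instead
    batch_size=8,
)
# Per-task metrics (accuracy and normalized accuracy) live under results["results"].
print(results["results"]["hellaswag"])
```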

Enterprises building consumer devices are racing to explore offline generative AI, where models run locally on the product, giving users quick assistance across functions without sending their data to the cloud.

“We are excited to release H2O-Danube-1.8B as a portable LLM on small devices like your smartphone… The proliferation of smaller, lower-cost hardware and more efficient training now allows modestly-sized models to be accessible to a wider audience… We believe H2O-Danube-1.8B will be a game changer for mobile offline applications,” Sri Ambati, CEO and co-founder of H2O, said in a statement.

In the long run, the availability of Danube and similar small models is expected to drive a surge in offline generative AI applications across phones and laptops, helping with tasks like email summarization, typing and image editing.
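For readers curious what running a model of this size locally looks like in practice, here is a minimal sketch using the Hugging Face transformers library. The repo id h2oai/h2o-danube-1.8b-chat is an assumption based on H2O’s naming, and a real phone deployment would typically use a quantized on-device runtime rather than this desktop-style setup.

```python
# Minimal sketch: loading a ~1.8B-parameter model and generating text locally
# with Hugging Face transformers (requires torch, transformers, accelerate).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "h2oai/h2o-danube-1.8b-chat"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~3.6 GB of weights at bf16 for 1.8B parameters
    device_map="auto",           # falls back to CPU if no GPU is available
)

prompt = "Summarize this email in two sentences: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

At bfloat16 precision the weights alone take roughly 3.6 GB, which is why 4-bit quantized builds are the usual route onto phones.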

Read the full story on VentureBeat.

Related news:

Alibaba staffer offers a glimpse into building LLMs in China

China’s Moonshot AI zooms to $2.5B valuation, raising $1B for an LLM focused on long context

LangChain lands $25M round, launches platform to support entire LLM application lifecycle