
Researchers Upend AI Status Quo By Eliminating Matrix Multiplication In LLMs


Researchers from UC Santa Cruz, UC Davis, LuxiTech, and Soochow University have developed a new method to run AI language models more efficiently by eliminating matrix multiplication, potentially reducing the environmental impact and operational costs of AI systems. Ars Technica's Benj Edwards reports:

The technique has not yet been peer-reviewed, but the researchers -- Rui-Jie Zhu, Yu Zhang, Ethan Sifferman, Tyler Sheaves, Yiqiao Wang, Dustin Richmond, Peng Zhou, and Jason Eshraghian -- claim that their work challenges the prevailing paradigm that matrix multiplication operations are indispensable for building high-performing language models. They argue that their approach could make large language models more accessible, efficient, and sustainable, particularly for deployment on resource-constrained hardware like smartphones. The researchers project that their approach could theoretically intersect with and surpass the performance of standard LLMs at scales around 10^23 FLOPs, roughly the training compute required for models like Meta's Llama-3 8B or Llama-2 70B.
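To see how a language model layer can avoid matrix multiplication at all, consider the core idea reported in coverage of the paper: constraining weights to the ternary values {-1, 0, +1}, so that a matrix-vector product collapses into sign flips and additions. The sketch below is illustrative, not the researchers' actual code; the function names and the toy weight matrix are made up for the example.

```python
# Illustrative sketch: with ternary weights, a "matrix multiply"
# needs no multiplications at all. Names and data are hypothetical,
# not taken from the researchers' implementation.

def matvec_dense(W, x):
    """Standard matrix-vector product: one multiply per weight."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def matvec_ternary(W, x):
    """Same result when W's entries are in {-1, 0, +1},
    computed using only additions and subtractions."""
    out = []
    for row in W:
        acc = 0.0
        for w, xi in zip(row, x):
            if w == 1:
                acc += xi      # +1 weight: add the input
            elif w == -1:
                acc -= xi      # -1 weight: subtract the input
            # 0 weight contributes nothing
        out.append(acc)
    return out

W = [[1, 0, -1],
     [-1, 1, 1]]
x = [0.5, 2.0, -1.5]

print(matvec_ternary(W, x))                      # → [2.0, 0.0]
print(matvec_dense(W, x) == matvec_ternary(W, x))  # → True
```

Avoiding multiplication matters for efficiency because adders are far cheaper than multipliers in silicon, which is why the researchers target resource-constrained hardware.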


Related news:

Human-like skin for humanoids: Japan creates new tissue binding tech for robots | Researchers found a way to bind skin to complex structures by mimicking skin-ligament structures and using V-shaped perforations.

Researchers run high-performing LLM on the energy needed to power a lightbulb