
Evidence that LLMs are reaching a point of diminishing returns


The conventional wisdom, well captured recently by Ethan Mollick, is that LLMs are advancing exponentially. A few days ago, in a very popular blog post, Mollick claimed that “the current best estimates of the rate of improvement in Large Language Models show capabilities doubling every 5 to 14 months.”

If two graphs I plotted are remotely correct, that claim is no longer holding true. If we really have changed regimes, from rapid progress to diminishing returns, and hallucinations and stupid errors linger, LLMs may never be ready for prime time.
