A Knockout Blow for LLMs?


LLM “reasoning” is so cooked they turned my name into a verb

(Regular readers will remember that when Satya Nadella waved the flag of concession in November on pure pretraining scaling, the hypothesis that my "deep learning is hitting a wall" paper critiqued, he suggested we might find a new set of scaling laws for inference-time compute.) For anyone hoping that "reasoning" or "inference-time compute" would get LLMs back on track, and take away the pain of multiple failures at getting pure scaling to yield something worthy of the name GPT-5, this is bad news. Worse, as the latest Apple paper shows, LLMs may well work on your easy test set (like Hanoi with 4 discs) and seduce you into thinking they have built a proper, generalizable solution when they have not.
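
To see why the 4-disc case proves so little: Tower of Hanoi has a simple recursive algorithm that generalizes to any number of discs, and the optimal solution takes exactly 2^n − 1 moves. A system that had genuinely internalized the algorithm would scale the same way; one that has merely pattern-matched small instances will not. Here is a minimal Python sketch of the general solution (my illustration, not code from the Apple paper):

```python
# Classic recursive Tower of Hanoi: a genuinely general solution that
# works for any disc count, not just the small cases in an easy test set.

def hanoi(n, source, target, spare, moves):
    """Append the moves that shift n discs from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear n-1 discs out of the way
    moves.append((source, target))              # move the largest disc
    hanoi(n - 1, spare, target, source, moves)  # re-stack the n-1 discs on top

for n in (4, 8, 15):
    moves = []
    hanoi(n, "A", "C", "B", moves)
    assert len(moves) == 2**n - 1  # optimal move count grows exponentially
    print(f"{n} discs: {len(moves)} moves")
```

Passing at 4 discs (15 moves) is weak evidence that a system can carry out the same procedure at 15 discs (32,767 moves); that gap between easy instances and the general case is exactly the seduction described above.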
