A Knockout Blow for LLMs?
LLM “reasoning” is so cooked they turned my name into a verb
(Regular readers will remember that when Satya Nadella waved the flag of concession in November on pure pretraining scaling, the hypothesis that my "deep learning is hitting a wall" paper critiqued, he suggested we might find a new set of scaling laws for inference-time compute.) For anyone hoping that "reasoning" or "inference-time compute" would get LLMs back on track, and take away the pain of multiple failures at getting pure scaling to yield something worthy of the name GPT-5, this is bad news. Worse, as the latest Apple paper shows, LLMs may well work on your easy test set (like Hanoi with 4 discs) and seduce you into thinking they have built a proper, generalizable solution when they have not.
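For context, the Tower of Hanoi puzzle the paper leans on has a textbook recursive solution that generalizes to any number of discs. Here is a minimal sketch in Python (names and structure are illustrative, not taken from the paper) of what a "proper, generalizable solution" looks like, as opposed to a pattern that only happens to cover the small cases:

```python
def hanoi(n: int, source: str, target: str, spare: str, moves: list) -> None:
    """Append the moves that transfer n discs from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the top n-1 discs out of the way
    moves.append((source, target))              # move the largest remaining disc
    hanoi(n - 1, spare, target, source, moves)  # re-stack the n-1 discs on top

# The same code handles the "easy" 4-disc case and much larger ones alike;
# the move count is always 2**n - 1.
for n in (4, 10):
    moves = []
    hanoi(n, "A", "C", "B", moves)
    assert len(moves) == 2**n - 1
    print(f"{n} discs: {len(moves)} moves")
```

The point of the contrast: an algorithm like this scales mechanically with n, whereas a model that has merely absorbed traces of small instances can ace 4 discs and fall apart as the disc count grows.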