When billion-dollar AIs break down over puzzles a child can do, it’s time to rethink the hype


The tech world is reeling from a paper that shows the powers of a new generation of AI have been wildly oversold, says cognitive scientist Gary Marcus

In short, these models are very good at a kind of pattern recognition, but often fail when they encounter novelty that forces them beyond the limits of their training, despite being, as the paper notes, “explicitly designed for reasoning tasks”. The new paper also echoes and amplifies several arguments that Arizona State University computer scientist Subbarao Kambhampati has been making about the newly popular LRMs. One of the most striking findings in the new paper was that an LRM may well work on an easy test set (such as Hanoi with four discs) and seduce you into thinking it has built a proper, generalisable solution when it has not.
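For context, the Tower of Hanoi puzzle the paper scales up has a well-known exact recursive solution, whose length grows as 2^n − 1 moves for n discs; the four-disc case a model can pass takes only 15 moves, while larger instances quickly outrun memorised patterns. A minimal Python sketch (peg names here are illustrative):

```python
def hanoi(n, source="A", target="C", spare="B"):
    """Return the optimal move list for n discs (2**n - 1 moves)."""
    if n == 0:
        return []
    # Move n-1 discs out of the way, move the largest disc,
    # then re-stack the n-1 discs on top of it.
    return (hanoi(n - 1, source, spare, target)
            + [(source, target)]
            + hanoi(n - 1, spare, target, source))

print(len(hanoi(4)))   # 15 moves for the easy four-disc case
print(len(hanoi(10)))  # 1023 moves: same rule, far longer plan
```

The point of the finding is that this one short rule generalises to any disc count, whereas a model that merely pattern-matches the four-disc transcript does not.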