
‘You Can’t Lick a Badger Twice’: Google Failures Highlight a Fundamental AI Flaw


Google’s AI Overviews feature offers credible-sounding explanations for completely made-up idioms.

The first reason is that a large language model is ultimately a probability machine: while it may seem like an LLM-based system has thoughts or even feelings, at a base level it is simply placing one most-likely word after another, laying the track as the train chugs forward.

“When people do nonsensical or ‘false premise’ searches, our systems will try to find the most relevant results based on the limited web content available,” said Google spokesperson Meghann Farnsworth in an emailed statement.

“I did about five minutes of experimentation and it’s wildly inconsistent, and that’s what you expect of GenAI, which is very dependent on specific examples in training sets and not very abstract,” says Gary Marcus, a cognitive scientist and author of Taming Silicon Valley: How We Can Ensure That AI Works for Us.
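That “one most-likely word after another” loop can be made concrete with a toy sketch. The Python snippet below is purely illustrative and is not Google’s system: the probability table is invented for this example, and real models condition on far longer contexts with learned probabilities rather than a hand-written lookup. It only shows the basic mechanic of committing to a likely next word, then moving forward.

# Toy illustration of next-word prediction (not Google's actual system).
# The probability table is invented solely for demonstration.
NEXT_WORD_PROBS = {
    ("you", "can't"): {"lick": 0.4, "teach": 0.35, "always": 0.25},
    ("can't", "lick"): {"a": 0.9, "the": 0.1},
    ("lick", "a"): {"badger": 0.5, "stamp": 0.5},
    ("a", "badger"): {"twice": 0.6, "once": 0.4},
}

def continue_phrase(words, steps=4):
    """Greedily append the most probable next word, one step at a time."""
    words = list(words)
    for _ in range(steps):
        context = tuple(words[-2:])          # look at the last two words
        candidates = NEXT_WORD_PROBS.get(context)
        if not candidates:
            break                            # no idea what comes next; stop
        # Commit to the single most likely word, then keep going --
        # "laying the track as the train chugs forward."
        next_word = max(candidates, key=candidates.get)
        words.append(next_word)
    return " ".join(words)

print(continue_phrase(["you", "can't"]))
# Given this toy table, prints: you can't lick a badger twice

The point of the sketch is that nothing in the loop checks whether the resulting phrase is a real idiom; the model simply continues whatever it is given, which is why a confident-sounding completion for a nonsense saying is unsurprising.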


