Being “Confidently Wrong” is holding AI back
The failure mode that stalls “AI for data” or “AI on my APIs” efforts isn’t psychedelic hallucination; it’s confident inaccuracy: plausible answers that are wrong in subtle and costly ways.
It’s their ability to build up context, interrogate their own failures, and pick up small improvements and efficiencies as they practice a task.

In this post, based on our recent experience selling seven-figure AI deals to Fortune 500s and Silicon Valley tech companies alike, I'll discuss how “confident inaccuracy” seems to be at the heart of this problem. Without high-quality uncertainty information, I don’t know whether a result is wrong because of ambiguity, missing context, stale data, or a model mistake. The starting point of this loop is an AI system that can tell the user, in a concrete and native way, when it is not certain about its own accuracy.
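To make “concrete and native” a little more tangible, here is a minimal, purely illustrative sketch of an answer payload that carries its own uncertainty signal and a reason category matching the failure causes above. The names (`AnswerPayload`, `UncertaintySource`, `present`) are hypothetical and not taken from any particular product or library.

```python
# Illustrative sketch only: an answer that ships with a calibrated confidence
# score and the suspected cause of doubt, so the caveat is part of the result
# rather than an afterthought. All names here are hypothetical.
from dataclasses import dataclass, field
from enum import Enum


class UncertaintySource(Enum):
    """Why the system doubts its own answer."""
    AMBIGUOUS_QUESTION = "ambiguous question"
    MISSING_CONTEXT = "missing context"
    STALE_DATA = "stale data"
    MODEL_ERROR = "possible model mistake"


@dataclass
class AnswerPayload:
    """An answer plus machine-readable uncertainty, not just free text."""
    answer: str
    confidence: float  # calibrated 0.0-1.0, not a vibe
    sources: list[UncertaintySource] = field(default_factory=list)


def present(payload: AnswerPayload, threshold: float = 0.8) -> str:
    """Surface the caveat to the user instead of silently returning the answer."""
    if payload.confidence >= threshold:
        return payload.answer
    reasons = ", ".join(s.value for s in payload.sources) or "unspecified"
    return (
        f"{payload.answer}\n\n"
        f"Low confidence ({payload.confidence:.0%}); possible causes: {reasons}. "
        f"Please verify before acting on this."
    )
```

The point of the sketch is the shape, not the specifics: a calibrated score plus an explicit reason category is what lets a user distinguish ambiguity from stale data from a genuine model mistake.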