Detecting when LLMs are uncertain
A deep dive into a new reasoning technique called Entropix.

If we get two branches with fairly equal confidence (as measured by their entropy and varentropy) but with different contents, we could surface this as a question to the user. One way to think about it is that any of the top options may be a solid choice (e.g., they are synonyms of each other), so we should just pick one at random (i.e., sample at a higher temperature). Either way, inference-time techniques like this are cheap to experiment with and could be a promising direction for open-source hackers to improve reasoning without huge budgets.
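As a minimal sketch of the idea, the snippet below computes the entropy and varentropy of a next-token distribution and routes sampling based on them. It assumes PyTorch; the threshold values, the `choose_strategy` helper, and the mapping from (entropy, varentropy) quadrants to actions are illustrative assumptions, not taken from the Entropix codebase.

```python
import torch
import torch.nn.functional as F


def entropy_varentropy(logits: torch.Tensor) -> tuple[float, float]:
    """Entropy and varentropy (variance of surprisal) of a next-token distribution.

    `logits` is a 1-D tensor of shape (vocab_size,).
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum()                      # H = E[-log p]
    varentropy = (probs * (log_probs + entropy) ** 2).sum()   # Var[-log p]
    return entropy.item(), varentropy.item()


def choose_strategy(logits: torch.Tensor,
                    ent_thresh: float = 2.0,
                    vent_thresh: float = 2.0) -> str:
    """Illustrative routing rule; the thresholds here are made up, not Entropix's."""
    ent, vent = entropy_varentropy(logits)
    if ent < ent_thresh and vent < vent_thresh:
        return "argmax"            # confident and peaked: take the top token
    if ent >= ent_thresh and vent < vent_thresh:
        return "ask_user"          # uncertainty spread evenly: surface a clarifying question
    return "sample_high_temp"      # a few distinct, similarly plausible branches: sample one
```

The reason to look at varentropy as well as entropy is that it separates two kinds of uncertainty: low varentropy with high entropy means the surprisal is spread fairly evenly (the model has no real preference), while high varentropy means a handful of distinct contenders stand out against a long tail, which is the situation where picking one branch at random, or asking the user, makes sense.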


