LLMs can't do probability
I built an experiment to show what happens when you ask an LLM to behave in a certain way a certain percentage of the time.
I’ve seen a couple of recent posts where the writers mentioned asking LLMs to do something with a certain probability, or a certain percentage of the time. In one, the author built a Custom GPT loaded with educational course material and instructed it, in the prompt, to lie about 20% of the time. I suppose a technical-enough user could instead build a Custom GPT that uses function calling to decide how to answer each question, for a “spot the misinformation” pop-quiz use case.
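To make the function-calling idea concrete, here’s a minimal sketch (assuming an OpenAI-style tool definition; the `should_lie` name and the 0.2 default are hypothetical, for illustration only). The key point is that the coin flip runs in host code with a real RNG, so the 20% rate is actually enforced rather than left to the model’s token sampling:

```python
import random

def should_lie(probability: float = 0.2) -> bool:
    """Decide whether the next answer should contain misinformation.

    Randomness comes from the host code's RNG, not from the model,
    so over many calls the rate genuinely converges to `probability`.
    """
    return random.random() < probability

# Hypothetical tool schema a Custom GPT could register (OpenAI-style).
# The model calls this before answering; the host executes it and
# feeds the boolean result back into the conversation.
SHOULD_LIE_TOOL = {
    "type": "function",
    "function": {
        "name": "should_lie",
        "description": "Returns true if the next answer should contain "
                       "deliberate misinformation for the pop quiz.",
        "parameters": {
            "type": "object",
            "properties": {
                "probability": {"type": "number", "default": 0.2}
            },
        },
    },
}
```

With a setup like this, the model never has to “guess” randomly; it only has to follow the boolean the tool returns, which is a much easier instruction to obey.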