Can LLMs do randomness?
While LLMs theoretically understand “randomness,” their training data distributions may create unexpected patterns. In this article we will test different LLMs from OpenAI and Anthropic to see whether they produce unbiased results. In the first experiment we will ask each model to toss a fair coin; in the second, we will ask it to guess a number between 0 and 10 and check whether the results are evenly split between even and odd. I know the sample sizes are small and probably not statistically significant. This whole thing is just for fun.
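As a rough illustration, here is a minimal sketch of how the coin-toss experiment could be run against the OpenAI chat API. The model name, prompt wording, and number of trials are assumptions for the sketch, not necessarily the exact setup used here.

```python
# Minimal sketch of the coin-toss experiment (assumed model and prompt,
# not the exact setup used in this article).
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def flip_once(model: str = "gpt-4o-mini") -> str:
    """Ask the model to toss a fair coin and return its one-word answer."""
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": "Toss a fair coin. Reply with exactly one word: heads or tails.",
        }],
    )
    return response.choices[0].message.content.strip().lower()

if __name__ == "__main__":
    # Small sample size, just for fun.
    counts = Counter(flip_once() for _ in range(100))
    print(counts)
```

The same loop works for the number-guessing experiment by swapping the prompt and tallying even versus odd answers.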
Deviation simply measures how far each model’s heads probability strays from the ideal unbiased value (0.5, i.e. 50%). For a χ² test with one degree of freedom, the critical value at the 5% significance level is 3.84. Claude’s χ² = 2.56 falls below this threshold, suggesting its observed bias could reasonably occur by random variation.
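For reference, the χ² statistic can be computed directly from raw heads/tails counts. The counts below are hypothetical, chosen only so the statistic comes out to 2.56 for illustration; they are not the actual tallies from the experiment.

```python
# Chi-square goodness-of-fit test against a fair 50/50 coin.
# Hypothetical counts for illustration only (58 heads, 42 tails in 100 flips
# happens to give chi2 = 2.56).
from scipy.stats import chisquare

observed = [58, 42]                   # heads, tails
expected = [sum(observed) / 2] * 2    # fair coin: 50/50 split

stat, p_value = chisquare(observed, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p_value:.3f}")
# With 1 degree of freedom, chi2 < 3.84 means the deviation is not
# significant at the 5% level.
```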