How Do Olympiad Medalists Judge LLMs in Competitive Programming?


A new benchmark assembled by a team of International Olympiad medalists suggests the hype about large language models beating elite human coders is premature. LiveCodeBench Pro, unveiled in a 584-problem study [PDF] drawn from Codeforces, ICPC, and IOI contests, charts where frontier models succeed and where they break down. A granular tag-by-tag autopsy identifies implementation-friendly, knowledge-heavy problems -- segment trees, graph templates, classic dynamic programming -- as the models' comfort zone, while observation-driven puzzles such as game-theory endgames and tricky greedy constructions remain stubborn roadblocks. The broader takeaway is that impressive leaderboard jumps often reflect tool use, multiple retries, or easier benchmarks rather than genuine algorithmic reasoning, leaving a conspicuous gap between today's models and top human problem-solvers.
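To make the "knowledge-heavy" label concrete: a segment tree answers range queries over an array in O(log n) per operation, and writing one is largely a matter of recalling a well-known template -- precisely the kind of task the study reports models handling well. Below is a minimal, illustrative sketch in C++ (the de facto language of competitive programming); it is a generic textbook example, not code from the paper.

```cpp
#include <cstdio>
#include <vector>

// Minimal iterative segment tree for range-sum queries -- a staple
// "template" data structure of the kind the study places in the
// models' comfort zone. Illustrative sketch only.
struct SegTree {
    int n;
    std::vector<long long> t;  // t[n..2n-1] are leaves, t[1..n-1] internal sums

    explicit SegTree(const std::vector<long long>& a)
        : n((int)a.size()), t(2 * a.size()) {
        for (int i = 0; i < n; ++i) t[n + i] = a[i];
        for (int i = n - 1; i >= 1; --i) t[i] = t[2 * i] + t[2 * i + 1];
    }

    // Point update: set a[pos] = val, then fix sums up to the root.
    void update(int pos, long long val) {
        for (t[pos += n] = val, pos /= 2; pos >= 1; pos /= 2)
            t[pos] = t[2 * pos] + t[2 * pos + 1];
    }

    // Sum over the half-open range [l, r).
    long long query(int l, int r) {
        long long res = 0;
        for (l += n, r += n; l < r; l /= 2, r /= 2) {
            if (l & 1) res += t[l++];
            if (r & 1) res += t[--r];
        }
        return res;
    }
};

int main() {
    SegTree st({1, 2, 3, 4, 5});
    std::printf("%lld\n", st.query(1, 4));  // 2 + 3 + 4 = 9
    st.update(2, 10);                       // a[2]: 3 -> 10
    std::printf("%lld\n", st.query(0, 5));  // 1 + 2 + 10 + 4 + 5 = 22
    return 0;
}
```

The point of the example is that every line follows from a memorized pattern; the observation-driven problems the study highlights (game-theory endgames, tricky greedy constructions) have no such template to recall.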

Source: Slashdot

Read more on:

LLMs

medalists

Related news:

Clinical knowledge in LLMs does not translate to human interactions

“Language and Image Minus Cognition”: An Interview with Leif Weatherby

Canva Now Requires Use of LLMs During Coding Interviews