ProgramBench: Can language models rebuild programs from scratch?


Turning ideas into full software projects from scratch has become a popular use case for language models. Agents are being deployed to seed, maintain, and grow codebases over extended periods with minimal human oversight. Such settings require models to make high-level software architecture decisions. However, existing benchmarks measure focused, limited tasks such as fixing a single bug or developing a single, specified feature. We therefore introduce ProgramBench to measure the ability of software engineering agents to develop software holistically. In ProgramBench, given only a program and its documentation, agents must architect and implement a codebase that matches the reference executable's behavior. End-to-end behavioral tests are generated via agent-driven fuzzing, enabling evaluation without prescribing implementation structure. Our 200 tasks range from compact CLI tools to widely used software such as FFmpeg, SQLite, and the PHP interpreter. We evaluate 9 LMs and find that none fully resolve any task, with the best model passing 95% of tests on only 3% of tasks. Models favor monolithic, single-file implementations that diverge sharply from human-written code.
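The evaluation described above boils down to differential testing: the agent's rebuilt program must match the reference executable's observable behavior on test inputs produced by fuzzing. A minimal sketch of that comparison step might look like the following (the function name, timeout, and the choice to compare stdout and exit code are assumptions for illustration; the paper's actual harness and its fuzzing loop are not specified here):

```python
import subprocess

def behaves_identically(reference, candidate, test_inputs):
    """Run the reference and candidate executables on the same inputs
    and compare observable behavior (stdout and exit code).

    `reference` and `candidate` are paths to executables;
    `test_inputs` is an iterable of bytes fed to stdin.
    """
    for stdin_data in test_inputs:
        ref = subprocess.run([reference], input=stdin_data,
                             capture_output=True, timeout=10)
        cand = subprocess.run([candidate], input=stdin_data,
                              capture_output=True, timeout=10)
        # Any divergence in output or exit status fails the test.
        if (ref.stdout, ref.returncode) != (cand.stdout, cand.returncode):
            return False
    return True
```

In a full harness, `test_inputs` would come from an agent-driven fuzzer exploring the program's documented behavior, and command-line arguments and files would be compared as well, not just stdin/stdout.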
