GEPA optimizes LLMs without costly reinforcement learning


Moving beyond the slow, costly trial-and-error of RL, GEPA teaches AI systems to learn and improve using natural language.

Modern AI applications are often “compound AI systems”: complex workflows that chain multiple LLM modules, external tools such as databases or code interpreters, and custom logic to perform sophisticated tasks, including multi-step research and data analysis.

Agrawal offered a concrete example of GEPA’s efficiency gain: “We used GEPA to optimize a QA system in ~3 hours versus GRPO’s 24 hours—an 8x reduction in development time, while also achieving 20% higher performance,” he explained. Because GEPA learns from natural-language feedback rather than relying solely on reinforcement-learning reward signals, “this may encourage the system to develop instructions and strategies grounded in a broader understanding of success, instead of merely learning patterns specific to the training data.” For enterprises, that improved reliability means less brittle, more adaptable AI applications in customer-facing roles.
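To make the “compound AI system” description concrete, here is a minimal illustrative sketch in Python of a pipeline that chains two LLM modules around a database tool. The function names (call_llm, run_sql, answer_question) are placeholders invented for this example, not GEPA’s or any framework’s actual API.

    # Minimal sketch of a "compound AI system": two LLM modules chained with a
    # tool call and custom glue logic. All names here are illustrative
    # placeholders, not a real framework API.

    def call_llm(prompt: str) -> str:
        """Placeholder for an LLM call (in practice, an API client request)."""
        return f"[model output for: {prompt[:40]}...]"

    def run_sql(query: str) -> list[dict]:
        """Placeholder for an external tool, e.g., a database query."""
        return [{"region": "EMEA", "revenue": 1_200_000}]

    def answer_question(question: str) -> str:
        # Module 1: ask the LLM to plan a database query for the question.
        sql = call_llm(f"Write a SQL query to answer: {question}")
        # External tool: execute the (placeholder) query.
        rows = run_sql(sql)
        # Module 2: ask the LLM to turn the retrieved data into an answer.
        return call_llm(f"Question: {question}\nData: {rows}\nAnswer concisely.")

    if __name__ == "__main__":
        print(answer_question("What was EMEA revenue last quarter?"))

An optimizer like GEPA operates on the instructions inside such modules, whereas RL-style methods such as GRPO tune the system through many more trial rollouts.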
