
LLMs and Code Optimization


Dave Andersen's (new) blog

A naive parallelization can't retain the "check first" optimization, because it would have to update those min/max values across threads. So we have to put a human on the problem and see where the trick is: after processing a few thousand numbers, we've already found some OK candidates for max_number and min_number, and candidates that good are enough for each thread to filter against on its own, without constantly synchronizing the exact values.

The problem admits additional mathematical optimization that I've deliberately ignored, because I like its pattern as a general one: given a choice of a few ways to eliminate candidates, some of which parallelize and some of which are trickier, find an order in which to apply them.

That tradeoff made this problem perhaps a more interesting LLM test case than the author had intended, but it's a very real kind of issue: many approaches to parallelization may end up trading extra work for lower latency, and it takes a thoughtful programmer to figure out how to navigate that.
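To make the pattern concrete, here is a minimal Rust sketch of that idea. It is not the post's actual code, and it rests on a few assumptions: the digit-sum-of-30 problem from the original benchmark, a synthetic input in place of the real random data, a fixed thread count, plain std::thread::scope for parallelism, and illustrative names like digit_sum and cand_min. A short serial prefix seeds approximate min/max candidates; each thread then applies the "check first" filter against its own fixed copy of those bounds, so nothing is updated across threads, and the per-thread results are merged once at the end.

use std::thread;

// Sum of decimal digits of n.
fn digit_sum(mut n: u32) -> u32 {
    let mut s = 0;
    while n > 0 {
        s += n % 10;
        n /= 10;
    }
    s
}

fn main() {
    // Stand-in input: a million integers in 1..=100_000 (the real benchmark
    // uses random data; this generator just keeps the example self-contained).
    let nums: Vec<u32> = (0u32..1_000_000)
        .map(|i| i.wrapping_mul(2_654_435_761) % 100_000 + 1)
        .collect();

    // Phase 1 (serial): scan a small prefix to find "OK candidates" for the
    // smallest and largest numbers whose digits sum to 30.
    let (mut cand_min, mut cand_max) = (u32::MAX, 0u32);
    for &n in &nums[..nums.len().min(10_000)] {
        if digit_sum(n) == 30 {
            cand_min = cand_min.min(n);
            cand_max = cand_max.max(n);
        }
    }

    // Phase 2 (parallel): each thread keeps the "check first" filter, but
    // tests against fixed, thread-local copies of the candidate bounds, so
    // there is no contended min/max update across threads.
    let n_threads = 8;
    let chunk = (nums.len() + n_threads - 1) / n_threads;
    let mut per_thread = Vec::new();
    thread::scope(|scope| {
        let handles: Vec<_> = nums
            .chunks(chunk)
            .map(|part| {
                scope.spawn(move || {
                    let (mut lo, mut hi) = (cand_min, cand_max);
                    for &n in part {
                        // Only pay for the digit-sum check when n could
                        // actually improve one of the bounds.
                        if (n < lo || n > hi) && digit_sum(n) == 30 {
                            lo = lo.min(n);
                            hi = hi.max(n);
                        }
                    }
                    (lo, hi)
                })
            })
            .collect();
        for h in handles {
            per_thread.push(h.join().unwrap());
        }
    });

    // Merge the per-thread results once at the end.
    let min = per_thread.iter().map(|r| r.0).min().unwrap();
    let max = per_thread.iter().map(|r| r.1).max().unwrap();
    if min <= max {
        println!("difference = {}", max - min);
    } else {
        println!("no number with digit sum 30 found");
    }
}

Note the tradeoff described above: the prefix scan and the per-thread re-checking are extra work relative to the serial version, and whether that work buys lower latency depends on how quickly the candidate bounds converge and how many threads are available.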



Read more on:


LLMs


Code Optimization

Related news:


How I program with LLMs


Getting LLMs to Generate Funny Memes Is Unexpectedly Hard


Using LLMs and Cursor to finish side projects