Running local LLMs offline on a ten-hour flight
I flew from London to Google Cloud Next 2026 in Las Vegas. Ten hours with no in-flight wifi. I used the time to test how far a modern MacBook can carry engineering work on local LLMs alone.

Setup

A week-old MacBook Pro M5 Max with 128GB of unified memory and a 40-core GPU. Gemma 4 31B and Qwen 4.6 36B running via LM Studio. The 100 most common Docker images pulled ahead of time, along with the top programming languages and enough cached dependencies to build functional sites with rich visualisations.
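LM Studio can also serve loaded models over an OpenAI-compatible HTTP API on localhost (port 1234 by default), which makes them scriptable with no network at all. A minimal sketch, assuming the local server is running; the model identifier is a placeholder, so substitute whatever name LM Studio shows for the loaded model:

```python
# Query a model served by LM Studio's local OpenAI-compatible server.
# Assumptions: server running on the default port 1234, and
# "gemma-4-31b" is a placeholder model identifier.
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "gemma-4-31b",  # placeholder; match your loaded model
        "messages": [
            {"role": "user", "content": "Write a Dockerfile for a Flask app."}
        ],
        "temperature": 0.2,
    },
    timeout=300,  # large local models can take a while per request
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Because the endpoint mirrors the OpenAI API shape, most existing tooling can be pointed at it offline just by overriding the base URL.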