
Running local LLMs offline on a ten-hour flight


I flew from London to Google Cloud Next 2026 in Las Vegas: ten hours with no in-flight wifi. I used the time to test how far a modern MacBook can carry engineering work on local LLMs alone.

Setup: a week-old MacBook Pro M5 Max with 128GB of unified memory and a 40-core GPU. Gemma 4 31B and Qwen 4.6 36B running via LM Studio. Also on disk: the 100 most common Docker images and the top programming languages, along with enough dependencies to build functional sites with rich visualisations.
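The post doesn't show how the models were queried. As a minimal sketch, assuming LM Studio's built-in local server is running (it exposes an OpenAI-compatible API on localhost, port 1234 by default, with no internet required), something like this works from any script or tool on the laptop; the model identifier below is illustrative, not the exact name LM Studio would list:

```python
import json
import urllib.request

# LM Studio serves an OpenAI-compatible API on localhost (default port 1234)
# once its local server is started; no network connection is needed.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(model: str, prompt: str, temperature: float = 0.2) -> dict:
    """Build an OpenAI-style chat-completion payload for the local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask_local(model: str, prompt: str) -> str:
    """POST the request to the local LM Studio server and return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        LMSTUDIO_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # The model name is whatever identifier LM Studio shows for the loaded
    # model; "qwen-4.6-36b" here is a hypothetical placeholder.
    print(ask_local("qwen-4.6-36b", "Write a Dockerfile for a Flask app."))
```

Because the API mirrors OpenAI's chat-completions format, editor plugins and CLI tools that accept a custom base URL can be pointed at the local server unchanged.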

