Ollama Now Runs Faster on Macs Thanks to Apple's MLX Framework


Ollama, the popular app for running AI models locally, has released an update that takes advantage of Apple's own machine learning framework, MLX. The result is a hefty speed boost on Macs with Apple silicon: according to Ollama, the new version processes prompts about 1.6 times faster (prefill speed) and generates responses nearly twice as fast (decode speed).
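For anyone who wants to check those figures on their own Mac, the sketch below is a minimal Python example, assuming a local Ollama server on its default port (11434) and a model that has already been pulled; the model name here is a placeholder, not something named in the article. It calls Ollama's /api/generate endpoint with streaming disabled and derives prefill and decode throughput from the timing fields in the response.

```python
import json
import urllib.request

# Query a local Ollama server and report prefill (prompt eval) and
# decode (response generation) speeds in tokens per second.
# Assumes Ollama is running on the default port and MODEL is installed.
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "llama3.2"  # placeholder; substitute any locally pulled model

payload = json.dumps({
    "model": MODEL,
    "prompt": "Explain Apple's MLX framework in one sentence.",
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# Ollama reports durations in nanoseconds.
prefill_tps = result["prompt_eval_count"] / (result["prompt_eval_duration"] / 1e9)
decode_tps = result["eval_count"] / (result["eval_duration"] / 1e9)
print(f"prefill: {prefill_tps:.1f} tok/s, decode: {decode_tps:.1f} tok/s")
```

The same statistics are also printed at the command line by running "ollama run <model> --verbose", where they appear as "prompt eval rate" and "eval rate", so the speedup can be compared before and after updating.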

Read the full story on Mac Rumors.

Read more on: Apple, Ollama, MLX framework

Related news:

The Origin Story of Apple’s Long-Running Relationship with Foxconn

Jason Snell: The Origin of Apple

Ollama is now powered by MLX on Apple Silicon in preview