Ollama

Stop Using Ollama

April 2026 TLDR Setup for Ollama and Gemma 4 26B on a Mac mini

Running local models on Macs gets faster with Ollama's MLX support: Apple Silicon Macs get a performance boost thanks to better unified memory usage.

Ollama Now Runs Faster on Macs Thanks to Apple's MLX Framework

Ollama is now powered by MLX on Apple Silicon in preview

Show HN: Timber – Ollama for classical ML models, 336x faster than Python

ollama 0.17 Released With Improved OpenClaw Onboarding

This local AI quickly replaced Ollama on my Mac - here's why

Installing Ollama and Gemma 3B on Linux

ollama 0.14 Can Make Use Of Bash For Letting AI/LLMs Run Commands On Your System

Show HN: Cover letter generator with Ollama/local LLMs (Open source)

ollama 0.12.11 Brings Vulkan Acceleration

ollama Rolls Out Experimental Vulkan Support For Expanded AMD & Intel GPU Coverage

Launch HN: Cactus (YC S25) – AI inference on smartphones

Finding thousands of exposed Ollama instances using Shodan

Don't want drive-by Ollama attackers snooping on your local chats? Patch now

Show HN: OWhisper – Ollama for realtime speech-to-text

Ollama and gguf

Jan – Ollama alternative with local UI

Ollama's new app