MLX

MLX vs CoreML on Apple Silicon: A Practical Guide to Picking the Right Backend — and Why You Should Use Both

Running local models on Macs gets faster with Ollama's MLX support | Apple Silicon Macs get a performance boost thanks to better unified memory usage.

Ollama is now powered by MLX on Apple Silicon in preview

R interface to Apple's MLX library

Qwen3 now supports ARM and MLX

Apple's MLX adding CUDA support

Alibaba launches new Qwen3 AI models for Apple's MLX architecture

Show HN: An Implementation of AlphaZero for Chess in MLX

Running Qwen3 on your MacBook, using MLX, to vibe code for free