Cerebras launches Qwen3-235B, achieving 1.5k tokens per second
Cerebras achieves 2,500 tokens/s on Llama 4 Maverick (400B)
Cerebras CEO actually finds common ground with Nvidia as startup notches IBM win
Meta unleashes Llama API running 18x faster than OpenAI: Cerebras partnership delivers 2,600 tokens per second
Nvidia challenger Cerebras says it's leaped Mid-East funding hurdle on way to IPO
Cerebras to light up datacenters in North America and France packed with AI accelerators
Cerebras just announced 6 new AI datacenters that process 40M tokens per second — and it could be bad news for Nvidia
Foundation Capital, an early backer of Solana and Cerebras, raises $600M fund
Cerebras-Perplexity deal targets $100B search market with ultra-fast AI
Cerebras becomes the world’s fastest host for DeepSeek R1, outpacing Nvidia GPUs by 57x
OpenAI at one point considered acquiring AI chip startup Cerebras
Cerebras Trains Llama Models to Leap over GPUs
AI chipmaker Cerebras files for IPO
How Cerebras is breaking the GPU bottleneck on AI inference
Cerebras Launches the Fastest AI Inference
Cerebras Inference: AI at Instant Speed
Cerebras launches inference for Llama 3.1; benchmarked at 1,846 tokens/s on 8B
Cerebras gives waferscale chips inferencing twist, claims 1,800 tokens per sec generation rates
Aleph Alpha enlists Cerebras waferscale supers to train AI for German military
Cerebras breaks ground on Condor Galaxy 3, an AI supercomputer that can hit 8 exaFLOPs