New AI text diffusion models break speed barriers by pulling words from noise
New diffusion models borrow technique from AI image synthesis for 10x speed boost.
LLaDA's researchers report their 8 billion parameter model performs similarly to LLaMA3 8B across various benchmarks, with competitive results on tasks like MMLU, ARC, and GSM8K. Mercury's documentation states its models run "at over 1,000 tokens/sec on Nvidia H100s, a speed previously possible only using custom chips" from specialized hardware providers like Groq, Cerebras, and SambaNova. Independent AI researcher Simon Willison told Ars Technica, "I love that people are experimenting with alternative architectures to transformers, it's yet another illustration of how much of the space of LLMs we haven't even started to explore yet."
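The "pulling words from noise" process behind these models can be illustrated with a toy masked-diffusion sampler: generation starts from a fully masked sequence, and over several denoising steps a fraction of positions are committed each round until the text is complete. This is only a sketch of the general idea: the vocabulary, step count, and the random stand-in for the trained denoiser are all illustrative assumptions, since real systems like LLaDA use a transformer to predict every masked token in parallel.

```python
import random

random.seed(0)

VOCAB = ["the", "cat", "sat", "on", "a", "mat"]  # toy vocabulary (assumption)
MASK = "<mask>"

def predict(tokens):
    """Stand-in for the trained denoiser: a real diffusion LM predicts
    all masked tokens in parallel with a neural network."""
    return [random.choice(VOCAB) if t == MASK else t for t in tokens]

def sample(length=6, steps=3):
    """Start from pure 'noise' (all masks) and iteratively commit
    a fraction of the predicted tokens each denoising step."""
    tokens = [MASK] * length
    for step in range(steps, 0, -1):
        masked = [i for i, t in enumerate(tokens) if t == MASK]
        if not masked:
            break
        proposal = predict(tokens)
        # commit 1/step of the remaining masked positions this round,
        # so every position is filled by the final step
        for i in random.sample(masked, max(1, len(masked) // step)):
            tokens[i] = proposal[i]
    return tokens

print(" ".join(sample()))
```

Because many positions are filled per step rather than one token at a time, a fixed number of denoising passes can replace hundreds of sequential decoding steps, which is the source of the speed advantage over autoregressive transformers.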