Consistency diffusion language models: Up to 14x faster, no quality loss
Standard diffusion language models can't use KV caching, and they need too many refinement steps to be practical. CDLM fixes both with a post-training recipe that enables exact block-wise KV caching and trajectory-consistent step reduction, delivering up to a 14.5x latency improvement.
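The blurb compresses two ideas that are easier to see in code. In standard diffusion decoding, every refinement step re-predicts the whole sequence, so nothing can be cached; if generation instead proceeds block by block, and each block is frozen after a handful of consistency-distilled refinement steps, the attention state of finished blocks can be cached exactly, much as in an autoregressive model. Below is a minimal toy sketch of that decoding loop; `denoise_block`, `KVCache`, and the step counts are illustrative assumptions, not the paper's actual API.

```python
import random

# All names here are hypothetical stand-ins for illustration,
# not the CDLM authors' interface.
MASK = "<mask>"
BLOCK_SIZE = 4
REFINE_STEPS = 2  # a consistency-trained model needs only a few steps per block
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

class KVCache:
    """Holds the state of finalized blocks only, so earlier blocks are
    never re-encoded (the 'exact block-wise caching' idea)."""
    def __init__(self):
        self.blocks = []

    def append(self, block_state):
        self.blocks.append(block_state)

def denoise_block(cache, block, step):
    """Toy one-step denoiser: fills some masked positions, conditioning
    (in a real model) on the cached blocks plus the current block.
    Here the cache is unused because the 'model' is random."""
    filled = []
    for tok in block:
        if tok == MASK and (step == REFINE_STEPS - 1 or random.random() < 0.5):
            filled.append(random.choice(VOCAB))  # final step fills everything
        else:
            filled.append(tok)
    return filled

def generate(num_blocks):
    cache = KVCache()
    output = []
    for _ in range(num_blocks):
        block = [MASK] * BLOCK_SIZE
        # Few refinement steps per block: consistency training collapses
        # the long denoising trajectory into a short, consistent one.
        for step in range(REFINE_STEPS):
            block = denoise_block(cache, block, step)
        # The block is now final: cache its state once, never recompute it.
        cache.append(tuple(block))
        output.extend(block)
    return " ".join(output)

if __name__ == "__main__":
    random.seed(0)
    print(generate(num_blocks=3))
```

The speedup in the headline comes from both halves: fewer refinement steps per block multiplies with the per-step savings from never re-encoding finished blocks.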