Consistency LLM

Consistency LLM: converting LLMs to parallel decoders accelerates inference 3.5x
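The parallel decoding scheme behind this result is Jacobi decoding: instead of generating one token per model call, the decoder guesses an entire n-token block, then repeatedly re-predicts all n positions in parallel from the current guess until the block stops changing. The fixed point matches greedy autoregressive output, but convergence can take far fewer than n iterations when the model gets several tokens right at once, which is what Consistency LLM's training encourages. Below is a minimal, runnable sketch of the iteration, assuming a toy stand-in for the model: `next_token`, the block size `n`, and all other names are illustrative, not the authors' implementation.

```python
def next_token(prefix):
    # Toy deterministic "model": returns the greedy next token for a prefix.
    # In a real decoder this would be an argmax over LLM logits, and all n
    # positions would be scored with a single forward pass per iteration.
    return (sum(prefix) * 31 + len(prefix)) % 50


def jacobi_decode(prompt, n=8, max_iters=100):
    """Decode an n-token block in parallel via fixed-point (Jacobi) iteration.

    Greedy autoregressive decoding needs n sequential model calls; Jacobi
    decoding needs one parallel call per iteration and converges in at most
    n iterations, fewer when several guessed tokens are already correct.
    """
    guess = [0] * n  # arbitrary initialization of the whole block
    for _ in range(max_iters):
        # Parallel update: every position i is re-predicted from the prompt
        # plus the *previous iteration's* guess for positions < i.
        new = [next_token(prompt + guess[:i]) for i in range(n)]
        if new == guess:  # fixed point reached: block equals greedy output
            return guess
        guess = new
    return guess


print(jacobi_decode([3, 1, 4], n=8))
```

In this sketch each iteration still calls `next_token` position by position; the speedup in practice comes from batching all n positions into one forward pass, so wall-clock cost scales with the number of iterations rather than the number of tokens.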