
Imagine Flash: Accelerating Emu Diffusion Models with Backward Distillation


Diffusion models are a powerful generative framework, but come with expensive inference. Existing acceleration methods often compromise image quality or...

Our approach comprises three key components: (i) Backward Distillation, which mitigates training-inference discrepancies by calibrating the student on its own backward trajectory; (ii) Shifted Reconstruction Loss that dynamically adapts knowledge transfer based on the current time step; and (iii) Noise Correction, an inference-time technique that enhances sample quality by addressing singularities in noise prediction.
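The abstract only names the three components, so the sketch below illustrates the first of them, Backward Distillation, under simplifying assumptions: a PyTorch epsilon-prediction student/teacher pair, a deterministic DDIM-style update, and a plain x0 reconstruction loss standing in for the paper's Shifted Reconstruction Loss and Noise Correction. All identifiers (student, teacher, alpha_bar, schedule) are hypothetical and not taken from the authors' code.

```python
# Minimal sketch of the backward-distillation idea, assuming epsilon-prediction
# models callable as model(x_t, t) and a precomputed alpha_bar schedule tensor.
import torch

def ddim_step(eps_model, x_t, t, t_prev, alpha_bar):
    """One deterministic (DDIM-style) denoising step from t to t_prev."""
    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
    eps = eps_model(x_t, t)
    x0_hat = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()       # predicted clean image
    return a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps  # move to t_prev

def backward_distillation_loss(student, teacher, alpha_bar, schedule, x_T):
    """
    Calibrate the student on its OWN backward trajectory: start from pure
    noise x_T, and at each state the student visits, ask the teacher for a
    reconstruction target. No forward-diffused ground-truth images are used,
    so the states seen in training match the states seen at inference.
    """
    loss = 0.0
    x_t = x_T
    for t, t_prev in zip(schedule[:-1], schedule[1:]):
        a_t = alpha_bar[t]
        with torch.no_grad():                       # teacher target at the student's state
            eps_teacher = teacher(x_t, t)
            x0_target = (x_t - (1 - a_t).sqrt() * eps_teacher) / a_t.sqrt()
        eps_student = student(x_t, t)
        x0_student = (x_t - (1 - a_t).sqrt() * eps_student) / a_t.sqrt()
        loss = loss + torch.mean((x0_student - x0_target) ** 2)
        with torch.no_grad():                       # advance along the student's trajectory
            x_t = ddim_step(student, x_t, t, t_prev, alpha_bar)
    return loss / (len(schedule) - 1)
```

The point the sketch tries to capture is that the intermediate states x_t come from the student's own few-step denoising trajectory starting at pure noise, rather than from forward-diffusing ground-truth images, which is how the training-inference discrepancy mentioned above is avoided.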
