
Paper Review: Variational Lossy Auto-Encoders


This post is part of a series of paper reviews, covering the ~30 papers Ilya Sutskever sent to John Carmack to learn about AI.

Maybe you want the RNN to output things that are "happy" or "sad" or "formal" or "informal", or maybe you want it to be able to reference back to some important context that would otherwise get lost over long generation sequences. If the authors can use VLAEs to consistently regenerate the information they care about from a very small vector representation, they have created a very powerful, semantically aware compression algorithm. There's deep intuition to be gained here: if you can grok why the RNN will always ignore the latent variable, you've understood something fundamental, and difficult to put into words, about how models learn.
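To make that intuition concrete, here is a minimal sketch of a sequence VAE with an autoregressive GRU decoder. This is not the authors' model; it's an illustrative toy in PyTorch, and all of the module names, dimensions, and the random toy data are assumptions. The point it demonstrates is the trade the ELBO sets up: every bit of information the encoder stores in the latent z is paid for by the KL term, so if the autoregressive decoder can model the data well enough on its own, the cheapest optimum is to ignore z entirely.

```python
# Minimal sketch (PyTorch, hypothetical names/shapes): a VAE whose decoder is an
# autoregressive RNN. The comments mark where the "decoder ignores z" failure
# mode comes from: the KL term charges for every bit stored in the latent.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RNNVAE(nn.Module):
    def __init__(self, vocab_size=128, embed_dim=64, hidden_dim=256, latent_dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Encoder: summarize the sequence, then predict q(z|x) = N(mu, sigma^2).
        self.enc_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder: an autoregressive GRU that sees the previous token AND z.
        self.dec_rnn = nn.GRU(embed_dim + latent_dim, hidden_dim, batch_first=True)
        self.to_logits = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x):
        # x: (batch, seq_len) integer tokens.
        emb = self.embed(x)
        _, h = self.enc_rnn(emb)                       # h: (1, batch, hidden)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize

        # Teacher-forced decoding: z is broadcast to every timestep, so the
        # decoder *could* use it, but nothing forces it to.
        z_rep = z.unsqueeze(1).expand(-1, x.size(1), -1)
        dec_in = torch.cat([emb, z_rep], dim=-1)
        out, _ = self.dec_rnn(dec_in)
        logits = self.to_logits(out)

        # Negative ELBO = reconstruction loss + KL(q(z|x) || p(z)).
        # Predict token t from tokens < t (shift targets by one position).
        recon = F.cross_entropy(
            logits[:, :-1].reshape(-1, logits.size(-1)),
            x[:, 1:].reshape(-1),
            reduction="sum",
        )
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        # If the GRU alone can drive `recon` low, the optimizer minimizes the
        # total by pushing `kl` to zero, i.e. collapsing q(z|x) to the prior
        # and encoding nothing in the latent.
        return recon + kl

# Usage sketch on random tokens (hypothetical data).
model = RNNVAE()
x = torch.randint(0, 128, (8, 20))
loss = model(x)
loss.backward()
```

The design point is in the last comment: storing information in z costs KL nats, while a sufficiently expressive autoregressive decoder can win those nats back for free, which is exactly why the latent tends to be ignored unless the decoder's capacity is deliberately restricted.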
