A Summary of Ilya Sutskever's AI Reading List
24 Sep 2024, Taro Langner

Earlier this year, a reading list of about 30 papers was shared on Twitter. It reportedly forms part of a longer list originally compiled in 2020 by Ilya Sutskever, then co-founder and chief scientist of OpenAI, for John Carmack, with the remark: 'If you really learn all of these, you'll know 90% of what matters'.
The list builds up from linear classifiers, which learn a given task through mathematical optimization, or training: their internal parameter weights are adjusted so that applying them to input data produces more desirable outputs. Deep neural networks extend this idea, and one network on the list, AlexNet, outperformed its competitors in the 2012 ImageNet benchmark challenge (predicting whether a given input image contained e.g. a cat, dog, ship or any other of 1,000 possible classes) so conclusively that the real-world dominance of deep learning became commonly accepted. Later entries turn to representation learning: one proposed approach deliberately weakens an autoencoder's decoder (e.g. by limiting it to reconstructing small receptive fields) so that the missing information (e.g. global structure) must be fully provided by the latent code to which the input is compressed.
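The idea of training a linear classifier by adjusting its weights toward more desirable outputs can be sketched in a few lines. The following is a minimal illustration, not code from any paper on the list: a logistic-regression classifier fit by gradient descent on a toy two-cluster dataset, where the data, learning rate, and iteration count are all arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task: two well-separated 2-D point clouds.
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)),
               rng.normal(+1.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w = np.zeros(2)   # internal parameter weights, adjusted by training
b = 0.0           # bias term
lr = 0.1          # learning rate (illustrative value)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent on the logistic (cross-entropy) loss:
# each step nudges (w, b) so the outputs better match the labels.
for _ in range(500):
    p = sigmoid(X @ w + b)            # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)   # gradient of the loss w.r.t. w
    grad_b = np.mean(p - y)           # gradient of the loss w.r.t. b
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```

On data this cleanly separated, the trained weights classify nearly all points correctly; deep networks stack many such trainable layers with nonlinearities between them.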