An overview of gradient descent optimization algorithms (2016)


Gradient descent is the preferred way to optimize neural networks and many other machine learning algorithms but is often used as a black box. This post explores how many of the most popular gradient-based optimization algorithms such as Momentum, Adagrad, and Adam actually work.
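The post walks through the update rule of each method. Below is a minimal sketch of three of them (my own illustration, not code from the post; the toy quadratic loss, function names, and hyperparameter values are assumptions), using NumPy:

import numpy as np

def grad(theta):
    # Gradient of a toy quadratic loss f(theta) = 0.5 * theta.T @ A @ theta (illustrative).
    A = np.array([[3.0, 0.0], [0.0, 1.0]])
    return A @ theta

def momentum_step(theta, v, lr=0.01, gamma=0.9):
    # Momentum: accumulate an exponentially decaying velocity and move along it.
    v = gamma * v + lr * grad(theta)
    return theta - v, v

def adagrad_step(theta, G, lr=0.1, eps=1e-8):
    # Adagrad: per-parameter learning rates, scaled down by accumulated squared gradients.
    g = grad(theta)
    G = G + g ** 2
    return theta - lr * g / (np.sqrt(G) + eps), G

def adam_step(theta, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # Adam: bias-corrected estimates of the gradient's first and second moments.
    g = grad(theta)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Example: one Adam step starting from theta = [1.0, 1.0].
theta, m, v = adam_step(np.ones(2), np.zeros(2), np.zeros(2), t=1)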

As adaptive learning rate methods have become the norm in training neural networks, practitioners have noticed that in some cases, e.g. for object recognition or machine translation, they fail to converge to an optimal solution and are outperformed by SGD with momentum. The post's visualizations illustrate the differences in behaviour: on loss surface contours, Adagrad, Adadelta, and RMSprop almost immediately head off in the right direction and converge similarly fast, while Momentum and NAG are led off-track, evoking the image of a ball rolling down a hill. At a saddle point, SGD, Momentum, and NAG find it difficult to break symmetry, although the latter two eventually manage to escape, while Adagrad, RMSprop, and Adadelta quickly head down the negative slope.
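The saddle-point behaviour is easy to reproduce on a toy surface. The sketch below is my own illustration (the surface f(x, y) = x^2 - y^2, the starting point, and the hyperparameters are assumptions, not taken from the post): plain SGD barely moves off the ridge, while RMSprop's per-parameter rescaling of the tiny gradient along y lets it escape quickly.

import numpy as np

def grad(p):
    # f(x, y) = x^2 - y^2 has a saddle point at the origin.
    x, y = p
    return np.array([2 * x, -2 * y])

def run_sgd(steps=200, lr=0.01):
    p = np.array([1.0, 1e-4])          # start almost exactly on the ridge
    for _ in range(steps):
        p = p - lr * grad(p)
    return p

def run_rmsprop(steps=200, lr=0.01, decay=0.9, eps=1e-8):
    p = np.array([1.0, 1e-4])
    avg_sq = np.zeros(2)               # running average of squared gradients
    for _ in range(steps):
        g = grad(p)
        avg_sq = decay * avg_sq + (1 - decay) * g ** 2
        p = p - lr * g / (np.sqrt(avg_sq) + eps)   # per-parameter step size
    return p

print("SGD     :", run_sgd())      # |y| barely grows: still stuck near the saddle
print("RMSprop :", run_rmsprop())  # |y| grows quickly: escapes down the negative slope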


Related news:

How to become a Data Scientist? My journey, overview of skill set, practice tips

Overview of cross-architecture portability problems

An overview of binaries, ELF, and NoMMU on Linux