Forward propagation of errors through time


¹Stanford University, ²University of Groningen, ³Google DeepMind

February 17, 2026 · Code

TL;DR: We investigate a fundamental question in recurrent neural network training: why is backpropagation through time always run backwards? We show, by deriving an exact gradient-based algorithm that propagates error forward in time (in multiple phases), that it need not be! However, while the math holds up, the algorithm suffers from critical numerical stability issues as the network forgets information faster. This post details the derivation, the successful experiments, an analysis of why this promising idea fails numerically, and the reasons we did not investigate it further.
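The post's multi-phase algorithm is not reproduced in this summary, but a minimal scalar sketch can illustrate both the idea and the failure mode. Assume a linear recurrence h_t = w·h_{t-1} + x_t with loss L = h_T: BPTT accumulates dL/dw in a backward sweep over stored states, while the same sum can be rearranged into a single forward sweep at the cost of dividing by w at every step, and those 1/w factors explode exactly when |w| < 1, i.e. when the network forgets quickly. The toy setup and function names below are ours for illustration, not the post's algorithm.

```python
import numpy as np

def bptt_grad(w, xs, h0=0.0):
    """Exact dL/dw for h_t = w*h_{t-1} + x_t with L = h_T,
    computed the usual way: a backward sweep over the stored states."""
    hs = [h0]
    for x in xs:                     # forward pass, storing every state
        hs.append(w * hs[-1] + x)
    grad, delta = 0.0, 1.0           # delta = dL/dh_t, starting at t = T
    for t in range(len(xs), 0, -1):  # backward sweep (BPTT)
        grad += delta * hs[t - 1]    # contribution of w at step t
        delta *= w                   # push the error one step back in time
    return grad

def forward_grad(w, xs, h0=0.0):
    """The same sum, sum_t h_{t-1} * w^(T-t), rearranged as
    w^T * sum_t h_{t-1} * w^(-t) so it runs in one forward sweep.
    Algebraically exact, but w^(-t) explodes whenever |w| < 1,
    i.e. precisely when the network forgets quickly."""
    h, s, winv_t = h0, 0.0, 1.0
    for x in xs:
        winv_t /= w                  # w^(-t): grows without bound if |w| < 1
        s += h * winv_t              # accumulate h_{t-1} * w^(-t)
        h = w * h + x                # ordinary forward dynamics
    return (w ** len(xs)) * s        # rescale by w^T at the very end

rng = np.random.default_rng(0)
xs = rng.normal(size=2000)
for w in (0.999, 0.9, 0.5):          # faster forgetting -> harder failure
    print(f"w={w}: bptt={bptt_grad(w, xs):.6g}  forward={forward_grad(w, xs):.6g}")
```

Running this, the two sweeps agree (up to float rounding) for w near 1, where the network forgets slowly; for w = 0.5 the w^{-t} factors overflow after roughly a thousand steps and the forward sweep returns nan. The matrix case is analogous, with inverses of the recurrent Jacobian playing the role of 1/w, which matches the forgetting-rate-dependent instability the TL;DR describes.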
