
Brain learning differs fundamentally from artificial intelligence systems


This paper introduces ‘prospective configuration’, a new principle for learning in neural networks. It differs from backpropagation, learns more efficiently, and is more consistent with experimental data on neural activity and behavior.
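To make the principle concrete, here is a minimal, hedged sketch (our own construction, not the paper's code) of the two-phase idea behind prospective configuration in a small energy-based, predictive-coding-style network: with the input and the target clamped, the network first relaxes its hidden activity to the configuration it would prospectively need in order to produce the target, and only then changes its weights toward that inferred activity. Layer sizes, learning rates, and the toy task are illustrative assumptions.

```python
# Hedged sketch: 'infer activity first, then update weights' in a small
# linear predictive-coding network. Sizes, rates, and the toy task are
# assumptions for illustration, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2
W1 = 0.5 * rng.standard_normal((n_hid, n_in))   # input -> hidden
W2 = 0.5 * rng.standard_normal((n_out, n_hid))  # hidden -> output

def train_step(x, y, n_inf=50, lr_h=0.1, lr_w=0.05):
    """One step: relax hidden activity with x and y clamped, then learn."""
    global W1, W2
    h = W1 @ x                      # start from the feedforward activity
    for _ in range(n_inf):          # inference phase: descend the energy
        e1 = h - W1 @ x             #   E = |h - W1 x|^2/2 + |y - W2 h|^2/2
        e2 = y - W2 @ h             #   over h, with the output clamped to y
        h -= lr_h * (e1 - W2.T @ e2)
    # plasticity phase: weights consolidate the prospectively inferred h
    W1 += lr_w * np.outer(h - W1 @ x, x)
    W2 += lr_w * np.outer(y - W2 @ h, h)

x = rng.standard_normal(n_in)
y = np.array([1.0, 0.0])
for _ in range(200):
    train_step(x, y)
print(np.round(W2 @ (W1 @ x), 2))   # feedforward output approaches target
```

The contrast with backpropagation is the order of operations: backpropagation changes weights first and lets activity follow, whereas here the target-consistent activity is inferred first and the weight change merely consolidates it.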

A common way to simulate this is ‘concept drifting’ [31], where part of the mapping between output neurons and their semantic meanings is reshuffled at regular intervals, each time a fixed number of training iterations has passed (a sketch of this protocol follows the figure note below).

These data were previously explained by complex and abstract mechanisms, such as Bayesian models [38, 39]; here, we show mechanistically with prospective configuration how such inference can be performed by minimal networks encoding only the essential elements of the tasks.

[Figure, panel c: activity of the output neuron corresponding to the selected option, from networks trained with prospective configuration and backpropagation, compared with fMRI data from human participants (peak blood-oxygenation-level-dependent (%BOLD) signal in the mPFC).]
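Returning to the concept-drifting protocol described above, here is a hedged sketch of how such a benchmark can be simulated; `period`, `n_shuffle`, and the label count are invented parameters, not values from the paper.

```python
# Hedged sketch of concept drifting: every `period` training iterations,
# the semantic meanings of a random subset of output neurons are reshuffled.
import numpy as np

rng = np.random.default_rng(1)
n_labels = 10
mapping = np.arange(n_labels)  # output neuron i currently codes label mapping[i]

def drift(mapping, n_shuffle=4):
    """Permute the meanings of `n_shuffle` randomly chosen output neurons."""
    idx = rng.choice(len(mapping), size=n_shuffle, replace=False)
    new = mapping.copy()
    new[idx] = rng.permutation(new[idx])
    return new

period = 1000                  # training iterations between drift events
for it in range(1, 5001):
    if it % period == 0:
        mapping = drift(mapping)
    # ...one ordinary training iteration would go here; the target output
    # neuron for a sample of class c is the i with mapping[i] == c...
```

After each drift event the network must relearn only the shuffled associations while preserving the rest, which is the kind of regime in which the authors compare interference under prospective configuration and backpropagation.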
