
What's Going on in Machine Learning? Some Minimal Models


Stephen Wolfram explores minimal models and their visualizations, aiming to explain the underlying functionality of neural nets and, ultimately, machine learning.

Let’s say we start from a random rule array, then repeatedly construct the change map and apply the mutation that it implies gives the most positive change—in effect, at each step following the “path of steepest descent” to get to the lifetime we want (i.e., to reduce the loss).

The idea that the brain is fundamentally made of connected nerve cells was considered in the latter part of the nineteenth century, and took hold in the first decades of the twentieth century—with the formalized concept of a neural net that operates in a computational way emerging in full form in the work of Warren McCulloch and Walter Pitts in 1943.

Thanks also to Brad Klee, Tianyi Gu, Nik Murzin and Max Niederman for specific results, to George Morgan and others at Symbolica for their early interest, and to Kovas Boguta for suggesting many years ago to link machine learning to the ideas in A New Kind of Science.
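The steepest-descent mutation loop described above can be sketched roughly as follows. This is a hedged toy version, not Wolfram's actual cellular-automaton setup: the rule array is reduced to a list of bits, and `lifetime` is a placeholder objective (it simply counts set bits) standing in for the number of steps a pattern survives under the rule array.

```python
import random

def lifetime(rules):
    # Placeholder objective: in the article this would be the number of
    # steps a cellular-automaton pattern survives under the rule array.
    # Here it just counts set bits, so the loss landscape is trivially smooth.
    return sum(rules)

def loss(rules, target):
    # Distance from the lifetime we want.
    return abs(lifetime(rules) - target)

def steepest_descent(rules, target, max_steps=100):
    rules = list(rules)
    for _ in range(max_steps):
        current = loss(rules, target)
        if current == 0:
            break
        # "Change map": the loss change produced by every single-cell mutation.
        changes = []
        for i in range(len(rules)):
            rules[i] ^= 1                                  # try the mutation
            changes.append((current - loss(rules, target), i))
            rules[i] ^= 1                                  # undo it
        best_gain, best_i = max(changes)
        if best_gain <= 0:
            break  # local minimum: no single mutation improves the loss
        rules[best_i] ^= 1  # apply the most-improving mutation
    return rules

random.seed(0)
start = [random.randint(0, 1) for _ in range(20)]
trained = steepest_descent(start, target=15)
```

With this smooth stand-in objective the loop always reaches the target; with a realistic rule-array lifetime the same greedy scheme can stall at local minima, which is exactly why the article examines what such "change maps" look like in practice.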
