
Kolmogorov-Arnold networks may make neural networks more understandable


By tapping into a decades-old mathematical principle, researchers are hoping that Kolmogorov-Arnold networks will facilitate scientific discovery.

A 1989 paper co-authored by Tomaso Poggio, a physicist turned computational neuroscientist at the Massachusetts Institute of Technology, explicitly stated that the mathematical idea at the heart of a KAN is “irrelevant in the context of networks for learning.” Decades later, Ziming Liu and his adviser, the MIT physicist Max Tegmark, had been working on making neural networks more understandable for scientific applications, hoping to offer a peek inside the black box, but things weren’t panning out. Their work led to KANs, and follow-up results suggest the idea has practical bite: a paper by Yizheng Wang of Tsinghua University and others that appeared online in June showed that their Kolmogorov-Arnold-informed neural network (KINN) “significantly outperforms” MLPs for solving partial differential equations (PDEs).
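The Kolmogorov-Arnold representation theorem writes any continuous multivariate function as a two-level composition of univariate functions, f(x) = Σ_q Φ_q(Σ_p φ_{q,p}(x_p)); a KAN makes those univariate edge functions the learnable parts of the network, in contrast to an MLP, which learns edge weights and keeps its activation functions fixed. As a rough illustration only (this is not the authors' implementation; the class name, the Gaussian-bump basis, and every parameter below are invented for this sketch, whereas the original KAN paper parameterizes its edge functions with B-splines), a minimal NumPy version of a KAN-style layer might look like:

```python
import numpy as np

class KANLayer:
    """One Kolmogorov-Arnold-style layer: every edge (input i -> output j)
    carries its own learnable 1-D function phi_{j,i}, here a weighted sum
    of Gaussian bumps on a fixed grid (a simple stand-in for the B-spline
    bases used in the KAN literature)."""

    def __init__(self, n_in, n_out, n_basis=8, rng=None):
        rng = rng or np.random.default_rng(0)
        self.centers = np.linspace(-2.0, 2.0, n_basis)  # shared 1-D grid
        self.width = self.centers[1] - self.centers[0]
        # One coefficient vector per edge: shape (n_out, n_in, n_basis).
        # These coefficients are what training would adjust.
        self.coef = rng.normal(0.0, 0.1, (n_out, n_in, n_basis))

    def _basis(self, x):
        # x: (batch, n_in) -> (batch, n_in, n_basis) Gaussian bump values
        d = x[..., None] - self.centers
        return np.exp(-(d / self.width) ** 2)

    def forward(self, x):
        # Evaluate phi_{j,i}(x_i) on every edge, then sum over inputs i.
        b = self._basis(x)                               # (batch, n_in, n_basis)
        edge = np.einsum('bik,jik->bji', b, self.coef)   # (batch, n_out, n_in)
        return edge.sum(axis=2)                          # (batch, n_out)

# Two stacked layers mirror the two-level Kolmogorov-Arnold form
# f(x) = sum_q Phi_q( sum_p phi_{q,p}(x_p) ).
inner = KANLayer(n_in=2, n_out=5)
outer = KANLayer(n_in=5, n_out=1)
x = np.array([[0.3, -0.7]])
y = outer.forward(inner.forward(x))
print(y.shape)  # (1, 1)
```

Because each edge function is a low-dimensional curve, it can be plotted or replaced by a symbolic formula after training, which is the source of the interpretability claim discussed above.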


Read more on: neural networks, novel architecture

Related news:

AI has yet to pay off – or is transforming business: calculating ROI of neural networks turns out to be rather complicated.

Microsoft CEO of AI: Your online content is 'freeware' fodder for training models

Why neural networks struggle with the Game of Life (2020)