Kolmogorov-Arnold networks may make neural networks more understandable
By tapping into a decades-old mathematical principle, researchers are hoping that Kolmogorov-Arnold networks will facilitate scientific discovery.
A 1989 paper co-authored by Tomaso Poggio, a physicist turned computational neuroscientist at the Massachusetts Institute of Technology, explicitly stated that the mathematical idea at the heart of a KAN is “irrelevant in the context of networks for learning.”

Ziming Liu, a physics graduate student at MIT, and his adviser, the MIT physicist Max Tegmark, had been working on making neural networks more understandable for scientific applications, hoping to offer a peek inside the black box, but things weren’t panning out.

A paper by Yizheng Wang of Tsinghua University and others that appeared online in June showed that their Kolmogorov-Arnold-informed neural network (KINN) “significantly outperforms” MLPs for solving partial differential equations (PDEs).
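The “decades-old mathematical principle” behind KANs is the Kolmogorov-Arnold representation: a multivariate function can be built from sums of one-variable functions. The toy sketch below is an assumption-level illustration of that idea, not the KAN paper’s actual spline-based implementation: each edge of the layer carries its own learnable 1-D function (here a weighted sum of fixed Gaussian bumps), and each node simply sums its incoming edges. All names (`edge_fn`, `kan_layer`) and the choice of basis are hypothetical.

```python
import math
import random

def edge_fn(x, coeffs, centers, width=0.5):
    """Learnable 1-D function on a single edge: a weighted sum of
    fixed Gaussian bumps; the weights `coeffs` are the trainable part."""
    return sum(c * math.exp(-((x - m) / width) ** 2)
               for c, m in zip(coeffs, centers))

def kan_layer(xs, edge_coeffs, centers):
    """One KAN-style layer: each output node sums its own learnable
    1-D edge functions applied to each input (no fixed activation)."""
    return [sum(edge_fn(x, coeffs, centers) for x, coeffs in zip(xs, row))
            for row in edge_coeffs]

random.seed(0)
n_in, n_out, n_basis = 3, 2, 5
# Fixed grid of bump centers on [-1, 1]
centers = [-1 + 2 * k / (n_basis - 1) for k in range(n_basis)]
# One coefficient vector per edge: shape (n_out, n_in, n_basis)
edge_coeffs = [[[random.gauss(0, 1) for _ in range(n_basis)]
                for _ in range(n_in)] for _ in range(n_out)]
xs = [random.uniform(-1, 1) for _ in range(n_in)]
print(len(kan_layer(xs, edge_coeffs, centers)))  # 2
```

The contrast with an MLP layer is that here the nonlinearity lives on the edges as an inspectable 1-D curve, which is what makes the resulting network easier to read off for scientific use.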