What If We Had Bigger Brains? Imagining Minds Beyond Ours
Stephen Wolfram explores how the number of neural connections affects capabilities like language and abstraction, and how much further minds could go, drawing on neural nets and LLMs, the fundamental nature of computation, and what neuroscience tells us about the operation of actual brains.
My purpose here is to start exploring such questions, informed by what we’ve seen in recent years in neural nets and LLMs, as well as by what we now know about the fundamental nature of computation, and about neuroscience and the operation of actual brains (like the one that’s writing this).

And in particular, symbolic expressions can be thought of “grammatically” as consisting of nested functions that form a tree-like structure: effectively a more precise version of the typical kind of grammar that we find in human language (a small sketch of this tree view follows below).

Thanks in particular to Richard Assar, Joscha Bach, Kovas Boguta, Thomas Dullien, Dugan Hammock, Christopher Lord, Fred Meinberg, Nora Popescu, Philip Rosedale, Terry Sejnowski, Hikari Sorensen, and James Wiles for recent discussions about the topics covered here.
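To make the tree picture of symbolic expressions concrete, here is a minimal Python sketch (an illustration added for this excerpt, not code from the essay): it treats an expression like f[g[x], h[y, z]] as a "head" with child sub-expressions and prints the nested structure as an indented tree.

```python
# Minimal sketch: a symbolic expression as a tree of nested functions.
# (Illustrative only; the names Expr, head, and args are assumptions,
# not notation from the essay.)

from dataclasses import dataclass, field


@dataclass
class Expr:
    head: str                                   # function/operator name, e.g. "f"
    args: list = field(default_factory=list)    # sub-expressions or atomic leaves

    def show(self, indent: int = 0) -> None:
        """Print this expression as an indented tree."""
        print("  " * indent + self.head)
        for arg in self.args:
            if isinstance(arg, Expr):
                arg.show(indent + 1)            # nested function: recurse
            else:
                print("  " * (indent + 1) + str(arg))  # atom: a leaf of the tree


# f[g[x], h[y, z]] -- heads nest like clauses in a sentence,
# giving a more rigid analogue of a natural-language grammar tree.
expr = Expr("f", [Expr("g", ["x"]), Expr("h", ["y", "z"])])
expr.show()
```

Running this prints `f` at the root with `g[x]` and `h[y, z]` as its two subtrees, which is the sense in which nested functions play the role that grammatical structure plays in a human-language sentence.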