
What are we missing when we dismiss the Transformer as biologically unreasonable?


Form Over Function (II): Why Your Brain Isn’t a Computer, But Your Computer Can Simulate a Brain

In my previous article, “Form Over Function: Why Dynamic Sparsity is the Only Path to AGI”, we reached a core conclusion: the pursuit of intelligence should focus on realizing macroscopic computational functions, rather than imitating microscopic biological forms. We saw that the Transformer architecture, especially its sparse MoE variants, inadvertently became the best engineering approximation of two core functions of the brain: the “global workspace” and “dynamic sparse activation.”
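To make “dynamic sparse activation” concrete, here is a minimal NumPy sketch of top-k expert routing in the style of a sparse MoE layer. The sizes and names (router_w, expert_w, moe_layer) are hypothetical illustrations, not code from the article; the point is only that the input itself decides which small subset of experts does any work, so the activation pattern is sparse and changes from token to token.

```python
# Minimal sketch (assumed, not the article's code): top-k expert routing
# as an illustration of dynamic sparse activation in a sparse MoE layer.
import numpy as np

rng = np.random.default_rng(0)

n_experts, d_model, top_k = 8, 16, 2                    # hypothetical sizes
router_w = rng.normal(size=(d_model, n_experts))        # routing weights
expert_w = rng.normal(size=(n_experts, d_model, d_model))  # one matrix per expert

def moe_layer(x):
    """x: (d_model,) token vector -> (d_model,) output.

    Only top_k of n_experts are evaluated for this token; which ones
    depends on the input, so the sparsity pattern is dynamic."""
    logits = x @ router_w                    # (n_experts,) routing scores
    chosen = np.argsort(logits)[-top_k:]     # indices of the k best experts
    gates = np.exp(logits[chosen])
    gates /= gates.sum()                     # normalize gates over chosen experts
    out = np.zeros_like(x)
    for g, e in zip(gates, chosen):
        out += g * (x @ expert_w[e])         # only the selected experts do work
    return out

token = rng.normal(size=d_model)
print(moe_layer(token).shape)                # (16,)
```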

Many acknowledge the functional superiority of the Transformer, but deep down a lingering question remains: how can a mathematical model with global attention be compared to a wet brain composed of locally connected neurons? The previous article’s answer was functional, built on the global workspace: this workspace allows highly processed information from different senses and memory modules to be integrated and “broadcast” to all other cognitive subsystems, forming a unified, coherent conscious experience, and global attention is the engineering counterpart of that broadcast. The conclusion: at the microscopic physical level, what truly corresponds to a biological neuron is not the Transformer’s mathematical formula, but the billions of parallel-operating transistors on a GPU, each performing simple local computations.
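As an illustration of the “broadcast” role attributed to global attention, here is a minimal single-head self-attention sketch in NumPy. The dimensions and names are assumptions for illustration, not code from the article; it shows that in a single layer, every position forms a weighted summary of every other position, the engineering analogue of workspace-style integration.

```python
# Minimal sketch (assumed): single-head global self-attention, showing how
# one step lets every position read a weighted "broadcast" of all others.
import numpy as np

rng = np.random.default_rng(1)
seq_len, d = 6, 8                              # hypothetical sizes
x = rng.normal(size=(seq_len, d))              # token representations
w_q, w_k, w_v = (rng.normal(size=(d, d)) for _ in range(3))

def global_attention(x):
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(d)              # (seq_len, seq_len): all-to-all scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over all positions
    return weights @ v                         # each position integrates all positions

print(global_attention(x).shape)               # (6, 8): one layer, global mixing
```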

Or read this on Hacker News

Read more on: Transformer, biology

Related news:

We reimagined Transformer architectures inspired by nature's hidden structures

Machine Learning: The Native Language of Biology

Transformer neural net learns to run Conway's Game of Life just from examples