Sakana introduces new AI architecture, ‘Continuous Thought Machines’ to make models reason with less guidance — like human brains
While the CTM shows strong promise, it is still primarily a research architecture and is not yet production-ready out of the box.
Most modern large language models (LLMs) are still fundamentally based on the Transformer architecture outlined in the seminal 2017 paper from Google Brain researchers, "Attention Is All You Need." The CTM departs from that template: its ability to adaptively allocate compute, self-regulate its depth of reasoning, and offer clear interpretability may prove highly valuable in production systems facing variable input complexity or strict regulatory requirements.

As large incumbents like OpenAI and Google double down on foundation models, Sakana is charting a different course: small, dynamic, biologically inspired systems that think in time, collaborate by design, and evolve through experience.
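To make the idea of adaptive compute concrete, here is a minimal sketch of a model that "thinks" over internal ticks and exits early once its prediction is confident enough, so easy inputs use fewer steps than hard ones. This is an illustrative toy, not Sakana's actual CTM implementation; the names (TickModel, max_ticks, confidence_threshold) and the GRU-based update are assumptions made for the example.

```python
# Hypothetical illustration of adaptive compute via internal "ticks".
# Not Sakana's CTM code: a recurrent cell plus a confidence-based early
# exit stands in for the architecture's ability to self-regulate how
# long it reasons about a given input.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TickModel(nn.Module):
    def __init__(self, dim: int = 64, num_classes: int = 10,
                 max_ticks: int = 16, confidence_threshold: float = 0.95):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)          # one recurrent update per internal tick
        self.readout = nn.Linear(dim, num_classes)
        self.max_ticks = max_ticks                # upper bound on "thinking" steps
        self.confidence_threshold = confidence_threshold

    def forward(self, x: torch.Tensor):
        # x: (batch, dim) input features; state starts at zero.
        state = torch.zeros(x.size(0), self.cell.hidden_size, device=x.device)
        for tick in range(self.max_ticks):
            state = self.cell(x, state)           # refine the latent state
            probs = F.softmax(self.readout(state), dim=-1)
            # Stop early if every item in the batch is confident enough;
            # easy inputs exit after a few ticks, hard ones use the budget.
            if probs.max(dim=-1).values.min() >= self.confidence_threshold:
                return probs, tick + 1
        return probs, self.max_ticks

# Usage: returns predictions plus how many ticks of compute were spent.
probs, ticks_used = TickModel()(torch.randn(8, 64))
```

The number of ticks consumed doubles as an interpretability signal: it tells you, per input, how much deliberation the model needed, which is the kind of visibility regulated deployments often require.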