Sakana introduces new AI architecture, ‘Continuous Thought Machines,’ to make models reason with less guidance — like human brains


While the CTM shows strong promise, it is still primarily a research architecture and is not yet production-ready out of the box.

Most modern large language models (LLMs) are still fundamentally based on the Transformer architecture outlined in the seminal 2017 paper from Google Brain researchers, “Attention Is All You Need.” The CTM departs from that template: its ability to adaptively allocate compute, self-regulate its depth of reasoning, and offer clear interpretability may prove highly valuable in production systems facing variable input complexity or strict regulatory requirements. As large incumbents like OpenAI and Google double down on foundation models, Sakana is charting a different course: small, dynamic, biologically inspired systems that think in time, collaborate by design, and evolve through experience.
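For context on the baseline the CTM departs from, the core operation of the Transformer named above is scaled dot-product attention. The sketch below is a minimal, illustrative NumPy implementation of that standard formula, softmax(QKᵀ/√d_k)V; the array shapes and names are chosen for the example and are not drawn from Sakana's code.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention from "Attention Is All You Need":
    softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (n_queries, n_keys) similarity scores
    # Numerically stable row-wise softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted mix of value rows

# Toy example: 3 tokens attending over 3 key/value tokens, d_k = 4
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one output vector per query token
```

Every token attends to every other token in a single fixed-depth pass, which is precisely the static compute pattern that architectures like the CTM aim to relax.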

Or read this on VentureBeat

