No retraining needed: Sakana’s new AI model changes how machines learn
Sakana found that self-adaptive models can modify their own weights during inference, adjusting their behavior to new and unseen tasks.
This is the latest in a series of techniques that aim to improve the abilities of large language models (LLMs) at inference time, making them increasingly useful for everyday applications across different domains. “By selectively adjusting critical components of the model weights, our framework allows LLMs to dynamically adapt to new tasks in real time,” the researchers write in a blog post published on the company’s website.

Titans, an architecture developed by researchers at Google, tackles the problem from a different angle, giving language models the ability to learn and memorize new information at inference time.
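The article doesn’t include code, but as a rough illustration of what “selectively adjusting critical components of the model weights” at inference time could look like, the toy sketch below rescales the singular values of a single linear layer’s weight matrix with a task-specific vector. The matrix sizes, scaling values, and the `adapt_weights` helper are illustrative assumptions, not Sakana’s actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical base weight matrix of one linear layer (64 -> 64), for illustration only.
W = rng.standard_normal((64, 64))

# Decompose the weights once, offline: W = U @ diag(s) @ Vt.
U, s, Vt = np.linalg.svd(W, full_matrices=False)

def adapt_weights(task_scale: np.ndarray) -> np.ndarray:
    """Return an adapted weight matrix by rescaling the singular values.

    `task_scale` is a per-singular-value multiplier chosen for the task at hand;
    rescaling only these components changes behavior without retraining the layer.
    """
    return (U * (s * task_scale)) @ Vt

# At inference time, pick a task-specific scaling vector (a toy choice here:
# boost the top 8 components slightly and damp the rest).
task_scale = np.where(np.arange(len(s)) < 8, 1.2, 0.9)
W_adapted = adapt_weights(task_scale)

x = rng.standard_normal(64)   # dummy activation vector
y_base = W @ x                # output with the original weights
y_task = W_adapted @ x        # output with the task-adapted weights
print(float(np.linalg.norm(y_base - y_task)))  # the adaptation shifts the output
```

In this sketch the adaptation is just a vector of multipliers rather than a full fine-tuning run, which is what makes the adjustment cheap enough to apply per task at inference time.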
Or read this on Venture Beat