This AI Model Never Stops Learning
Scientists at the Massachusetts Institute of Technology have devised a way for large language models to keep learning on the fly—a step toward building AI that continually improves itself.
The MIT scheme, called Self-Adapting Language Models (SEAL), has an LLM generate its own synthetic training data based on the input it receives, then update its weights by training on that data. “The initial idea was to explore if tokens [units of text fed to LLMs and generated by them] could cause a powerful update to a model,” says Jyothish Pari, a PhD student at MIT involved with developing SEAL.

The approach has its limits, though. For one thing, as Agrawal notes, the LLMs tested suffer from what’s known as “catastrophic forgetting,” a troubling effect in which ingesting new information causes older knowledge to simply disappear.
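Catastrophic forgetting is not specific to SEAL or even to language models; it shows up whenever a model is updated sequentially on new data with no rehearsal of the old. A minimal illustration (a toy one-parameter linear model, not MIT's method) makes the effect concrete: after training on a second task, performance on the first collapses.

```python
# Toy illustration of catastrophic forgetting (NOT MIT's SEAL):
# a one-parameter linear model y = w * x trained by plain SGD.
def sgd_fit(w, data, lr=0.1, epochs=50):
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of the squared error (w*x - y)^2
            w -= lr * grad
    return w

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]   # "old knowledge": y = 2x
task_b = [(x, -1.0 * x) for x in (1.0, 2.0, 3.0)]  # "new information": y = -x

w = sgd_fit(0.0, task_a)
err_a_before = mse(w, task_a)   # near zero: task A is learned

w = sgd_fit(w, task_b)          # sequential update on task B only
err_a_after = mse(w, task_a)    # error on task A explodes: A was "forgotten"

print(err_a_before, err_a_after)
```

Running this shows the old task's error jumping from effectively zero to a large value, because the single shared parameter is simply pulled to the new task's optimum. Mitigations in the literature (replay buffers, regularizing updates toward old weights) all amount to not letting new data fully overwrite the parameters that encode old knowledge.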
Read the full article on Wired.