Collapse of self-trained language models

In various fields of knowledge creation, including science, new ideas often build on pre-existing information. In this work, we explore this concept within the context of language models. Specifically, we investigate the potential of self-training models on their own outputs, akin to how humans learn and build on their previous thoughts and actions. While this approach is intuitively appealing, our research reveals its practical limitations. We find that extended self-training of the GPT-2 model leads to a significant degradation in performance, resulting in repetitive and collapsed token output.
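
The abstract does not include code, but the procedure it describes is a simple loop: sample text from the current model, fine-tune the model on those samples, and repeat. Below is a minimal sketch of that loop, assuming the Hugging Face transformers GPT-2 checkpoint; the sample count, sequence length, learning rate, and number of rounds are illustrative assumptions, not the paper's settings.

```python
# Sketch of self-training GPT-2 on its own outputs. Hyperparameters are
# illustrative guesses; the paper's actual configuration may differ.
import torch
from torch.utils.data import DataLoader
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)

def generate_corpus(model, n_samples=64, max_new_tokens=128):
    """Sample text from the current model; this becomes the next round's data."""
    model.eval()
    prompt = tokenizer(tokenizer.bos_token, return_tensors="pt").input_ids.to(device)
    texts = []
    with torch.no_grad():
        for _ in range(n_samples):
            out = model.generate(prompt, do_sample=True, top_k=50,
                                 max_new_tokens=max_new_tokens,
                                 pad_token_id=tokenizer.eos_token_id)
            texts.append(tokenizer.decode(out[0], skip_special_tokens=True))
    return texts

def finetune(model, texts, lr=5e-5):
    """One round of causal-LM fine-tuning on the model's own samples."""
    model.train()
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    enc = tokenizer(texts, return_tensors="pt", padding=True,
                    truncation=True, max_length=128)
    loader = DataLoader(list(zip(enc.input_ids, enc.attention_mask)), batch_size=8)
    for input_ids, attention_mask in loader:
        input_ids = input_ids.to(device)
        attention_mask = attention_mask.to(device)
        labels = input_ids.clone()
        labels[attention_mask == 0] = -100  # ignore padding in the loss
        loss = model(input_ids, attention_mask=attention_mask, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    return loss.item()

# Each round trains only on the previous round's samples. Per the abstract,
# output degenerates into repetitive, collapsed token sequences as rounds
# accumulate. Ten rounds is an arbitrary choice for this sketch.
for round_idx in range(10):
    corpus = generate_corpus(model)
    loss = finetune(model, corpus)
    print(f"round {round_idx}: loss={loss:.3f}")
```

Because each round fits the model to a finite sample of its own distribution, rare continuations are progressively underrepresented in the training data, which is consistent with the repetitive, collapsed output the abstract reports.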
