LLMs aren't "trained on the internet" anymore
A path to continued model improvement.
But none of these techniques is a complete solution to a famous weakness of current models: the “LLMs suck at producing outputs that don’t look like existing data” problem. While improved architectures and more parameters might help with these limitations, you can bet your butt that OpenAI, Meta, Google, and/or Microsoft are paying big money to fill some of these gaps in a simpler way: hiring people to create novel examples to train on. These workers, who help train and test models for companies from OpenAI and Cohere to Anthropic and Google, also work through a third party, often another Scale subsidiary called Outlier, but are paid higher hourly wages.