LLMs aren't "trained on the internet" anymore


A path to continued model improvement.

But none of these techniques is a complete solution to a famous weakness of current models: the "LLMs suck at producing outputs that don't look like existing data" problem. While improved architectures and more parameters might help with these limitations, you can bet your butt that OpenAI, Meta, Google, and/or Microsoft are paying big money to fill some of these gaps in a simpler way: hiring people to create novel examples to train on. These workers, who help train and test models for companies from OpenAI and Cohere to Anthropic and Google, also work through a third party, often another Scale subsidiary called Outlier, but are paid higher hourly wages.
