Anthropic's position on AI safety is helping it attract researchers from OpenAI

There's more to winning AI talent than just seven-figure pay and gobs of GPUs. Perceptions of trust and safety matter too.

There are a lot of folks who want to see AI developed and deployed responsibly, and the steady stream of resignations from OpenAI over safety concerns shows that top talent will go elsewhere if they feel the company they're working for is on a dangerous path. Increasingly, that elsewhere is Anthropic, whose emphasis on safety has become a recruiting advantage.

Read more on:

OpenAI

researchers

Anthropic

Related news:

Sam Altman Says OpenAI Doesn’t Fully Understand How GPT Works Despite Rapid Progress

OpenAI has stopped five attempts to misuse its AI for 'deceptive activity'

OpenAI halted five political influence ops over the last three months