Anthropic's position on AI safety is helping it attract researchers from OpenAI
There's more to winning AI talent than just seven-figure pay and gobs of GPUs. Perceptions of trust and safety matter too.
There are a lot of folks who want to see AI developed and deployed responsibly, and the stream of resignations from OpenAI over safety concerns shows that top talent will go elsewhere if they feel the company they're working for is on a dangerous path.