OpenAI putting 'shiny products' above safety, says departing researcher


Jan Leike, a key safety researcher at the firm behind ChatGPT, quit days after the launch of its latest AI model, GPT-4o

A former senior employee at OpenAI has said the company behind ChatGPT is prioritising “shiny products” over safety, revealing that he quit after a disagreement over key aims reached “breaking point”.

Jan Leike was a key safety researcher at OpenAI as its co-head of superalignment, working to ensure that powerful artificial intelligence systems adhered to human values and aims. He announced his departure in a thread on X.

Sam Altman, OpenAI’s chief executive, responded to Leike’s thread with a post on X thanking his former colleague for his contributions to the company’s safety culture.



Related news:

OpenAI departures: Why can’t former employees talk?

Multi AI agent systems using OpenAI's assistants API

Reddit goes AI agnostic, signs data training deal with OpenAI