OpenAI dissolves team focused on long-term AI risks less than one year after announcing it


"Over the past years, safety culture and processes have taken a backseat to shiny products,” one team member wrote.

OpenAI has disbanded its team focused on the long-term risks of artificial intelligence just one year after the company announced the group, a person familiar with the situation confirmed to CNBC on Friday.

The group, known as the superalignment team, was co-led by Ilya Sutskever and Jan Leike, both of whom have left the company. The Wall Street Journal and other media outlets reported that Sutskever trained his focus on ensuring that artificial intelligence would not harm humans, while others, including CEO Sam Altman, were more eager to push ahead with delivering new technology.

News of Sutskever's and Leike's departures, and the dissolution of the superalignment team, comes days after OpenAI launched a new AI model and a desktop version of ChatGPT, along with an updated user interface, the company's latest effort to expand the use of its popular chatbot.

