
OpenAI scientists wanted "a doomsday bunker" before AGI surpasses human intelligence and threatens humanity


Ilya Sutskever proposed an additional layer of protection to shield key scientists from threats posed by "Artificial General Intelligence".

While Safe Superintelligence Inc. founder Ilya Sutskever declined to comment on the matter, the proposal raises serious concern, especially since he was intimately involved in the development of ChatGPT and other flagship AI-powered products.

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya, with extensive experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem.


Read more on: OpenAI, Humanity, AGI

Related news:


OpenAI confirms Operator Agent is now more accurate with o3


OpenAI updates Operator to o3, making its $200 monthly ChatGPT Pro subscription more enticing


OpenAI Jony Ive Deal Sets Record as Startup M&A Gets Bigger