OpenAI scientists wanted "a doomsday bunker" before AGI surpasses human intelligence and threatens humanity
Ilya Sutskever proposed an additional layer of protection to shield key scientists from threats posed by artificial general intelligence (AGI).
While Safe Superintelligence Inc. founder Ilya Sutskever declined to comment on the matter, the proposal raises significant concern, especially since he was intimately involved in the development of ChatGPT and other flagship AI-powered products.