It was their job to make sure humans are safe from OpenAI's superintelligence. They just quit.
Jan Leike and Ilya Sutskever co-led the Superalignment team, which aimed to figure out how to keep AI systems far smarter than humans from going rogue.
"We need scientific and technical breakthroughs to steer and control AI systems much smarter than us," OpenAI said of superalignment in a July 5, 2023 post on its website. "To solve this problem within four years, we're starting a new team, co-led by Ilya Sutskever and Jan Leike, and dedicating 20% of the compute we've secured to date to this effort."

Calling it "an honor and a privilege to have worked together" with Altman and crew, Sutskever bowed out from the role, saying he's "confident that OpenAI will build AGI that is both safe and beneficial."