
Two OpenAI researchers working on safety and governance have quit


Daniel Kokotajlo and William Saunders departed OpenAI, the company behind ChatGPT, in April and February, respectively.

Kokotajlo wrote on his profile page on the online forum LessWrong that he quit "due to losing confidence that it would behave responsibly around the time of AGI." The Superalignment team, initially led by Ilya Sutskever and Jan Leike, is tasked with building safeguards to prevent artificial general intelligence (AGI) from going rogue. Saunders also managed the interpretability team, which researches how to make AGI safe and examines how and why models behave the way they do.


