OpenAI Touts New AI Safety Research. Critics Say It’s a Good Step, but Not Enough
The company announced a new technique to make the workings of its systems more transparent, but people familiar with OpenAI say more oversight is needed.
“This is core to the mission of building an [artificial general intelligence] that is both safe and beneficial,” Yining Chen, a researcher at OpenAI involved with the work, tells WIRED. The announcement came shortly after the departure of cofounder and key technical leader Ilya Sutskever, one of the board members who briefly ousted CEO Sam Altman last November. Following the runaway success of ChatGPT and intensifying competition from well-funded rivals, some have accused the company of prioritizing splashy advances and market share over safety.