AI firms warned to calculate threat of superintelligence or risk it escaping human control
AI safety campaigner calls for existential threat assessment akin to Oppenheimer’s calculations before first nuclear test
Artificial intelligence companies have been urged to replicate the safety calculations that underpinned Robert Oppenheimer's first nuclear test before they release all-powerful systems.

The call comes from Max Tegmark, the MIT professor and AI safety campaigner, who argues that firms building superintelligence should calculate the "Compton constant": the probability that such a system escapes human control. The term refers to Arthur Compton, the US physicist who approved the Trinity test. In a 1959 interview with the US writer Pearl Buck, Compton said he had approved the test after calculating the odds of a runaway fusion reaction to be "slightly less" than one in three million.

Tegmark said a consensus on the Compton constant, calculated by multiple companies, would create the "political will" to agree global safety regimes for AIs.