Collaborative research on AI safety is vital
Letter: If we are to take seriously the risk facing humanity, regulators need the power to ‘recall’ deployed models, as well as assess leading, not lagging, indicators of risk, writes Prof John McDermid
This approach alone will never be enough; AI needs to be designed for safety and for evaluation – something that can be done by drawing on expertise and experience from well-established safety-related industries. While I don’t subscribe to his perspective on the level of risk facing humanity, the precautionary principle suggests that we must act now. In traditional safety-critical domains, the need to build physical systems, eg aircraft, limits the rate at which safety can be affected.