
Collaborative research on AI safety is vital


Letter: If we are to take seriously the risk facing humanity, regulators need the power to ‘recall’ deployed models, as well as assess leading, not lagging, indicators of risk, writes Prof John McDermid

This approach will never be enough on its own; AI needs to be designed for safety and evaluation – something that can be done by drawing on expertise and experience in well-established safety-related industries. While I don’t subscribe to his perspective on the level of risk facing humanity, the precautionary principle suggests that we must act now. In traditional safety-critical domains, the need to build physical systems, eg aircraft, limits the rate at which safety can be compromised.
