DeepMind’s 145-page paper on AGI safety may not convince skeptics

DeepMind published a lengthy paper on its approach to AGI safety. But experts don't necessarily buy the premises.

Google DeepMind on Wednesday published an exhaustive paper on its approach to AGI safety, with AGI roughly defined as AI that can accomplish any task a human can. Some researchers remain skeptical that AGI is achievable anytime soon, while others, including major AI labs like Anthropic, warn that it’s around the corner and could result in catastrophic harms if appropriate safeguards aren’t put in place. Sandra Wachter, a researcher studying tech and regulation at Oxford, argues that a more realistic concern is AI reinforcing itself with “inaccurate outputs.”

Read the full story on TechCrunch.

Read more on: DeepMind, AGI, skeptics

Related news:

DeepMind is Holding Back Release of AI Research To Give Google an Edge

New funding to build towards AGI

A new, challenging AGI test stumps most AI models