DeepMind’s 145-page paper on AGI safety may not convince skeptics
DeepMind published a lengthy paper on its approach to AGI safety. But experts don't necessarily buy the premises.
Google DeepMind on Wednesday published an exhaustive paper on its safety approach to AGI, roughly defined as AI that can accomplish any task a human can. Like DeepMind, other major AI labs, including Anthropic, warn that AGI may be around the corner and could cause catastrophic harm if appropriate safeguards aren't put in place. Not everyone agrees with that premise: Sandra Wachter, a researcher studying tech and regulation at Oxford, argues that a more realistic concern is AI reinforcing itself with “inaccurate outputs.”
Or read this on TechCrunch