Meta says it may stop development of AI systems it deems too risky

Meta has released a policy document outlining scenarios in which the company may not release certain categories of 'risky' AI systems.

Meta CEO Mark Zuckerberg has pledged to one day make artificial general intelligence (AGI) — roughly defined as AI that can accomplish any task a human can — openly available. Somewhat surprisingly, the document states that Meta classifies a system's risk not according to any single empirical test but based on input from internal and external researchers, which is then reviewed by "senior-level decision-makers." Why? Meta says it doesn't believe the science of evaluation is "sufficiently robust as to provide definitive quantitative metrics" for deciding a system's riskiness.

Source: TechCrunch

Related news:

Meta's Investment in Virtual Reality on Track To Top $100 Billion

Meta agrees to pay $25 million to settle lawsuit from Trump after Jan. 6 suspension

AI systems with 'unacceptable risk' are now banned in the EU