Meta says it may stop development of AI systems it deems too risky
Meta has released a policy document outlining scenarios in which the company may not release certain categories of “risky” AI systems.
Meta CEO Mark Zuckerberg has pledged to one day make artificial general intelligence (AGI), roughly defined as AI that can accomplish any task a human can, openly available. The new document suggests there are limits to that openness.

Somewhat surprisingly, according to the document, Meta classifies a system’s risk not on the basis of any single empirical test but on input from internal and external researchers, which is reviewed by “senior-level decision-makers.” Why? Meta says it doesn’t believe the science of evaluation is “sufficiently robust as to provide definitive quantitative metrics” for deciding a system’s riskiness.