
Group co-led by Fei-Fei Li suggests that AI safety laws should anticipate future risks


In a new report, Fei-Fei Li and others suggest that lawmakers should consider transparency and future AI risks when crafting new legislation.

In the report, Li, along with co-authors UC Berkeley College of Computing Dean Jennifer Chayes and Carnegie Endowment for International Peace President Mariano-Florentino Cuéllar, argues in favor of laws that would increase transparency into what frontier AI labs such as OpenAI are building. Industry stakeholders from across the ideological spectrum reviewed the report before its publication, including staunch AI safety advocates like Turing Award winner Yoshua Bengio as well as those who argued against SB 1047, such as Databricks co-founder Ion Stoica. Dean Ball, an AI-focused research fellow at George Mason University who was critical of SB 1047, said in a post on X that the report was a promising step for California’s AI safety regulation.


Read more on: Fei-Fei Li, future risks, AI safety laws

Related news:

AI pioneer Fei-Fei Li warns policymakers not to let sci-fi sensationalism shape AI rules

AI pioneer Fei-Fei Li says AI policy must be based on ‘science, not science fiction’

AI pioneer Fei-Fei Li has a vision for computer vision