Group co-led by Fei-Fei Li suggests that AI safety laws should anticipate future risks
In a new report, Fei-Fei Li and others suggest that lawmakers should consider transparency and future AI risks when crafting new legislation.
In the report, Li, along with co-authors Jennifer Chayes, dean of UC Berkeley's College of Computing, Data Science, and Society, and Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace, argues in favor of laws that would increase transparency into what frontier AI labs such as OpenAI are building. Industry stakeholders from across the ideological spectrum reviewed the report before its publication, including staunch AI safety advocates like Turing Award winner Yoshua Bengio as well as those who argued against SB 1047, such as Databricks co-founder Ion Stoica. Dean Ball, an AI-focused research fellow at George Mason University who was critical of SB 1047, said in a post on X that the report was a promising step for California's AI safety regulation.
Originally published on TechCrunch.