Top AI Labs Have 'Very Weak' Risk Management, Study Finds
A new study contends that leading AI labs, including OpenAI, Meta, and xAI, have inadequate risk-management practices.
Siméon Campos, the founder of SaferAI, says the purpose of the ratings is to develop a clear standard for how AI companies are handling risk as these nascent systems grow in power and usage. Campos says the ratings might put pressure on these companies to improve their internal processes, which could potentially lessen models’ bias, curtail the spread of misinformation, or make them less prone to misuse by malicious actors. Yoshua Bengio, one of the most respected figures in AI, endorsed the ratings system, writing in a statement that he hopes it will “guarantee the safety of the models [companies] develop and deploy…We can't let them grade their own homework.”