Top AI researchers say OpenAI, Meta and more hinder independent evaluations
Firms like OpenAI and Meta use strict protocols to keep bad actors from abusing AI systems. But researchers argue these rules are chilling independent evaluations.
The letter was signed by experts in AI research, policy, and law, including Stanford University’s Percy Liang; Pulitzer Prize-winning journalist Julia Angwin; Renée DiResta from the Stanford Internet Observatory; Mozilla fellow Deb Raji, who has pioneered research into auditing AI models; ex-government official Marietje Schaake, a former member of European Parliament; and Brown University professor Suresh Venkatasubramanian, a former adviser to the White House Office of Science and Technology Policy.

The letter, sent to companies including OpenAI, Meta, Anthropic, Google and Midjourney, implores tech firms to provide a legal and technical safe harbor for researchers to interrogate their products. Because the testing happens under their own log-in, some fear AI companies, which are still developing methods for monitoring potential rule breakers, may disproportionately crack down on users who bring negative attention to their business.