Top AI researchers say OpenAI, Meta and more hinder independent evaluations


Firms like OpenAI and Meta use strict protocols to keep bad actors from abusing AI systems. But researchers argue these rules are chilling independent evaluations.

The letter, sent to companies including OpenAI, Meta, Anthropic, Google and Midjourney, implores tech firms to provide a legal and technical safe harbor for researchers to interrogate their products. It was signed by experts in AI research, policy, and law, including Stanford University's Percy Liang; Pulitzer Prize-winning journalist Julia Angwin; Renée DiResta of the Stanford Internet Observatory; Mozilla fellow Deb Raji, who has pioneered research into auditing AI models; ex-government official Marietje Schaake, a former member of the European Parliament; and Brown University professor Suresh Venkatasubramanian, a former adviser to the White House Office of Science and Technology Policy.

Because the testing happens under researchers' own log-ins, some fear AI companies, which are still developing methods for monitoring potential rule breakers, may disproportionately crack down on users who bring negative attention to their business.
