Experts call for legal ‘safe harbor’ so researchers, journalists and artists can evaluate AI tools
In a new paper, AI researchers called on tech companies to indemnify public interest research and protect it from account suspensions.
Despite the need for independent evaluation, the paper says, conducting research into these vulnerabilities is often legally prohibited by the terms of service of popular AI models, including those of OpenAI, Google, Anthropic, Inflection, Meta, and Midjourney. The paper’s authors called on tech companies to indemnify public-interest AI research and protect it from account suspensions or legal reprisal. The type of model behavior one researcher found, he said, “is exactly why independent evaluation and red teaming should be permitted, because [the companies have] shown they won’t do it themselves, to the detriment of rights owners everywhere.”
Or read this on VentureBeat