Experts call for legal ‘safe harbor’ so researchers, journalists and artists can evaluate AI tools


In a new paper, AI researchers called on tech companies to indemnify public interest research and protect it from account suspensions.

Despite the need for independent evaluation, the paper says, conducting research into model vulnerabilities is often legally prohibited by the terms of service of popular AI models, including those from OpenAI, Google, Anthropic, Inflection, Meta, and Midjourney. The paper’s authors called on tech companies to indemnify public interest AI research and protect it from account suspensions or legal reprisal. One researcher quoted in the article said the kind of model behavior he found “is exactly why independent evaluation and red teaming should be permitted, because [the companies have] shown they won’t do it themselves, to the detriment of rights owners everywhere.”

Or read this on VentureBeat

Read more on: Experts, AI tools, Artists

Related news:

Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries | ArtPrompt bypassed safety measures in ChatGPT, Gemini, Claude, and Llama2.

Smarter than GPT-4: Claude 3 AI catches researchers testing it

Researchers tested leading AI models for copyright infringement using popular books, and GPT-4 performed worst