Nobody Knows How to Safety-Test AI | "They are, in some sense, these vast alien intelligences."


Governments and companies hope safety-testing can reduce dangers from AI systems. But the tests are far from ready.

(Large language models, such as OpenAI's GPT-4 and Anthropic's Claude, are giant AI systems trained by predicting the next word across vast amounts of text; they can answer questions and carry out basic reasoning and planning.) METR has taken a number of practical steps to increase its independence, such as requiring staff to sell any financial interests in companies developing the types of systems it tests, says Barnes. More fundamentally, Barnes argues, the focus on testing has distracted from "real governance things," such as passing laws that would make AI companies liable for damages caused by their models, and from promoting international cooperation.


