Nobody Knows How to Safety-Test AI | “They are, in some sense, these vast alien intelligences.”
Governments and companies hope safety-testing can reduce dangers from AI systems. But the tests are far from ready.
(Large language models, such as OpenAI’s GPT-4 and Anthropic’s Claude, are giant AI systems trained to predict the next word across vast amounts of text, and they can answer questions and carry out basic reasoning and planning.)

METR has taken a number of practical steps to increase its independence, such as requiring staff to sell any financial interests in companies developing the types of systems that it tests, says Barnes. More fundamentally, she argues, the focus on testing has distracted from “real governance things,” such as passing laws that would make AI companies liable for damages caused by their models and promoting international cooperation.