Researchers Have Ranked AI Models Based on Risk—and Found a Wild Range
Studies find wildly divergent views on risk and suggest that regulations could be tightened.
Bo Li, an associate professor at the University of Chicago who specializes in stress testing and provoking AI models to uncover misbehavior, has become a go-to source for some consulting firms. A company looking to use an LLM for customer service, for instance, might care more about a model’s propensity to produce offensive language when provoked than about its ability to help design a nuclear device. Peter Slattery, lead on the project and a researcher at MIT’s FutureTech group, which studies progress in computing, says the database highlights the fact that some AI risks get more attention than others.
Or read this on Wired