OpenAI and Anthropic agree to send models to US Government for safety evaluation
OpenAI and Anthropic will send AI models to the US AI Safety Institute before public release for safety testing and evaluations.
“We believe the institute has a critical role to play in defining U.S. leadership in responsibly developing artificial intelligence, and hope that our work together offers a framework that the rest of the world can build on.”

“Our collaboration with the U.S. AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment,” said Anthropic co-founder and Head of Policy Jack Clark in a statement sent to VentureBeat.

“NIST must ensure that OpenAI and Anthropic follow through on their commitments; both have a track record of making promises, such as the AI Election Accord, with very little action.”
Or read this on VentureBeat