The US and UK are teaming up to test the safety of AI models
The UK and US governments have signed a Memorandum of Understanding to develop a common approach to independently evaluating the safety of generative AI models.
They plan to share technical knowledge, information and even personnel as part of the partnership, and one of their initial goals is to run a joint testing exercise on a publicly accessible model.

The agreement follows recent regulatory moves on both sides of the Atlantic. In March, the White House announced a policy aiming to ensure that federal agencies only use AI tools that "do not endanger the rights and safety of the American people." That same month, the European Parliament approved the EU's AI Act, which will ban "AI that manipulates human behavior or exploits people’s vulnerabilities," "biometric categorization systems based on sensitive characteristics," as well as the "untargeted scraping" of faces from CCTV footage and the web to create facial recognition databases.
Or read this on Engadget