The US and UK are teaming up to test the safety of AI models


The UK and US governments have signed a Memorandum of Understanding to develop a common approach to independently evaluating the safety of generative AI models.

Under the partnership, the two governments plan to share technical knowledge, information and even personnel, and one of their first goals is to run a joint testing exercise on a publicly accessible model.

Back in March, the White House signed an executive order aiming to ensure that federal agencies only use AI tools that "do not endanger the rights and safety of the American people." The European Parliament, for its part, recently approved legislation that will ban "AI that manipulates human behavior or exploits people’s vulnerabilities," "biometric categorization systems based on sensitive characteristics," as well as the "untargeted scraping" of faces from CCTV footage and the web to create facial recognition databases.

Read more on:

safety

AI models

Related news:

Robot, can you say ‘Cheese’ | Columbia engineers build Emo, a silicon-clad robotic face that makes eye contact and uses two AI models to anticipate and replicate a person’s smile before the person actually smiles

Breakdown of Safety Is Not Unique to Boeing — It’s Endemic to Capitalist Society

Scientists create AI models that can talk to each other and pass on skills with limited human input