The US Government Wants You—Yes, You—to Hunt Down Generative AI Flaws
AI ethics nonprofit Humane Intelligence and the US National Institute of Standards and Technology are launching a series of contests to get more people probing for problems in generative AI systems.
Participants who advance from the qualifying round will take part in an in-person red-teaming event at the end of October at the Conference on Applied Machine Learning in Information Security (CAMLIS) in Virginia. “NIST's ARIA is drawing on structured user feedback to understand real-world applications of AI models,” says Humane Intelligence founder Rumman Chowdhury, who is also a contractor in NIST's Office of Emerging Technologies and a member of the US Department of Homeland Security's Artificial Intelligence Safety and Security Board.

The effort aims to make it far more common for the companies and organizations that develop what are now black-box algorithms to offer transparency and accountability through mechanisms like “bias bounty challenges,” in which individuals are rewarded for finding problems and inequities in AI models.