The US Government Wants You—Yes, You—to Hunt Down Generative AI Flaws


AI ethics nonprofit Humane Intelligence and the US National Institute of Standards and Technology are launching a series of contests to get more people probing for problems in generative AI systems.

Participants who pass through the qualifying round will take part in an in-person red-teaming event at the end of October at the Conference on Applied Machine Learning in Information Security (CAMLIS) in Virginia. “NIST's ARIA is drawing on structured user feedback to understand real-world applications of AI models,” says Humane Intelligence founder Rumman Chowdhury, who is also a contractor in NIST's Office of Emerging Technologies and a member of the US Department of Homeland Security AI safety and security board. The effort aims to make it much more common for the companies and organizations that develop what are now black-box algorithms to offer transparency and accountability through mechanisms like “bias bounty challenges,” where individuals can be rewarded for finding problems and inequities in AI models.

Or read this on Wired
