AI Is Spreading Old Stereotypes to New Languages and Cultures
Margaret Mitchell, an AI ethics researcher at Hugging Face, tells WIRED about a new dataset designed to test AI models for bias in multiple languages.
The training data might contain all kinds of really problematic stereotypes across countries, but bias mitigation techniques may only look at English. How do you make templates where the whole sentence needs to agree in gender, in number, and in all these different kinds of things with the target of the stereotype? Now you can do these contrastive statements across all of these languages, even the ones with really hard agreement rules, because we've developed a novel, template-based approach for bias evaluation that's syntactically sensitive.
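To make the idea concrete, here is a minimal sketch of what a syntactically sensitive, template-based generator of contrastive sentence pairs could look like. The template scheme, lexicon, and function names below are illustrative assumptions for a toy Spanish example, not the actual dataset's format:

```python
# Illustrative sketch (not the real dataset's schema): templates whose slots
# carry grammatical features, so every word agrees with the stereotype target.
TEMPLATES = {
    # In Spanish, the article and adjective must agree in gender and number
    # with the noun that is the target of the stereotype.
    "es": "{art} {noun} {cop} {adj}",
}

# Hypothetical lexicon: forms keyed by (gender, number) features.
LEXICON = {
    "es": {
        "art": {("m", "sg"): "el", ("f", "sg"): "la"},
        "cop": {("m", "sg"): "es", ("f", "sg"): "es"},
        "adj": {"lógico": {("m", "sg"): "lógico", ("f", "sg"): "lógica"}},
    }
}

def realize(lang, noun, features, adj_lemma):
    """Fill the template so every slot agrees with the target noun's features."""
    lex = LEXICON[lang]
    return TEMPLATES[lang].format(
        art=lex["art"][features],
        noun=noun,
        cop=lex["cop"][features],
        adj=lex["adj"][adj_lemma][features],
    )

def contrastive_pair(lang, noun_a, feat_a, noun_b, feat_b, adj_lemma):
    """Two sentences identical except for the stereotype target, with
    agreement handled separately for each target."""
    return (realize(lang, noun_a, feat_a, adj_lemma),
            realize(lang, noun_b, feat_b, adj_lemma))

pair = contrastive_pair("es", "hombre", ("m", "sg"), "mujer", ("f", "sg"), "lógico")
print(pair)  # ('el hombre es lógico', 'la mujer es lógica')
```

The point of tying features to slots is that swapping the target (hombre/mujer) automatically re-inflects the rest of the sentence, which is what makes minimal contrastive pairs possible in languages with rich agreement.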