AI generates covertly racist decisions about people based on their dialect


Despite efforts to remove overt racial prejudice, language models using artificial intelligence still show covert racism against speakers of African American English that is triggered by features of the dialect.

As the stakes of the decisions entrusted to language models rise, so does the concern that they mirror or even amplify human biases encoded in their training data, thereby perpetuating discrimination against racialized, gendered and other minoritized social groups 4, 5, 6, 13, 14, 15, 16, 17, 18, 19, 20. To probe these stereotypes, we replicated the experimental set-up of the Princeton Trilogy 29, 30, 31, 34, a series of studies investigating the racial stereotypes held by Americans, with one difference: instead of overtly mentioning race to the language models, we used matched guise probing based on AAE and SAE texts (Methods). For GPT4, computing P(x ∣ v(t); θ) for all tokens of interest was often not possible owing to restrictions imposed by the OpenAI application programming interface (API), so we used a slightly modified method for some of the experiments, as discussed in the Supplementary Information.
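The core of matched guise probing is to present a model with the same content in two dialect guises and compare how strongly it associates stereotype adjectives with each one, via the log-probability difference log P(x ∣ v(t_AAE); θ) − log P(x ∣ v(t_SAE); θ). A minimal sketch of that comparison, assuming toy log-probabilities in place of real model queries (the helper names, example texts and numbers here are illustrative, not from the study):

```python
# Matched guise probing, sketched. Assumption: a real study queries a
# language model for log P(adjective | v(t)); here a toy lookup table
# stands in so the example is self-contained and runnable.

def guise_prompt(text: str) -> str:
    # v(t): embed the dialect text t into a prompt about the speaker.
    # (Illustrative template, not the paper's exact wording.)
    return f"A person who says '{text}' tends to be"

# Stand-in log-probabilities, keyed by (text, adjective). Toy numbers.
TOY_LOGPROBS = {
    ("he be workin", "lazy"): -2.0,
    ("he be workin", "intelligent"): -5.0,
    ("he is working", "lazy"): -4.5,
    ("he is working", "intelligent"): -3.0,
}

def log_prob(adjective: str, text: str) -> float:
    # In practice this would be a model call on guise_prompt(text).
    return TOY_LOGPROBS[(text, adjective)]

def association_score(adjective: str, aae_text: str, sae_text: str) -> float:
    """Positive: the adjective is more strongly tied to the AAE guise."""
    return log_prob(adjective, aae_text) - log_prob(adjective, sae_text)

aae, sae = "he be workin", "he is working"
scores = {adj: association_score(adj, aae, sae)
          for adj in ("lazy", "intelligent")}
print(scores)
```

With the toy numbers, "lazy" scores positive (more associated with the AAE guise) and "intelligent" scores negative; aggregating such scores over many text pairs and adjectives is what reveals the covert stereotype pattern the study reports.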


