
AIs show distinct bias against Black and female résumés in new study | Language models seem to treat "masculine and White concepts... as the 'default' value."


In a new paper published at last month's AAAI/ACM Conference on AI, Ethics and Society, two University of Washington researchers ran hundreds of publicly available résumés and job descriptions through three different Massive Text Embedding (MTE) models. These models—based on the Mistral-7B LLM—had each been fine-tuned with slightly different sets of data to improve on the base LLM's abilities in "representational tasks including document retrieval, classification, and clustering," according to the researchers, and had achieved "state-of-the-art performance" in the MTEB benchmark. The top 10 percent of résumés that the MTEs judged as most similar to each job description were then analyzed to see if the names for any race or gender groups were chosen at higher or lower rates than expected.
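The retrieval-and-audit setup described above can be sketched in a few lines. This is a minimal illustration, not the researchers' code: the embeddings below are random stand-ins for the MTE model outputs, and the group labels stand in for demographics inferred from résumé names.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64          # embedding dimension (arbitrary for this sketch)
N_RESUMES = 200   # size of the candidate pool

# Stand-in vectors; the study embedded real text with Mistral-7B-based MTEs.
job_vec = rng.normal(size=DIM)
resume_vecs = rng.normal(size=(N_RESUMES, DIM))
# Each résumé carries a group label (in the study, inferred from its name).
groups = rng.choice(["A", "B"], size=N_RESUMES)

def cosine(a, b):
    """Cosine similarity, the usual ranking score for embedding retrieval."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Rank résumés by similarity to the job description and keep the top 10%.
scores = np.array([cosine(v, job_vec) for v in resume_vecs])
top_k = max(1, N_RESUMES // 10)
top_idx = np.argsort(scores)[::-1][:top_k]

# Compare each group's share of the selected pool with its baseline share;
# a large gap for some group is the kind of disparity the study measured.
for g in np.unique(groups):
    baseline = np.mean(groups == g)
    selected = np.mean(groups[top_idx] == g)
    print(f"group {g}: baseline {baseline:.2f}, selected {selected:.2f}")
```

With unbiased (random) embeddings, the selected shares should roughly track the baselines; the study's finding is that with real MTE embeddings and real names, they did not.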
