
We investigated Amsterdam's attempt to build a 'fair' fraud detection model


Amsterdam spent years trying to build an unbiased welfare fraud algorithm. Here’s what we found when we analyzed it.

In taking on this investigation, we wanted to look ahead and understand the thorny reality of building a fair AI tool that makes consequential decisions about people’s lives. To evaluate a model’s outcome fairness, you need confusion matrices broken down by demographic characteristics; we obtained them for age, gender, nationality, ethnic background (Western vs. non-Western), and parenthood. An across-the-board excess of false positives could easily have been resolved by raising the threshold at which a person is flagged, but the bias that emerged was less straightforward: the pilot showed substantially higher false-positive shares for Dutch applicants than for immigrant applicants.
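To make the per-group breakdown concrete, here is a minimal Python sketch of computing false-positive shares by demographic group. The column names, the toy data, and the definition of false-positive share used here (the fraction of a group's non-fraudulent applicants that the model wrongly flags) are illustrative assumptions, not the investigation's actual data or methodology.

```python
# Hypothetical sketch: per-group false-positive shares.
# "group", "label", and "flagged" columns are illustrative assumptions;
# the real investigation's data and exact metric definition may differ.
import pandas as pd

def false_positive_share(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Fraction of non-fraudulent applicants wrongly flagged, per group."""
    innocent = df[df["label"] == 0]                        # ground truth: no fraud
    return innocent.groupby(group_col)["flagged"].mean()   # mean of 0/1 flags = FP share

# Toy data: nationality, true fraud label (0/1), and the model's flag (0/1).
df = pd.DataFrame({
    "nationality": ["Dutch", "Dutch", "Dutch", "non-Dutch", "non-Dutch", "non-Dutch"],
    "label":       [0, 0, 1, 0, 0, 1],
    "flagged":     [1, 0, 1, 0, 0, 1],
})
print(false_positive_share(df, "nationality"))
# nationality
# Dutch        0.5
# non-Dutch    0.0
```

In this toy example the model wrongly flags half of the non-fraudulent Dutch applicants and none of the non-fraudulent non-Dutch ones; comparing these shares across groups is what reveals the kind of disparity described above.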
