We investigated Amsterdam's attempt to build a 'fair' fraud detection model
Amsterdam spent years trying to build an unbiased welfare fraud algorithm. Here’s what we found when we analyzed it.
In taking on this investigation, we wanted to look ahead and understand the thorny reality of building a fair AI tool that makes consequential decisions about people's lives. To evaluate a model's outcome fairness, you need confusion matrices broken down by demographic characteristics, which we obtained for age, gender, nationality, ethnic background (Western vs. non-Western), and parenthood.

Simply flagging too many people overall would have been a problem that could easily be resolved by raising the threshold at which a person is flagged. The bias that emerged is less straightforward: the pilot showed substantially higher False Positive Shares for Dutch applicants than for applicants with an immigrant background.
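To make the method concrete, here is a minimal sketch of the kind of per-group analysis described above. The column names (`flagged`, `committed_fraud`, `nationality`) and the DataFrame `applicants` are illustrative assumptions, not the city's or our actual data schema, and "False Positive Share" is taken here to mean the fraction of a group's applicants who were wrongly flagged.

```python
import pandas as pd

def confusion_by_group(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Build a confusion matrix and False Positive Share for each demographic group.

    Assumes `df` has one row per applicant, a binary model-output column
    `flagged`, and a binary ground-truth column `committed_fraud`.
    """
    rows = []
    for group, sub in df.groupby(group_col):
        tp = ((sub["flagged"] == 1) & (sub["committed_fraud"] == 1)).sum()
        fp = ((sub["flagged"] == 1) & (sub["committed_fraud"] == 0)).sum()
        fn = ((sub["flagged"] == 0) & (sub["committed_fraud"] == 1)).sum()
        tn = ((sub["flagged"] == 0) & (sub["committed_fraud"] == 0)).sum()
        rows.append({
            group_col: group,
            "TP": tp, "FP": fp, "FN": fn, "TN": tn,
            # Share of this group wrongly flagged for investigation.
            "false_positive_share": fp / len(sub) if len(sub) else float("nan"),
        })
    return pd.DataFrame(rows)

# Hypothetical usage: repeat for each characteristic we obtained
# (age, gender, nationality, ethnic background, parenthood).
# print(confusion_by_group(applicants, "nationality"))
```

Comparing these per-group tables is what surfaces disparities like the one described above: two groups can have similar overall accuracy while one bears a much larger share of wrongful flags.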