
Inside Amsterdam’s high-stakes experiment to create fair welfare AI


The Dutch city thought it could break a decade-long trend of implementing discriminatory algorithms. Its failure raises the question: can these programs ever be fair?

It was February 2023, and de Zwart, who had served as the executive director of Bits of Freedom, the Netherlands’ leading digital rights NGO, had been working as an informal advisor to Amsterdam’s city government for nearly two years, reviewing and providing feedback on the AI systems it was developing.

Other cities, like Amsterdam and Leiden, had used a system called the Fraud Scorecard, first deployed more than 20 years ago, which included education, neighborhood, parenthood, and gender as crude risk factors for assessing welfare applicants; that program was also discontinued.

“There’s a high-level thing of ‘Do not discriminate,’ which I think we can all agree on, but this example highlights some of the complexities of how you translate that [principle].” Ultimately, Chen believes that finding any solution will require trial and error, which by definition usually involves mistakes: “You have to pay that cost.”


