Revealed: bias found in AI system used to detect UK benefits fraud


Exclusive: Age, disability, marital status and nationality influence decisions to investigate claims, prompting fears of ‘hurt first, fix later’ approach

This assurance came in part because the final decision on whether a person gets a welfare payment is still made by a human, and officials believe the continued use of the system – which aims to help cut an estimated £8bn a year lost to fraud and error – is “reasonable and proportionate”. But no fairness analysis has yet been undertaken in respect of potential bias centring on race, sex, sexual orientation and religion, or pregnancy, maternity and gender reassignment status, the disclosures reveal. “It is clear that in the vast majority of cases the DWP did not assess whether their automated processes risked unfairly targeting marginalised groups,” said Caroline Selman, senior research fellow at the Public Law Project, which first obtained the analysis.

