Revealed: bias found in AI system used to detect UK benefits fraud
Exclusive: Age, disability, marital status and nationality influence decisions to investigate claims, prompting fears of ‘hurt first, fix later’ approach
This assurance came in part because the final decision on whether a person receives a welfare payment is still made by a human, and officials believe the continued use of the system – which is intended to help cut an estimated £8bn a year lost to fraud and error – is “reasonable and proportionate”.

But the disclosures reveal that no fairness analysis has yet been undertaken in respect of potential bias centring on race, sex, sexual orientation, religion, pregnancy, maternity or gender reassignment status.

“It is clear that in a vast majority of cases the DWP did not assess whether their automated processes risked unfairly targeting marginalised groups,” said Caroline Selman, senior research fellow at the Public Law Project, which first obtained the analysis.