
Tech News

DWP ‘fairness analysis’ reveals bias in AI fraud detection system

An artificial intelligence (AI) system used by the Department for Work and Pensions (DWP) to detect welfare fraud is showing significant disparities based on age, disability, marital status and nationality, according to the department's own internal assessment.

Released under freedom of information (FoI) rules to the Public Law Project, the 11-page "fairness analysis" found that a machine learning (ML) system used by the DWP to screen thousands of universal credit benefit payments is disproportionately selecting people from certain groups when recommending whom to investigate for possible fraud.

Conducted in February 2024, the assessment revealed a “statistically significant referral… and outcome disparity for all the protected characteristics analyzed,” which included age, disability, marital status, and nationality.

A further review of the identified disparities concluded that they did not raise any immediate concerns of discrimination or unfair treatment of individuals or protected groups. The assessment emphasized that human decision-makers remain involved in the process and consider all available information.

While certain protected characteristics were not specifically analyzed, the DWP said there were no immediate concerns of unfair treatment because of the safeguards in place for all customers. The department plans to keep refining the analysis method and to carry out the assessments quarterly.

Caroline Selman of the Public Law Project said the DWP must assess the risk that its automated processes unfairly target marginalized groups, and stressed the importance of understanding the harm these tools may cause.

A DWP spokesperson reiterated that the AI tool does not replace human judgment and that measures are being taken to tackle benefit fraud more efficiently. Even so, concerns about potential bias and the growing role of automation in welfare systems persist.

Scrutiny of AI and automation in welfare systems has intensified, with reports that automated systems in Denmark and Sweden have disproportionately affected marginalized groups, both in their access to social benefits and in how often they are targeted for fraud investigations.