Tech News
Swedish authorities urged to discontinue AI welfare system
Amnesty International has called for the immediate discontinuation of Sweden’s algorithmically powered welfare system, arguing that it unfairly targets marginalized groups in Swedish society for benefit-fraud investigations. A recent investigation found that the machine learning system used by Försäkringskassan, Sweden’s Social Insurance Agency, disproportionately flags women, individuals with “foreign” backgrounds, low-income earners, and people without university degrees for further investigation over social benefits fraud.
The investigation also revealed that the system is ineffective at identifying men and wealthy individuals who have committed social security fraud. The system assigns risk scores to social security applicants, triggering investigations automatically if the risk score is high. Those with the highest risk scores are referred to the agency’s “control” department, while those with lower scores are investigated by case workers without the presumption of criminal intent.
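The threshold-based routing described above can be sketched as follows. This is a minimal illustration only: the function name, threshold values, and outcome labels are assumptions for the sake of the example, not details of Försäkringskassan’s actual system.

```python
# Hypothetical sketch of risk-score routing as the article describes it.
# Thresholds and labels are illustrative assumptions, not the agency's real values.

def route_applicant(risk_score: float,
                    control_threshold: float = 0.8,
                    review_threshold: float = 0.5) -> str:
    """Route an applicant based on an algorithmically assigned risk score."""
    if risk_score >= control_threshold:
        # Highest scores: referred to the "control" department,
        # which investigates with a presumption of fraudulent intent.
        return "control"
    if risk_score >= review_threshold:
        # Lower scores: reviewed by a case worker without
        # the presumption of criminal intent.
        return "case_worker"
    return "no_action"

print(route_applicant(0.9))  # control
print(route_applicant(0.6))  # case_worker
print(route_applicant(0.2))  # no_action
```

The concern raised by Amnesty is not this routing logic itself but how the risk scores feeding it are produced, since biased scoring propagates directly into who gets investigated.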
Complaints have arisen from individuals incorrectly flagged by the system, who face delays and legal obstacles in accessing their welfare entitlements. Amnesty Tech’s senior investigative researcher, David Nolan, likened the system to a witch hunt and highlighted how AI systems can exacerbate inequality and discrimination.
The investigation also revealed that Försäkringskassan has not been transparent about the system’s inner workings and rejected Freedom of Information requests. The agency defended the system, stating that it operates within Swedish law and does not discriminate. However, Amnesty International insists that the system violates the right to equality and non-discrimination and must be discontinued immediately to prevent a scandal similar to the one in the Netherlands.
Under the AI Act, public authorities using AI systems to determine access to essential public services must adhere to strict technical, transparency, and governance rules. Amnesty International urges Sweden to comply with these regulations and discontinue the biased welfare system.
The Act also prohibits AI tools used for social scoring outright.
In 2018, Sweden’s Inspectorate for Social Security (ISF) concluded that the algorithm used by Försäkringskassan does not uphold equal treatment. The agency disputed this, claiming the analysis was flawed.
A data protection officer who formerly worked for Försäkringskassan stated in 2020 that the system breaches the EU General Data Protection Regulation (GDPR), as there is no legal basis for profiling individuals.
Amnesty International revealed on 13 November that AI tools utilized by Denmark’s welfare agency are promoting widespread surveillance, potentially leading to discrimination against certain groups.