Amnesty International has called for the immediate discontinuation of Sweden's algorithm-driven welfare system, citing its disproportionate targeting of marginalized groups for benefit fraud investigations.
An investigation by Lighthouse Reports and Svenska Dagbladet revealed that the machine learning system used by Försäkringskassan, Sweden's Social Insurance Agency, flags women, low-income earners, and individuals with foreign backgrounds more frequently for fraud inquiries.
Since its implementation in 2013, the system has assigned risk scores to applicants, triggering investigations of those deemed high risk. However, it has proven largely ineffective at identifying actual fraud among wealthier individuals and men.
Amnesty's report highlights that flagged individuals face invasive scrutiny, including social media monitoring and interviews with neighbors, which can delay their access to welfare benefits.
David Nolan, senior investigative researcher at Amnesty Tech, criticized the system for perpetuating inequality and discrimination, stating, "This is a clear example of people's right to social security, equality, and privacy being violated by a biased system."