Algorithms Policed Welfare Systems For Years. Now They’re Under Fire for Bias

“People receiving a social allowance reserved for people with disabilities [the Allocation Adulte Handicapé, or AAH] are directly targeted by a variable in the algorithm,” says Bastien Le Querrec, legal expert at La Quadrature du Net. “The risk score for people receiving AAH and who are working is increased.”

Because it also scores single-parent families higher than two-parent families, the groups argue it indirectly discriminates against single mothers, who are statistically more likely to be sole-care givers. “In the criteria for the 2014 version of the algorithm, the score for beneficiaries who have been divorced for less than 18 months is higher,” says Le Querrec.

Changer de Cap says it has been approached by both single mothers and disabled people seeking help after being subjected to investigations.

The CNAF agency, which is in charge of distributing financial aid including housing, disability, and child benefits, did not immediately respond to a request for comment or to WIRED’s question about whether the algorithm currently in use had significantly changed since the 2014 version.

Just like in France, human rights groups in other European countries argue that such algorithms subject the lowest-income members of society to intense surveillance—often with profound consequences.

When tens of thousands of people in the Netherlands—many of them from the country’s Ghanaian community—were falsely accused of defrauding the child benefits system, they weren’t just ordered to repay the money the algorithm said they had stolen. Many of them claim they were also left with spiraling debt and destroyed credit ratings.

The problem isn’t the way such algorithms are designed, but their use in the welfare system at all, says Soizic Pénicaud, a lecturer in AI policy at Sciences Po Paris, who previously worked for the French government on transparency of public sector algorithms. “Using algorithms in the context of social policy comes with way more risks than it comes with benefits,” she says. “I haven’t seen any example in Europe or in the world in which these systems have been used with positive results.”

The case has ramifications beyond France. Welfare algorithms are expected to be an early test of how the EU’s new AI rules will be enforced once they take effect in February 2025. From then, “social scoring”—the use of AI systems to evaluate people’s behavior and then subject some of them to detrimental treatment—will be banned across the bloc.

“Many of these welfare systems that do this fraud detection may, in my opinion, be social scoring in practice,” says Matthias Spielkamp, cofounder of the nonprofit AlgorithmWatch. Yet public sector representatives are likely to disagree with that definition—and arguments about how to define these systems are likely to end up in court. “I think this is a very hard question,” says Spielkamp.


