ordu 5 days ago

The goal is to avoid penalizing people for their skin color, or for gender/sex/ethnicity/whatever. If some group has a higher rate of welfare fraud, a fair/unbiased system must keep the false-positive rate for that group at the same level as for the general population. Ideally there would be no false positives at all, because they are costly for the people who are wrongly flagged, but sadly real systems are not like that. So the false positives that do occur have to be spread across all groups in proportion to the groups' sizes.

Though the situation is more complex than that. What I described is called "False Positive Share" in the article (or at least I think it is), but the article discusses other metrics too.
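
To make the difference concrete, here is a toy sketch (my own, with made-up data - not from the article) of a group's false-positive rate versus its share of all false positives:

    # Toy example (mine, not from the article): two ways to slice
    # false positives across groups.
    from collections import defaultdict

    # Hypothetical records: (group, flagged_by_model, actually_fraud)
    records = [
        ("A", True,  False), ("A", False, False), ("A", True,  True),
        ("B", True,  False), ("B", True,  False), ("B", False, False),
    ]

    fp = defaultdict(int)         # wrongly flagged people per group
    innocent = defaultdict(int)   # all non-fraud people per group

    for group, flagged, fraud in records:
        if not fraud:
            innocent[group] += 1
            if flagged:
                fp[group] += 1

    total_fp = sum(fp.values())
    for group in sorted(innocent):
        # False-positive rate: the chance an innocent person in this
        # group gets flagged.
        print(group, "FPR:", fp[group] / innocent[group])
        # False-positive share: this group's slice of all wrongful flags;
        # to judge fairness, compare it to the group's population share.
        print(group, "FP share:", fp[group] / total_fp)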

The problem is that a policy should make the world better, but if it penalizes some groups for lawbreaking, it can push those groups to break the law even more. It is possible to create biases this way, and it is possible to do it accidentally. Or rather, it is hard not to do it accidentally.

I'd recommend reading "Against Prediction"; it has a lot of examples of how this works. For example, biased false negatives are also bad: they make it easier for some groups to break the law.
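
To illustrate the feedback loop, here is a toy simulation (again mine - not from the book or the article) where both groups commit fraud at exactly the same rate, yet a naive targeting policy locks in its initial bias:

    import random

    random.seed(0)
    POP = 10_000                      # checks available per round
    TRUE_RATE = 0.05                  # both groups offend at the SAME rate
    scrutiny = {"A": 0.6, "B": 0.4}   # initial bias: A gets checked more

    for rnd in range(5):
        caught = {}
        for group, share in scrutiny.items():
            checks = int(POP * share)
            # Fraud is only detected among the people who get checked.
            caught[group] = sum(random.random() < TRUE_RATE
                                for _ in range(checks))
        total = sum(caught.values()) or 1
        # Naive policy: allocate next round's checks by observed catches.
        scrutiny = {g: c / total for g, c in caught.items()}
        print(rnd, {g: round(s, 2) for g, s in scrutiny.items()})

    # Group A keeps producing ~60% of detected fraud simply because it
    # is checked more - the data appears to justify the very bias that
    # produced it.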

jsemrau 5 days ago

>The goal is to avoid penalizing people for their skin color [...]

That's not correct. The goal is to identify and flag fraud cases. If one group is more likely to commit fraud, then that will show up in the data. The solution should not be to change the data but educate that group to change their behavior.

Please note that I have not mentioned any specific group and do not have a specific group in mind. However, an example of such a group that I have seen in my professional life would be 20-year-old female CEOs of construction companies (often connected to organized crime).

  • ordu 5 days ago

    > The solution should not be to change the data but educate that group to change their behavior.

    1. This is easier said than done.

    2. In reality what you see is a correlation. If you try to educate all 20-year-old women not to become organized-crime-connected CEOs of construction companies, your efforts will be wasted on 99% of these people, because they are either not connected to organized crime or not going to become CEOs. Moreover, your very efforts will lead to discrimination against 20-year-old women: if not through public perception of them, then because you've just made it harder for them to become CEOs.

    > The goal is to identify and flag fraud cases.

    Not quite. The goal is to reduce the number of fraud cases. Identifying and flagging is a method of achieving that goal. But policymakers have a lot of other goals, like avoiding discrimination or reducing the murder rate. By focusing on one goal, policymakers might undermine the others.

    As a side (almost metaphysical) note: this is one of the reasons why techies are bad at social problems. Their math education taught them to ignore all irrelevant details when dealing with a problem, but society is a big complex system where everything is connected, so in general you can't ignore anything, because everything is relevant. The education has the upper hand, though, so techies tend to throw away as much complexity as is needed to make the problem solvable. They will never accept that they don't know how to solve a problem.

    • jsemrau 5 days ago

      "your efforts will lead to a discrimination of 20 years old females"

      I'd think this is an extremely far-fetched example that fails at basic logic. Just because a very specific scenario gets flagged does not mean that the scenario is generalized to all CEOs, all women, or all 20-year-olds.

      • drdaeman 4 days ago

        I think their point is that many of us, upon hearing "group A tends to exhibit more of some negative trait X than some other group B", mentally start to associate A with X, and this creates a social stigma - just because of how our brains work.

        I wish there were some way to phrase such statements in a nonjudgmental way, without introducing a perception bias...

    • drdaeman 4 days ago

      > The goal is to reduce the number of fraud cases.

      I'm sorry, but I fail to see how you reached this conclusion. Can you please elaborate? The way I understand it, a detection system cannot affect anything about its inputs - you need a feedback loop for that to happen. And I don't see anything like that within the scope of the project as covered by the article.

  • halostatue 5 days ago

    In practice, investigations tend to find whatever they were started to find. The beginning of the article also suggests that Amsterdam's investigations found no higher rate of actual fraud amongst the groups that human reviewers, through implicit bias, targeted more frequently.

    In North America, we know that white people use hard drugs at a slightly higher rate than non-white people. However, the arrest and conviction rates of hard drug users are several times higher for non-white people than for white people. (I mention North America because similar data exist for both Canada and the USA, but the exact ratios and which groups are negatively impacted differ.)

    Similarly, when it comes to accusations of welfare fraud, there is substantial bias in investigations of non-whites, and there are deep-seated racist stereotypes (thanks for that, Reagan) that don't hold up to scrutiny, especially since the proportion of welfare recipients is slightly higher amongst whites than amongst non-whites[1].

    So…saying that the goal is to avoid penalizing people for [innate characteristics] is more correct and a better use of time. The city of Amsterdam already knew that its fraud investigations were flawed.

    [1] In the US, based on 2022 data, https://www.census.gov/library/stories/2022/05/who-is-receiv... shows that, excluding Medicaid/CHIP, the rate of welfare receipt is higher for whites.