Comment by ordu
The goal is to avoid penalizing people for their skin color, or for gender/sex/ethnicity/whatever. If some group has a higher rate of welfare fraud, a fair/unbiased system must still keep false positives for that group at the same level as for the general population. Ideally there would be no false positives at all, since they are costly for the people who are wrongly flagged, but sadly real systems are not like that. So the false positives have to be spread over all groups in proportion to the groups' sizes.
The situation is more complex than that, though. What I described is called "False Positive Share" in the article (or at least I think so), but the article discusses other metrics too.
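To make concrete what I mean by spreading false positives proportionally, here is a minimal sketch. The function name and the toy data are my own illustration, and the article's "False Positive Share" may be defined slightly differently (e.g. relative to the non-fraudulent population rather than the whole population), so treat this as the general idea rather than the exact metric:

```python
import numpy as np

def false_positive_share(y_true, y_pred, group):
    """For each group, compare its share of all false positives
    to its share of the overall population."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))

    # False positive: flagged as fraud, but actually not fraudulent.
    false_pos = (y_pred == 1) & (y_true == 0)
    total_fp = false_pos.sum()

    result = {}
    for g in np.unique(group):
        in_group = group == g
        fp_share = false_pos[in_group].sum() / total_fp if total_fp else 0.0
        pop_share = in_group.mean()
        result[g] = {"fp_share": fp_share, "population_share": pop_share}
    return result

# Toy example: group B is 40% of the population but receives two thirds
# of the false positives -- the kind of disproportion the metric surfaces.
y_true = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1]
y_pred = [1, 0, 0, 0, 0, 1, 1, 1, 0, 1]
group  = ["A", "A", "A", "A", "A", "B", "B", "A", "B", "B"]
print(false_positive_share(y_true, y_pred, group))
```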
The problem is that the policy should make the world better, but if it penalizes some groups for breaking the law, it can push those groups to break the law even more. It is possible to create biases this way, and it is possible to do it accidentally; or, rather, it is hard not to do it accidentally.
I'd recommend reading "Against Prediction"; it has a lot of examples of how this works. For instance, biased false negatives are also bad: they make it easier for some groups to break the law.
>The goal is to avoid penalizing people for their skin color [...]
That's not correct. The goal is to identify and flag fraud cases. If one group has a higher likelihood of committing fraud, that will show up in the data. The solution should not be to change the data but to educate that group to change its behavior.
Please note that I have not mentioned any specific group and do not have one in mind. However, an example of such a group that I have seen in my professional life would be 20-year-old female CEOs of construction companies (often connected to organized crime).