tanewishly 4 days ago

> 2. Two people who are identical except for their nationality face the same probability of a false positive.

That seems to fall afoul of the Base Rate Fallacy. E.g., consider two groups of 10,000 people and a test for A vs. B. The first group has 9,999 A and 1 B, the second has 1 A and 9,999 B. Unless you make your test blatantly ineffective, you're going to end up with very different false positive outcomes in the two groups -- irrespective of the test's performance.
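
A rough sketch of that arithmetic in Python (the 99% sensitivity/specificity figures are made up for illustration; the point is only that the same per-person error rate plays out very differently across the two groups):

  # Hypothetical test with the same sensitivity/specificity for everyone,
  # applied to the two populations above (assumed numbers, illustration only).
  sens, spec = 0.99, 0.99

  def flag_counts(n_a, n_b):
      false_pos = n_a * (1 - spec)   # true A's wrongly flagged as B
      true_pos = n_b * sens          # true B's correctly flagged as B
      return false_pos, true_pos

  for n_a, n_b in [(9999, 1), (1, 9999)]:
      fp, tp = flag_counts(n_a, n_b)
      print(f"A={n_a}, B={n_b}: per-person FP chance = {1 - spec:.0%}, "
            f"share of B-flags that are false = {fp / (fp + tp):.1%}")

  # Prints ~99.0% for the first group and ~0.0% for the second: same test,
  # same individual false positive probability, very different outcomes
  # once the base rate changes.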

tripletao 4 days ago

The linked article already notes that model accuracy degraded after their reweighting, which ultimately contributed to their abandonment of the project. (For completeness, they could also have considered nationality in the opposite direction, improving accuracy relative to the nominally blind baseline at the cost of even more disparate false positives; but that's so politically unacceptable that it's not even mentioned.)

My point is that even if we're willing to trade accuracy for "fairness", it's not possible for any classifier to satisfy both those definitions of fairness. By returning to human judgment they've obfuscated that problem but not solved it.
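
One way to see why both can't hold at once (a sketch with assumed error rates, not the article's actual model): hold the per-person true/false positive rates fixed across groups, and Bayes' rule forces the share of correct flags to differ whenever the base rates differ; equalizing that share instead would require a different false positive rate per group.

  # Precision (share of flags that are correct), via Bayes' rule, for a
  # classifier with the same TPR/FPR in both groups (rates are assumed).
  def precision(base_rate, tpr=0.80, fpr=0.05):
      tp = tpr * base_rate
      fp = fpr * (1 - base_rate)
      return tp / (tp + fp)

  # Same classifier, groups with different base rates of fraud:
  print(f"{precision(0.01):.2f}")   # ~0.14: most flags in this group are wrong
  print(f"{precision(0.20):.2f}")   # ~0.80: most flags in this group are right

  # Forcing both groups to the same precision means picking a different
  # fpr (or tpr) per group, which breaks "identical people face the same
  # probability of a false positive". With unequal base rates you can
  # satisfy one definition or the other, not both.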

  • tanewishly 17 hours ago

    My point was that there is no test (or classifier) that can guarantee even that one definition of fairness by itself, irrespective of the base rate. If the classifier acts the same independently of the base rate, there are always base rates (i.e., occurrence rates in the tested population) for which the classifier will fail the given definition.

    That illustrates that the given definition cannot hold universally, no matter what classifier you dream up -- unless the classifier is not independent of the base rate, i.e., one that gets more lenient when there's more fraud in the group. That seems undesirable if fairness is the goal.