anonym29 4 days ago

Legal frameworks can indeed contradict mathematical optimization objectives, statistical patterns exist independently of our social preferences about them, and aggregate behavioral differences between groups (whatever their causes) will produce disparate algorithmic outcomes when accurately measured.

If certain demographic groups legitimately have higher base rates of welfare errors (due to language barriers, unfamiliarity with bureaucratic systems, economic desperation, or other factors), then an accurate algorithm will necessarily produce disparate outcomes.
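To make the base-rate point concrete, here's a toy sketch (the risk scores are made-up numbers, not from the article): a perfectly calibrated detector that flags every case whose true risk exceeds a single uniform threshold still flags the two groups at different rates, simply because one group's scores skew higher.

```python
THRESHOLD = 0.5  # same cutoff applied to everyone

# Hypothetical per-case risk scores; group_b skews higher due to the
# factors mentioned above (language barriers, unfamiliarity, etc.).
group_a = [0.1, 0.2, 0.3, 0.6, 0.1, 0.2, 0.1, 0.3, 0.2, 0.1]
group_b = [0.2, 0.6, 0.7, 0.3, 0.8, 0.2, 0.6, 0.3, 0.7, 0.2]

def flag_rate(scores):
    """Fraction of cases flagged under the uniform threshold."""
    return sum(s > THRESHOLD for s in scores) / len(scores)

print(flag_rate(group_a))  # 0.1 -- 1 of 10 cases flagged
print(flag_rate(group_b))  # 0.5 -- 5 of 10 cases flagged
```

No bias enters anywhere in that code; the disparity is entirely downstream of the score distributions.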

If we dig deeper, the authors of this "fair" fraud detection system are really trying to answer three distinct underlying questions -

1. Do group differences in fraud rates actually exist?

2. What mechanisms drive these differences?

3. Should algorithms optimize for accuracy or equality of outcomes?
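Question 3 is a genuine tradeoff, not a free choice, and a tiny sketch (again with invented numbers) shows why: if base rates differ, equalizing flag rates across groups forces group-specific thresholds, which means letting real errors through in the higher-rate group.

```python
# Hypothetical (risk_score, truly_erroneous) pairs for two groups.
cases_a = [(0.1, False), (0.3, False), (0.6, True), (0.2, False)]
cases_b = [(0.6, True), (0.7, True), (0.8, True), (0.3, False)]

def flagged(cases, threshold):
    """True-error labels of the cases flagged at this threshold."""
    return [truth for score, truth in cases if score > threshold]

# Accuracy-optimizing: one 0.5 threshold for everyone.
print(len(flagged(cases_a, 0.5)))   # 1 flag in group A
print(len(flagged(cases_b, 0.5)))   # 3 flags in group B -- disparate outcome

# Outcome-equalizing: raise B's threshold until both groups flag 1 case...
print(len(flagged(cases_b, 0.75)))  # 1 flag -- but 2 true errors now go unflagged
```

You can have equal outcomes or a single accuracy-maximizing policy, but with unequal base rates you can't have both.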

The article conflates these, treating disparate outcomes as presumptive evidence of algorithmic bias rather than potentially accurate detection of real differences.

Pattern recognition that produces disparate outcomes isn't inherently "broken"; it may simply be detecting real underlying patterns whose causes are uncomfortable to acknowledge or difficult to address through algorithmic modifications alone.