Comment by like_any_other 4 days ago
> Ideally, a model would be fair in the senses that: 1. In aggregate over any nationality, people face the same probability of a false positive.
Why? We've been told time and time again that 'nations' don't really exist; they're just recent, meaningless social constructs [1]. And 'races' exist even less [2]. So why is it any worse if a model is biased on nation or race than on left-handedness, musical taste, or what brand of car one drives? They're all equally meaningless, aren't they?
[1] https://www.reddit.com/r/AskHistorians/comments/18ubjpv/the_...
[2] https://www.scientificamerican.com/article/race-is-a-social-...
I'm making a mathematical statement, not a moral one. I chose "nationality" as my input because the linked article focused on that, but the statement applies equally to any other input.
As already noted, any classifier better than a coin flip will disfavor some groups. The choice of which groups are acceptable to disfavor is political and somewhat arbitrary here. For example, these authors accept disfavoring people based on poverty ("sum of assets") or romantic relationship status ("single or partnered?"), but don't accept parenthood or nationality.
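To make the mathematical point concrete, here is a minimal simulation (purely illustrative: the Beta-distributed risk scores, group labels, 0.5 threshold, and sample size are my own assumptions, not anything from the article). A single, perfectly calibrated score with a single decision threshold still yields very different false-positive rates once the groups' base rates differ:

```python
# Minimal sketch, not from the article: synthetic Beta-distributed risk scores
# and a single 0.5 decision threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Two groups with different base rates, scored by the same perfectly
# calibrated risk score: P(y = 1 | score) = score in both groups.
groups = {
    "A": rng.beta(2, 6, n),  # lower-risk group, mean score ~0.25
    "B": rng.beta(4, 4, n),  # higher-risk group, mean score ~0.50
}

for name, score in groups.items():
    y = rng.random(n) < score             # true outcome drawn from the calibrated score
    pred = score > 0.5                    # identical threshold applied to everyone
    fpr = (pred & ~y).sum() / (~y).sum()  # P(flagged | actually negative)
    print(f"group {name}: base rate {y.mean():.2f}, false-positive rate {fpr:.2f}")

# Roughly: group A's false-positive rate lands near 0.04, group B's near 0.39.
# One calibrated score, one threshold -- yet the criterion quoted above
# (equal false-positive probability across groups) is badly violated as soon
# as base rates differ (cf. Chouldechova 2017; Kleinberg et al. 2016).
```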