Comment by throwawayqqq11 5 days ago
[flagged]
Being flagged as potential fraud based on, e.g., ethnicity is exactly what you want to eliminate, so you have to start from the assumption of an even distribution.
From the article:
> Deciding which definition of fairness to optimize for is a question of values and context.
This optimization is the human feedback required to keep the model from stagnating in a local optimum.
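To make the tension in that quote concrete: two common formalizations, demographic parity (equal flag rates across groups) and equal opportunity (equal true-positive rates across groups), can disagree on the very same predictions. A minimal sketch with made-up data (none of these numbers come from the article):

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical audit data: group membership, true fraud label, model flag.
    group = rng.integers(0, 2, size=10_000)
    # Unequal base rates of actual fraud between groups (illustrative assumption).
    y_true = rng.binomial(1, np.where(group == 1, 0.08, 0.04))
    # The model treats individuals identically: 70% TPR, 5% FPR for everyone.
    y_pred = rng.binomial(1, np.where(y_true == 1, 0.70, 0.05))

    g0, g1 = group == 0, group == 1

    def flag_rate(mask):
        # Fraction of the group flagged at all (demographic parity view).
        return y_pred[mask].mean()

    def tpr(mask):
        # Fraction of the group's true fraud cases that get caught (equal opportunity view).
        return y_pred[mask & (y_true == 1)].mean()

    print("flag-rate gap:", abs(flag_rate(g0) - flag_rate(g1)))  # large
    print("TPR gap:      ", abs(tpr(g0) - tpr(g1)))              # near zero

With unequal base rates, a model that treats every individual identically still flags the groups at different overall rates, so you can satisfy one definition while badly violating the other. Which gap you force to zero is exactly the values-and-context question the article raises.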
No... the predetermined bias in this story is obviously the assumption that all subgroups of people behave identically w.r.t. welfare applications, which the data itself did not support and which a moment's consideration of socioeconomics would debunk. When they tried to kludge the weights to fit that predetermined bias, the model performed so poorly in a pilot run that the city shut it down.