Comment by bananaquant 4 days ago
What nobody seems to talk about is that their resulting models are basically garbage. If you look at the last confusion matrix they provide, their model is right in only about 2/3 of the cases where it makes a positive prediction, while the actual positive rate is already about 60%. So any improvement is marginal at best and a far cry from the ~90% precision you would expect from a model in such a high-stakes scenario. They could have thrown out half of the cases at random and gotten about the same reduction in case load without introducing any bias into the process.
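To make the arithmetic concrete, here is a minimal sketch. The counts below are made up purely to reproduce the figures cited above (~2/3 precision, ~60% base rate); the real numbers are in the article's confusion matrix:

```python
# Hypothetical confusion-matrix counts, chosen only to match the
# ~2/3 precision and ~60% base rate discussed above.
tp, fp = 400, 200   # predicted positive: correct 2/3 of the time
fn, tn = 200, 200   # remaining cases

total = tp + fp + fn + tn
precision = tp / (tp + fp)        # P(actually positive | predicted positive)
base_rate = (tp + fn) / total     # P(actually positive) with no model at all

print(f"precision: {precision:.2f}")   # 0.67
print(f"base rate:  {base_rate:.2f}")  # 0.60
```

A predictor that barely beats the base rate adds little over random triage, which is the point about throwing out half the cases at random.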
You can't tell whether a project will fail until you undertake it.
Amsterdam didn't deploy their models once they found the outcome unsatisfactory. I find that a perfectly fine result.