bananaquant 4 days ago

What nobody seems to talk about is that their resulting models are basically garbage. If you look at the last confusion matrix provided, their model is right in only about 2/3 of cases when it makes a positive prediction, while the actual positives are about 60% of all cases. So any improvement is marginal at best and a far cry from the ~90% precision you would expect from a model in such a high-stakes scenario. They could have thrown out half of the cases at random and had about the same reduction in case load without introducing any bias into the process.
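
A minimal sketch of that arithmetic in Python; the counts below are hypothetical, chosen only to reproduce the ratios cited above (~2/3 precision, ~60% actual positives), not the article's real numbers:

```python
# Hypothetical confusion-matrix counts matching the ratios above.
tp, fp = 200, 100   # flagged cases: correct vs. incorrect flags
fn, tn = 400, 300   # unflagged cases: missed positives vs. correct negatives

total = tp + fp + fn + tn
precision = tp / (tp + fp)      # correct when flagging: ~0.67
base_rate = (tp + fn) / total   # share of actual positives: 0.60

print(f"precision={precision:.2f}, base rate={base_rate:.2f}")
# Picking cases at random would hit fraud at the base rate (0.60),
# so ~0.67 precision is only a marginal improvement over chance.
```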

xyzal 4 days ago

You can't tell a project will fail until you undertake it.

Amsterdam didn't deploy the model once they found the outcome wasn't satisfactory. I find that a perfectly fine result.

delusional 4 days ago

> What nobody seems to talk about is that their resulting models are basically garbage.

The post does talk about it when it briefly mentions that the goal of building the model (to decrease the number of cases investigated while increasing the rate of finding fraud) wasn't achieved. They don't say any more than that because that's not the point they are making.

Anyway, the project was shelved after a pilot. So your point is entirely false.

  • bananaquant 4 days ago

    Good catch about the project being shelved. It is buried pretty deep in the document, to the point of being misleading:

    > In late November 2023, the city announced that it would shelve the pilot.

    I would agree that the implications regarding the use of those models do not hold, but not the ones about their quality.