Comment by PoignardAzur 2 months ago
Yeah, but GP gives the example of a 33% chance for true positive. That's more than enough to keep you on your toes.
Sooooo... You're saying that the chance of a true positive given an alert was much less than 33%?
I don't know if you meant it as a counterpoint to what I said, but it really isn't.
Yeah, but again, the problem isn't the high false positive rate.
The problem is that given any positive at all, the chance it points to a problem is still virtually zero.
If 6.8% of all tests were false positives and 2% were true positives, people probably wouldn't have silenced the alarm.
If it goes off 8 times a day and 2 of those are true positives, then people have recent memories of fixing problems the alarm pointed out.
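To put numbers on that hypothetical, here's a quick back-of-the-envelope sketch in Python (the 6.8%/2% rates and the 8-alerts-a-day figure are the ones above; the rest is just the definition of precision):

    # Hypothetical split from above: of all tests, 6.8% raise a
    # false alarm and 2% raise a true alarm.
    fp_rate = 0.068   # P(alert and no real problem)
    tp_rate = 0.02    # P(alert and real problem)

    # Precision: chance that any single alert points to a real problem.
    precision = tp_rate / (tp_rate + fp_rate)
    print(f"P(real problem | alert) = {precision:.1%}")   # ~22.7%

    # At 8 alerts a day, that's ~1.8 real problems surfaced per day,
    # which is what would keep the alarm credible.
    print(f"expected true positives/day = {8 * precision:.1f}")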
I'm not. If you have three alerts a day, a 33% chance of a true positive per alert means an alert points to a real problem roughly once a day on average.
That's enough to anchor "alert == I might find a problem" in the user's mind.
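Quick check of that arithmetic (a sketch; treating the three daily alerts as independent is my assumption):

    p_true = 0.33          # chance any given alert is a true positive
    alerts_per_day = 3

    # Expected real problems surfaced per day: ~1.
    print(f"expected/day = {p_true * alerts_per_day:.2f}")   # 0.99

    # Assuming independent alerts, ~70% of days see at least one
    # real problem, so "about once a day" holds on average.
    p_at_least_one = 1 - (1 - p_true) ** alerts_per_day
    print(f"P(>=1 real problem in a day) = {p_at_least_one:.0%}")   # 70%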
At work, we had an appliance which went into failsafe on average 8 times per day. The failsafe is meant to remove power from a device-under-test in case of something like fire in the DUT. The few actual critical failures were not detected by the appliance.
Instead, the failsafe merely invalidates the current test and leaves the appliance unable to run a test correctly until it is either power-cycled or the appliance's developer executes a secret series of commands that are not shared with us.
So of course an operator of the appliance found a way to feed in a false "I'm here!" with a loop, to trick the appliance into never going into failsafe…
That works out to ~6.8% of all tests being false positives, ~93.2% being true negatives, and ~3 tests that should have triggered the failsafe but didn't.
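For contrast, the appliance's effective precision using the numbers from this story (the key input being that none of the real failures were caught):

    # From the anecdote: ~8 failsafe trips per day, and none of the
    # few actual critical failures were detected, so observed true
    # positives per day ~= 0.
    trips_per_day = 8
    true_positives_per_day = 0

    precision = true_positives_per_day / trips_per_day
    print(f"P(real problem | failsafe trip) = {precision:.0%}")   # 0%
    # With zero precision, every trip is pure cost (an invalidated
    # test plus a power cycle), which is why operators bypassed it.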