_aavaa_ 2 months ago

I would not. High false alarm rates are a problem in all sorts of industries when it comes to warnings and alerts. Too many alerts, or too many false positives, cause operators (or nurses in this example) to start ignoring the warnings altogether.

tcmart14 2 months ago

This is the real problem. In a perfect world, everyone pays attention to alarms with the same attentiveness all the time, but that just isn't reality. Before going into building software, I was in the Navy and afterwards worked as a chemical system tech. In the Navy, I worked in JP-5 pumprooms. In both environments we had alarms, and in both we learned which were nuisance alarms and which weren't, or just took alarms with a grain of salt and therefore never paid proper attention to them.

That is always the issue with alarms: you have a fine line to walk. Too many alarms and people become complacent and learn to ignore them. Too few and you don't draw the attention that is needed.

the__alchemist 2 months ago

More data with appropriate confidence intervals can always be leveraged for good. I hear this argument often regarding medical systems, and recognize the practical impact. The problem is incorrect use of this knowledge (e.g. to overtreat), not having the knowledge.

  • _aavaa_ 2 months ago

    No, the problem is information overload. Even without these errors, nurses are often overburdened with work and paperwork. Adding another alarm with a >50% false positive rate is going to make that situation worse, and the nurses will start ignoring the unreliable warning.

    • the__alchemist 2 months ago

      I suspect we are on the same page. My point is about using information as described in the article to improve the system. I do not think an on/off "alarm" is the way to do this. The key is to use information from signal processing theory (e.g. how a Kalman filter updates its estimate) as input into what medical action to take. The reaction against more diagnostics etc. is due to how they are applied, like a brute-force alarm, leading to worse outcomes through, for example, unnecessary surgeries.

      The reduction I am arguing against is: "Historically, extra information and diagnostics that have an error margin results in worse outcomes because we misapply it; therefore don't build these systems."
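
      A minimal sketch of the kind of confidence-weighted update I mean (a 1-D Kalman filter with no process noise; the variable names and numbers are illustrative assumptions, not from the article):

          # Fold each noisy reading into a running estimate instead of
          # thresholding every raw reading into an on/off alarm.
          def kalman_update(estimate, variance, reading, reading_variance):
              gain = variance / (variance + reading_variance)  # trust in this reading
              new_estimate = estimate + gain * (reading - estimate)
              new_variance = (1.0 - gain) * variance           # uncertainty shrinks
              return new_estimate, new_variance

          estimate, variance = 100.0, 25.0      # prior belief about the vital sign
          for reading in (104.0, 99.0, 107.0):  # noisy sensor readings
              estimate, variance = kalman_update(estimate, variance, reading, 16.0)

      A downstream decision can then weigh both the estimate and its variance, rather than reacting to a single binary bit.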

      • _aavaa_ 2 months ago

        Yeah, we agree. That reduction is also what I originally commented against.

PoignardAzur 2 months ago

Yeah, but GP gives the example of a 33% chance of a true positive. That's more than enough to keep you on your toes.

  • IIsi50MHz 2 months ago

    At work, we had an appliance which went into failsafe on average 8 times per day. The failsafe is meant to remove power from a device-under-test in case of something like fire in the DUT. The few actual critical failures were not detected by the appliance.

    Instead, the failsafe merely invalidated the current test and left the appliance unable to run a test correctly until it was either power cycled or the appliance's developer executed a secret series of commands that were not shared with us.

    So of course an operator of the appliance found a way to feed in a false "I'm here!" with a loop, to trick the appliance into never going into failsafe…

    That works out to ~6.8% of all tests being false positives and ~93.2% true negatives, while the ~3 tests that should have triggered the failsafe did not.

    • PoignardAzur 2 months ago

      Sooooo... You're saying that the chance of a true positive given an alert was much less than 33%?

      I don't know if you meant it as a counterpoint to what I said, but it really isn't.

      • IIsi50MHz 2 months ago

        Sorry, I meant to say that with only 6.8% of all tests triggering a false alarm (and 0% triggering a true alarm), a test operator still found a way to prevent the alarm from occurring rather than being kept on their toes.

        • PoignardAzur 2 months ago

          Yeah, but again, the problem isn't the high false positive rate.

          The problem is that given any positive at all, the chance it points to a problem is still virtually zero.

          If 6.8% of all tests were false positives and 2% were true positives, people probably wouldn't have silenced the alarm.

          If it goes off 8 times a day and 2 of those are true positives, then people have recent memories of having to fix problems pointed out by the alarm.
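
          The quantity that matters here is the positive predictive value, TP / (TP + FP). A quick sketch with the per-day numbers from this thread:

              def ppv(true_pos, false_pos):
                  # Chance that a given alarm points at a real problem.
                  return true_pos / (true_pos + false_pos)

              print(ppv(0, 8))  # the appliance as described: 0.0, pure noise
              print(ppv(2, 6))  # the hypothetical 2-of-8 case: 0.25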

  • emptiestplace 2 months ago

    I hope you are joking.

    • PoignardAzur 2 months ago

      I'm not. If you have three alerts a day, a 33% chance of a true positive per alert means you'll get an alert pointing to a real problem about once a day on average.

      That's enough to anchor "alert == I might find a problem" in the user's mind.
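
      Spelled out (a quick sketch, assuming the alerts are independent):

          p_true = 0.33                # chance a given alert is a true positive
          alerts = 3                   # alerts per day
          expected_hits = alerts * p_true              # ~1 real problem per day
          p_at_least_one = 1 - (1 - p_true) ** alerts  # ~0.70, so most days see one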