Comment by vessenes 2 months ago

22 replies

I like this analysis, although I come to a different conclusion: if AI can give early warning to nursing staff, telling them "look closer", and over 1/3 of the time it was right, that seems great. Right now in a 30-bed unit, nurses have to keep track of 30 sets of data. With this, they could focus in on 3 sets when an alarm goes off. I believe these systems will get better over time as well. But, as a patient, I'd 100% take a ward with that early AI warning, even with a 66% chance of false positives, over one with no such tech. Wouldn't you?

_aavaa_ 2 months ago

I would not. High false-alarm rates are a problem across industries when it comes to warnings and alerts. Too many alerts, or too many false positives, cause operators (or nurses, in this example) to start ignoring the warnings.

  • tcmart14 2 months ago

    This is the real problem. In a perfect world, everyone would pay attention to alarms with the same attentiveness all the time. But that just isn't reality. Before going into building software, I was in the Navy, and after that I worked as a chemical systems tech. In the Navy, I worked in JP-5 pumprooms. In both environments we had alarms, and in both we learned which were nuisance alarms and which weren't, or just took alarms with a grain of salt and therefore never paid them proper attention.

    That is always the issue with alarms. You have a fine line to walk: too many alarms and people become complacent and learn to ignore them; too few and you don't draw the attention that is needed.

  • the__alchemist 2 months ago

    More data with appropriate confidence intervals can always be leveraged for good. I hear this objection often in medical systems, and recognize the practical impact. The problem is incorrect use of the knowledge (e.g. to overtreat), not having the knowledge.

    • _aavaa_ 2 months ago

      No, the problem is information overload. Even without these errors, nurses are often overburdened with work and paperwork. Adding another alarm with a >50% false-positive rate is going to make that situation worse. And the nurses will start ignoring the unreliable warning.

      • the__alchemist 2 months ago

        I suspect we are on the same page. My point is about using information like that described in the article to improve the system. I do not think an on/off "alarm" is the way to do it. The key is to use ideas from signal processing theory (e.g. how a Kalman filter updates an estimate as noisy measurements arrive) as input into what medical action to take. The backlash against more diagnostics is due to how they are applied, like a brute-force alarm, leading to worse outcomes through, for example, unnecessary surgeries.

        The reduction I am arguing against is: "Historically, extra information and diagnostics that have an error margin result in worse outcomes because we misapply them; therefore don't build these systems."
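
A toy sketch of the graded approach described above: each noisy reading updates a running probability of deterioration via Bayes' rule, instead of firing a binary alarm per reading. The sensitivity, false-flag rate, and base rate below are made-up illustration values, not figures from the article or the comment.

```python
# Toy sketch: fold each noisy reading into a running probability of
# deterioration, rather than firing an on/off alarm per reading.
# All rates here are invented for illustration.
SENSITIVITY = 0.90   # P(reading flags | patient deteriorating) -- assumed
FALSE_RATE  = 0.30   # P(reading flags | patient fine) -- assumed

def update(prior: float, flagged: bool) -> float:
    """One Bayesian update of P(deteriorating) given a single reading."""
    p_flag_sick = SENSITIVITY if flagged else 1 - SENSITIVITY
    p_flag_well = FALSE_RATE if flagged else 1 - FALSE_RATE
    evidence = p_flag_sick * prior + p_flag_well * (1 - prior)
    return p_flag_sick * prior / evidence

p = 0.05  # assumed base rate of deterioration on the ward
for flagged in [True, True, False, True]:
    p = update(p, flagged)
print(f"risk estimate: {p:.2f}")  # a graded score, not an on/off alarm
```

The nurse (or a downstream rule) then sees a risk score that rises and falls with the evidence, closer in spirit to how a Kalman filter blends noisy measurements than to a threshold alarm.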

        • _aavaa_ 2 months ago

          Yeah, we agree. That reduction is also what I originally commented against.

  • PoignardAzur 2 months ago

    Yeah, but GP gives the example of a 33% chance for true positive. That's more than enough to keep you on your toes.

    • IIsi50MHz 2 months ago

      At work, we had an appliance which went into failsafe on average 8 times per day. The failsafe is meant to remove power from a device-under-test (DUT) in case of something like a fire in the DUT. The few actual critical failures were not detected by the appliance.

      Instead, tripping the failsafe merely invalidates the current test and leaves the appliance unable to run a test correctly until it is either power-cycled or the appliance's developer executes a secret series of commands that are not shared with us.

      So of course an operator of the appliance found a way to feed in a false "I'm here!" signal in a loop, tricking the appliance into never going into failsafe…

      That works out to ~6.8% of all tests being false positives, ~93.2% being true negatives, and ~3 tests that should have triggered the failsafe but did not.
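
For concreteness, a sketch of what those rough figures imply about the failsafe's precision (the total test count is an assumption; only the percentages above come from the comment):

```python
# Sketch: P(real failure | failsafe tripped) from the rough figures above.
# Assume 1000 tests for illustration; the actual count isn't given.
total = 1000
false_pos = round(0.068 * total)  # ~6.8% of tests tripped the failsafe falsely
true_pos = 0                      # the few real failures were all missed
false_neg = 3                     # ~3 failures the failsafe never caught

precision = true_pos / (true_pos + false_pos)
print(precision)  # 0.0: a tripped failsafe never indicated a real failure
```

Which is the degenerate case: with zero true positives, every alert is noise, so operators defeating the failsafe is the predictable outcome.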

      • PoignardAzur 2 months ago

        Sooooo... you're saying that the chance of a true positive, given an alert, was much less than 33%?

        I don't know if you meant it as a counterpoint to what I said, but it really isn't one.

    • emptiestplace 2 months ago

      I hope you are joking.

      • PoignardAzur 2 months ago

        I'm not. If you have three alerts a day and a 33% chance of a true positive per alert, you'll get an alert pointing to a real problem about once a day on average.

        That's enough to anchor "alert == I might find a problem" in the user's mind.
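
The arithmetic behind that claim, assuming the three daily alerts are independent (a sketch; the numbers come from the 33%/three-alerts example above):

```python
# Sketch: with 3 independent alerts/day, each with a 1/3 chance of being a
# true positive, how often does a day include at least one real catch?
alerts_per_day = 3
p_true = 1 / 3

# P(at least one true positive) = 1 - P(every alert is false)
p_at_least_one = 1 - (1 - p_true) ** alerts_per_day
print(f"{p_at_least_one:.1%}")  # ~70% of days include at least one real problem
```

So roughly seven days in ten include at least one real catch, with one per day on average, which supports the anchoring effect described above.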

rscho 2 months ago

No, many people working in clinical units wouldn't, because of what might happen on false alarms: what GP said, more meds, more interventions. It's not at all clear whether such systems would help with current workflows and current technology. One of the most famous books about medicine says that good medicine is doing nothing as much as possible. That's still very true in 2024, and it probably will be for a long time yet.

hammock 2 months ago

I like this analysis, although I come to a different conclusion: if AI can allow nurses to manage 10x as many beds (30 vs. 3), a hospital can now let go of 90% of its nursing staff. Wouldn't you?

  • netsharc 2 months ago

    Luckily most hospitals in the world seem to be short-staffed, and the population of sick is growing (because people are living longer).

    • hammock 2 months ago

      Generally speaking, they aren't short-staffed because there aren't enough nurses, but because they can't, or won't, pay them enough. Those same hospitals hire large numbers of travel nurses to supplement their "short staff" at pay rates double or triple that of a local nurse.

      And the nurses who want decent pay and can do travel nursing, do travel nursing.

  • namaria 2 months ago

    Coming to the conclusion that cutting 90% of nursing staff is possible and desirable is an astonishingly disconnected take.

0xdeadbeefbabe 2 months ago

It's not just the false positive rate; it's also the rate at which you train nurses to ignore alerts.