AI tool cuts unexpected deaths in hospital by 26%, Canadian study finds
(cbc.ca)
230 points by isaacfrond 2 months ago
I like this analysis, although I come to a different conclusion: if AI can give early warning to nursing staff, telling them 'look closer', and over 1/3 of the time it was right, that seems great. Right now in a 30-bed unit, nurses have to keep track of 30 sets of data. With this, they could focus in on 3 sets when an alarm goes off. I believe these systems will get better over time as well. But, as a patient, I'd 100% take a ward with that early AI warning and its 66% chance of false positives over one with no such tech. Wouldn't you?
I would not. High false alarm rates are a problem in all sorts of industries when it comes to warnings and alerts. Too many alerts, or too many false positives, cause operators (or nurses in this example) to start ignoring such warnings.
This is the real problem. In a perfect world, everyone pays attention to alarms with the same attentiveness all the time. But it just isn't reality. Before going into building software, I was in the Navy and after that did work as a chemical system tech. In the Navy, I worked in JP-5 pumprooms. In both environments we had alarms, and in both environments we learned which were nuisance alarms and which weren't, or just took alarms with a grain of salt and therefore never paid proper attention to them.
That is always the issue with alarms. You have a fine line to walk. Too many alarms and people become complacent and learn to ignore alarms. Too few alarms and you don't draw the attention that is needed.
More data with appropriate confidence intervals can always be leveraged for good. I hear this application often in medical systems, and recognize the practical impact. The problem is incorrect use of this knowledge (e.g., to overtreat), not having the knowledge.
Yeah, but GP gives the example of a 33% chance for true positive. That's more than enough to keep you on your toes.
No, many people working in clinical units wouldn't. Because of what might happen on false alarms. What GP said: more meds, more interventions. It's not clear at all whether such systems would help with current workflows and current technology. One of the most famous books about medicine says that good medicine is doing nothing as much as possible. It's still very true in 2024, and probably for a long time still.
Generally speaking, they aren’t short staffed because there aren’t enough nurses, but because they can’t/won’t pay them enough. Those same hospitals hire large numbers of travel nurses to supplement their “short staff” at pay rates double or triple a local nurse.
And the nurses who want decent pay and can do travel nursing, do travel nursing.
False positives are not harmless. Textbook example is breast cancer screening https://theconversation.com/breast-cancer-screening-in-austr...
That's true, but having no alarm at all can be deadly. It's a matter of risk assessment. Not to mention AI can be refined and improved.
It's not just a false positive rate, but also the rate you train nurses to ignore alerts.
> 1 patient in 156 was helped by this intervention
the headline says we're talking about death: does that mean 1 life was saved for every 156 patients?
> In addition, they had 2 false alarms for each true alarm and ... and possibly increased risk from said interventions
but wouldn't this study have captured any deaths from those interventions, so the 1 out of 156 life-savings was net?
an individual would probably not make that choice, but the population could easily, the insurance company might, religious leaders might, etc.
this study was measuring deaths and what you are suggesting would be outside this study, but it could be measured also.
You can't automate it. You have to look at the data and charts to figure out the specifics you want and then you plug and chug. I haven't looked deeply at this though but whenever researchers use relative risk and it shows a profound effect, I always calculate the absolute risk to make sure that the intervention is effective.
Many researchers go to relative risk because it shows better results!
I know everyone hates "I asked ChatGPT" comments but...I feel it's relevant here.
It came to roughly the same conclusion as the gp comment when provided with the study PDF.
https://chatgpt.com/share/66eb09e3-7a74-8008-afa8-3b60161d24...
(Though obviously this approach still requires you to go and look at the PDF yourself to make sure it isn't making anything up)
I think that ChatGPT result is a Rorschach test; it wrote things like "The percentage reduction could be exaggerated based..."
"Could" is doing a lot of work in letting you interpret what it's saying however you like.
> How can we automate this kind of analysis?
Happy to talk about it.
Are you in the Healthcare industry?
That’s a good point, a similar conversation was had around the Covid jabs, with some research re vaccine mandates concluding “heads of governments, schools, healthcare facilities, and private businesses (were) misled by the vaccines’ reported 95% relative risk reduction”
https://www.sciencedirect.com/science/article/pii/S277265332...
"The deterioration prediction model was a time-aware multivariate adaptive regression spline (MARS) model"
Thanks for posting this! Much better source than CBC article.
I found this interesting:
> 1 truly alerted patient for every 2 falsely alerted patients was deemed an acceptable number of false alarms
Interesting, and I think it makes sense.
In an ideal world, the nurse to patient ratio would be high enough that patients could be seen on regular rotation frequently. I've never been in a hospital where this was the case. So a system that can correctly prioritize resources for critical cases even if it's pulling resources away from non-critical cases will probably result in a net improved outcome.
GOFAI from 1991! Wasn't AI back then though.
https://en.m.wikipedia.org/wiki/Multivariate_adaptive_regres...
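If anyone wants a feel for how simple the core idea is: MARS fits a linear model on hinge features max(0, x - t) and max(0, t - x), with the knots t normally chosen by a greedy search. A minimal numpy sketch, with hand-picked knots standing in for that search and made-up data (nothing to do with the study's actual model):

    import numpy as np

    def hinge_features(x, knots):
        # Expand a 1-D variable into MARS-style hinge basis functions
        # max(0, x - t) and max(0, t - x) for each knot t.
        cols = [np.maximum(0, x - t) for t in knots] + \
               [np.maximum(0, t - x) for t in knots]
        return np.column_stack(cols)

    # Toy data: 'risk' rises sharply once white-cell count passes ~11.
    rng = np.random.default_rng(0)
    wbc = rng.uniform(2, 30, 500)
    risk = np.maximum(0, wbc - 11) * 0.1 + rng.normal(0, 0.05, 500)

    # Hard-coded knots stand in for MARS's greedy knot search.
    X = hinge_features(wbc, knots=[8, 11, 15])
    X = np.column_stack([np.ones_like(wbc), X])      # intercept column
    coef, *_ = np.linalg.lstsq(X, risk, rcond=None)  # ordinary least squares

    print(coef)  # piecewise-linear fit, easy to sanity-check by eye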
I'm also shocked at how readable this wikipedia article is relative to most articles about statistical methods.
Wow you're right. I mean it's all maths articles on Wikipedia, not just statistics.
I think there are two causes of Wikipedia maths articles' general awfulness:
1. They're probably written by people that just learnt about them and want to show off their superior knowledge rather than explain the concept.
2. The people writing them think it's supposed to be a precise mathematical definition of the concept, rather than an easy to understand introduction. It's like they're writing a formal model instead of a tutorial.
Often the Mathworld articles are a lot better than Wikipedia, when they exist at least.
Fun bit of trivia (though depressing) from the wiki
The term "MARS" is trademarked and licensed to Salford Systems. In order to avoid trademark infringements, many open-source implementations of MARS are called "Earth".
The question is will this lead to better care or a reduction in resources? Technology allows companies to become 'just good enough'. Any better than 'just good enough' and resources are withdrawn. If there is a 26% improvement in x and x was 'just good enough' before then the only 'rational' move by administration is to reduce other resources until x hits 'just good enough' again. That being said, I think the improvements are coming so rapidly in healthcare that we have a real chance of causing the entire system to shift into a new dynamic so maybe we will actually capture some of these gains for patients.
This takes place in Canada. There are no for-profit hospital complexes like in the USA. All of our major hospitals are non-profit, reimbursed by the single-payer healthcare system and philanthropists getting stuff named after them. The profit motive isn't as significant a factor here.
That being said, I'm fine with a reduction of resources if additional resources don't increase the quality of my care. In Canada, doctors don't really like to prescribe antibiotics for minor infections.
Americans find this bizarre, but for a minor infection antibiotics are going to screw up your stomach bacteria and long-term health to maybe treat a disease that your body can easily handle on its own.
There's no magic value that comes from allocating resources to a problem. Oftentimes spending money has zero or negative impact beyond virtue-signalling that you care about the problem.
Canadian hospitals have largely the same cost cutting and "efficiency" measures as their US equivalents. Departments have budgets that they have to fight for, fiefdoms compete for scraps, and there is an enormous and perpetually growing admin/executive side that is taking more and more of the budget. Couple this with governments such as Ontario's that "starve the beast", so to speak, forcing hospitals to squeeze further.
I don't think we should ever take any sort of superior position on this. The same motivations and outcomes occur.
Having said that, efficiency is good, especially with an aging population that will require more and more care. Resources are limited, so applying them in the most effective, efficient way possible is always a win.
American healthcare spending is 80% higher than Canada's on a per-capita basis, for worse or equal outcomes.[1]
Our system has major problems, but we spend less money and have a healthier population. That definitionally means we're more efficient.
> The same motivations and outcomes occur.
Our hospitals don't have shareholders that capture excess revenue as profit. Efficiency gains in a non-profit hospital typically get reinvested into the mission of providing healthcare. Efficiency gains in a for-profit hospital often go to the owners.
"Efficiency" is also measured differently in a non-profit context. A business measures monetary return on investment. A non-profit organization measures the monetary cost of achieving its mission.
Many for-profit hospitals in the United States offer free mental health clinics. These clinics have been accused of baiting patients into saying something suicidal as a tactic to involuntarily commit said patients.[2] Because appeals of an emergency mental health order are difficult, this is an extremely efficient way of making money (the hospital gets to bill the patient for their stay).
I don't believe this could happen in Canada. The goal is to get people out of the hospital because there aren't enough beds.
[1] https://en.wikipedia.org/wiki/Comparison_of_the_healthcare_s...
[2] https://www.buzzfeednews.com/article/rosalindadams/intake
I totally agree. The tie to whole patient outcome is stronger in that system. Still not perfect, but a lot more direct for sure. It may be an odd thing to say, but because of that there is an argument that the Canadian system is closer to a true free market healthcare system with the patient as the consumer than the US system.
I think you're missing an important part of the equation: it's outcome quality per amount paid. If you could have gotten 20% better results but it would mean tripling the costs of healthcare because we'd need to hire a lot more staff, perhaps we felt that was a bad deal.
If you can get 20% by paying... what, presumably <5% more for some ML tool that double-checks stuff and flags risky stuff... perhaps it's something we want to do.
No, my argument isn't that this wouldn't be used, it is that by using it there will be overage in quality of care above 'good enough' for the same or similar cost. That will result in the most expensive resources being reduced until quality of care is back to 'good enough' at less cost. It isn't a stretch to imagine that a tool like this would lead to a reduction in nursing staff since they can make rounds more effective and now don't need as many people to get the same level of quality job done.
But I think that's a wrong way to look at it. Or rather, it posits that we're at a point we truly consider good enough independent of cost.
It's entirely possible that we want better healthcare outcomes - all the historical trends point to that - but that we're more or less out of ideas how to get there on the cheap. This might be a new possibility.
In your model, why do we get improved, costlier insulin if the old thing was good enough? Because we actually want to pay more if it works better, and it doesn't mean we cut something else to make up for it. You just pay more in taxes in a subsidized model, or pay more at the pharmacy with private healthcare. There's a drug manufacturer profit motive in there, but it holds true in the added-cost ML scenario too.
I can agree that good enough is not tied to cost and that is likely unfortunate for the patient. It is instead tied to profit, for the company. If increasing the standard of care leads to more profit a rational company will do that. If it means lowering then they will do that. Unfortunately there aren't many actual direct ties between patient outcome and profit and often when they do exist they are negative for the patient. The classic example of this is the question of is it more profitable to cure or to manage a disease? I'd love it if whole life outcome was actually tied to profit in a way that was beneficial to the patient. That would mean a free market driven by the patient as the consumer could exist. But healthcare systems, especially in the US, generally aren't structured that way.
So, to answer your question about 'why do we get improved, costlier insulin if the old thing was good enough' it is because the healthcare system will make more money on it. If they take a % then they are incentivized to use a more expensive version and they can justify it with the word 'better' even if the person is actually worse off as their financial situation deteriorates and they and their families are forced to cut quality of life everywhere else. They put their line for good enough at the point that makes the most value for them, not the point that is best for the patient.
I think in this case it's unlikely because I don't think the problems the tool solves correspond 1-1 with reduced staffing or other resources. The tool mostly seems to provide ongoing diagnosis at a level of detail the clinical team doesn't have regular bandwidth for (they might make one diagnosis of the patient per time they visit the patient rather than on an ongoing basis). It doesn't really reduce the amount of time staff can spend with patients. They can't get rid of doctor diagnosis entirely so they can't really reduce time per patient in any effective way.
Starving the beast is an ongoing program, the budget will be cut (or fixed, hence silently cut through inflation) either way. My hope is that improvements like this will stave off the harmful effects of the budget cuts.
You realistically can’t starve the beast that is healthcare. The costs will go up disproportionately, and they do, in basically every advanced economy: https://en.m.wikipedia.org/wiki/Baumol_effect
While I agree that you shouldn't, and that the end goal (privatized health care) is at the same time more costly and less efficient, that doesn't mean people can't or don't.
The Baumol effect you link to only shows that wage demands from health care workers go up in proportion to the wages of other workers. This means (roughly speaking), that reducing the health care budget will reduce the effectiveness of your health care system, because you're able to afford fewer people (I think this is the point you're making, please correct me if I'm wrong).
But that's entirely the point of starving the beast! By cutting funding to some federal department, that department becomes less effective, which makes people think that the government is incapable of running said department, and makes them open to the idea of privatizing the department. Et voila, you've opened up a whole new market that can be exploited for profits! The holy grail is opening up a market with inelastic demand such as health care, where people, no matter what you charge, will be forced to buy your product. This program has been incredibly successful in the US, which can be seen by comparing their health care system to that of other wealthy nations.
You can reduce the spend per person by replacing more qualified workers with less qualified (cheaper) workers, and adding friction to the process of obtaining healthcare.
Increasing prior authorizations, increasing paperwork complexity, increasing hold times on the phone, obfuscation for who is responsible for what, constantly changing coverage so people have to change providers, and otherwise discourage them from seeking care.
This question has an implicit assumption that you're talking about a US-style health system and the incentives that exist in a system of that structure.
This is exactly why a structure like the UK NHS which is going for "what's the most healthcare I can get for the country with a fixed pot of money" is a better setup.
For instance, in the UK the female contraceptive pill is free to whoever wants it. Because that is a whole lot cheaper than extra (unwanted) pregnancies. Similarly the NHS has spent money on reducing smoking because that's cheaper than dealing with the health effects.
> the female contraceptive pill is free to whoever wants it. Because that is a whole lot cheaper than extra (unwanted) pregnancies.
Abundant contraception encourages and promotes promiscuity
> the NHS has spent money on reducing smoking because that's cheaper than dealing with the health effects.
Reducing tobacco usage makes more room for nicotine OTC and vaping to replace it. Among other stimulants.
What do you recommend? Just letting people die of cancer and ignore teenage pregnancies?
I don't think data supports your claim that tobacco use was merely redirected to other forms of nicotine. But even if it did, that's a success since they're less harmful.
I recommend that kids who go to the Bodega and pick up a pack of smokes, a tub of ice cream, and a packet of condoms, check Wikipedia for terms like "Helen of Troy", "Trojan Horse", and the eponymous war, lest their Y chromosomes combine with something in a way they thought was impossible.
> Reducing tobacco usage makes more room for nicotine OTC and vaping to replace it. Among other stimulants.
And? Nicotine itself is not particularly dangerous and might even be neuroprotective if consumed in moderation. Vaping as a consumption method might be problematic of course, but I don't think there is any research showing it to be even remotely as harmful?
The early death of smokers tends to save a long, expensive period of end-of-life care. I believe smoking deaths reduce health care costs, ironically enough.
It does, there is even a study on it. https://pubmed.ncbi.nlm.nih.gov/9321534/
Smokers also help keep pension/social security costs down, since they pay into it but don't collect out of it, or do so for a much shorter period.
> Because that is a whole lot cheaper than extra (unwanted) pregnancies.
From a nation which should know better after being so very thoroughly roasted by Mr. Swift some few years ago: https://www.gutenberg.org/files/1080/1080-h/1080-h.htm
Why are you putting ‘just good enough’ in quotation marks?
Even declaring that is the case doesn’t change that it’s still clearly a personal judgement depending on the individual.
Its not a personal judgement to say private equity buying up hospitals has shifted the priorities of the hospitals from care to profit.
"Depending on the individual" here means, depending if you're a share holder, or the patient dying on the cot.
Huh? How does this relate to value judgements made by individuals?
This scenario exists only in progressives' and HNers' heads. Companies make money and capitalism works by offering more services, not fewer. Are there companies that do short-term thinking? Yes. But overall, our standard of living and quality of services have always improved
That was a rational capitalist argument. If a company has an opportunity to make money, they will. Any better than 'good enough' isn't rational and the people running that company should be fired. In the long term the entire industry will slowly adopt this and the standard of care may rise slightly as these gains are used for competitive advantage instead of pure profit but that will take a while at best and relies on a true free market, which healthcare definitely isn't.
> relies on a true free market, which healthcare definitely isn't.
I think this is the part that people miss the most. When a purchasing decision is made based on something like "who has the best quality shoes in price range X", competition can occur.
When the buying decision is "will I live or die", there's not really any choice made there. Couple that with the complete lack of transparency about how much a given procedure will cost, and you've strayed so far away from a free market that it's not even recognizable.
I mean, even the hospital can't remotely accurately tell you how much something will cost before you actually get a bill...
>Any better than 'good enough' isn't rational and the people running that company should be fired.
This is kind of the reason the Japanese economy is stagnant and continues to fail in winning global marketshare. Businesses that are too good will fail or at least not compete with businesses that settle for being good enough.
> If a company has an opportunity to make money, they will. Any better than 'good enough' isn't rational and the people running that company should be fired.
...where "good enough" is relative to a particular level of quality and price point, of course. Otherwise there wouldn't be different markets for rich and poor people. And this mechanic helps avoid a "collapse into mediocrity" that you'd otherwise get if all goods and services were offered at a single price point.
The real problem is what you identified at the end, that healthcare isn't anything like a free market. There's no buyer mobility, no transparency as to the level of the service you're getting - heck, you don't even know how much you're going to pay in advance, unlike almost every other industry.
IME anything that looks vaguely like a cost center often has something vaguely resembling an acceptable service/quality level and people typically aim to achieve that with the lowest cost. It’s not at all uncommon to cut budgets/headcount when that goal is exceeded noticeably.
Unclear what "AI" brings to the table here. Sounds like traditional automation & monitoring could do the job here. No mention of how the model works, or what kind of training is involved.
> white blood cell count was "really, really high"
You don't need AI for this.
I wish they would provide a more compelling example.
It’s a regression model. You don’t “need” AI for anything. But using ML to identify thresholds for decision making is extremely useful.
I don’t like calling everything AI, but I’m even more irritated by people that don’t understand the value of simple ML models for low hanging fruit decisions like the one shown here
In AI applications, especially those involving predictive modeling, MARS can be used to improve the accuracy of predictions. For example, MARS models are used in time series forecasting, financial predictions, environmental modeling, and other domains where relationships between inputs and outputs are complex and non-linear. By adding time-awareness, the model can handle time-based data more effectively.
> Unclear what "AI" brings to the table here
A 26% reduction in unexpected deaths, apparently.
They used a bog standard statistical technique and called it AI to try to attract more funding...
It's the difference between "give the programmer this medical report and have them parse out the white blood cell count" versus s/programmer/AI/. And the same every time that the report changes in any way.
I've been that programmer more times than I can count. I'm much happier about being able to work on better problems instead than I am worried about AI taking away my rice bowl.
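Concretely, for anyone who hasn't lived it, the kind of brittle one-off being described looks like this (made-up report line; real formats vary by lab and by machine):

    import re

    report = "WBC 22.4 x10^9/L (ref 4.0 - 11.0)"

    # Works until the lab writes "Leukocytes" or "WBC: 22.4",
    # at which point it silently returns None.
    m = re.search(r"WBC\s+([\d.]+)", report)
    if m:
        print(float(m.group(1)))  # 22.4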
I think there is some element of "technology laundering" here that I saw during the blockchain hype. Even if plain ol' monitoring and automation could solve your problem no executives want to back that. If you say it's adding AI, blockchain, etc. they get to feel like a visionary so they'll fund your project
I am beyond tired of the "It made a decision on an if-statement, that's AI!"
Most modern AI does even less. It simply flows values through a graph. No decision is ever made. The consumer of the network interprets the result and makes a decision.
I heard AI is just if-statements, so perhaps the reverse is true.
> don't need AI for this
And you don’t need Dropbox for file sync. Machine learning makes integrating automation easier.
You do have to wonder though, if traditional automation could do the job, why wasn't that done already?
I think the real question is why is this being reported on. There are always medical advancements, but because this one gets chosen as a news story because "AI" in the headline gets clicks.
This isn't just a small advancement, though. It's a simple tool, which isn't restricted to medical specialists, with a huge impact.
If a study found that letting cats roam hospital hallways reduced unexpected deaths by 26%, I think that would be reported, too.
Because healthcare (and banking and ...) are horribly behind on tech. We have life-saving devices in hospitals still running Windows 95 as an OS. Also, the main problem in healthcare is misaligned incentives. As said elsewhere in this thread, this kind of tech will get adopted when it enables cost reductions larger than its costs.
Because tech people don't understand how healthcare systems work, and reciprocally healthcare workers have neither the education nor the time to understand new tech. The result is what you get today: people from both sides shouting at deaf ears on the internet. Also, the usual corporate culture issues.
Hot take: If tech people who are used to working with complex systems can't understand it, maybe it's time to replace the whole thing. The healthcare system doesn't make sense at all and is that way because of regulation and a bunch of other crap we need to get rid of/refactor.
This was traditional automation. They used bog-standard statistical techniques and called it AI for fundraising purposes.
The paper itself doesn't claim it's AI. They do say it's "machine learning".
Machine learning is indeed extremely good at pattern recognition, but I wouldn't trust an LLM to reliably identify patterns, especially in a medical context. As other commenters have said, this article is evidence of classical methods continuing to be useful.
This doesn't make sense on many levels. "Hospital IT" does not code the hospital EHR systems, just like the airport doesn't code flight management systems.
These are life-long software engineers, just like others reading this comment, using the best tools at their disposal to engineer lifesaving software. They're not using "regex" to develop algorithms for monitoring patients (???), and frankly that suggestion is so wild that one has to assume you don't know anything about algorithm design at all.
An LLM literally hallucinates incorrect answers by design and struggles to get extremely basic math and spelling correct.
You're welcome to put your literal life in the hands of a hallucinating English generator, but when it comes to healthcare, I want a "0% LLM" policy. LLMs will be the cheap things that offer substandard care to poor people, while the wealthy and elite enjoy personalized and human-centered care.
This sentence contains two diametrically opposed hypotheses.
LLMs and accuracy in one sentence, in the context of quantifying thresholds, is stunning.
LLMs don't have a concept of numerical accuracy.
Knowing what I know about workplace dynamics in hospitals I'm gonna go out on a limb and say that the "new hotness" factor of the term "AI" probably does a lot of heavy lifting here when it comes to getting buy in from management and users.
Forgoing a decade of income to get some letters beside your name selects for people who don't take orders from Clippy unless you market it well.
This is a great example of "classic AI" being more than good enough.
Using AI to find patterns in patients and intervene was something I worked on in my last job in Specialty Pharma. There are many red flags on patients long before they even start treatment; sadly, income is one of the largest red flags here in the States.
We were able to perform interventions earlier and improve outcomes with a simple regression model that tried to determine the number of missed doses.
^^^^^^^^^^^ THIS ^^^^^^^^^^^^
Medical professionals, mostly nurses, are spread extremely thin. They are so busy and/or jaded that they often neglect to show any compassion or empathy until they see somebody else doing it. Having a family member nearby also keeps them accountable.
I have seen it personally too many times.
It's incredible that this is needed.
Medicine isn't science, and it's frightening.
The weirdest thing I've experienced as a patient is that Physicians will urge you against second opinions or having multiple doctors.
Hope telemedicine becomes more mainstream, I'd like to avoid US physicians as much as possible.
Medicine isn't science because science is not as advanced as many would think. The lack of workplace integration is also a big factor.
I don't think we discourage second opinions, except maybe in some for-profit structures. The bad idea is to have multiple people making decisions in parallel. I'm not in the US, though.
Regarding advocacy, I don't think it's so crazy. It's very good to have a valid interlocutor when the patient is diminished. Also, hospitals are big systems with limited personalization. If someone's there to call out the system when it's trying to shoehorn too hard, it's also very good.
Enjoy the privilege of seeing multiple doctors as long as you still can. With steady cost reduction (AI, automation, less effort per patient) and the increase in medical authoritarianism ("expert said so"), that privilege is on thin ice. In the UK it's already normal to have a single area-designated doctor you're allowed to go to, and that doctor is also a gatekeeper who refers you to specialists. Hope he likes you! Beyond that, AI diagnosis would likely require an extensive online medical profile of you. Such e-med profiles obviously already exist in various countries, as opt-out features. In the name of cost reduction through automation, I'll go ahead and call it: these profiles will become mandatory over the next ten years. Either way, good luck getting a second opinion once a false diagnosis has ended up in your file or once AI continuously misidentifies a pattern present there.
I was semi-retired two years ago and decided to do an LPN program to work part time, do something physical, something that felt like a moral win and good for society.
I would have had no problem intellectually getting through the program but quit after the first night in a hospital.
Anyone sitting at a desk can not understand how tough and miserable a nursing job is. Everyone is basically miserable and stressed out. The work is completely thankless, disgusting and dangerous with personal liability on the line if you make a mistake. Everything that we take for granted in an office setting just doesn't apply in a medical setting.
I eventually just went back to a bullshit project management job, for more money than a nurse of course. This is obviously part of the problem.
It is easy to complain about the system when it is someone else who has to help grandma to the bathroom. There is no easy solution for any of this given the demographics. It is basically a disaster.
We have the term "GOFAI" to distinguish "modern" AI from the older stuff (big bag of if statements, behavior trees, etc.), but do we need a new term now to distinguish pre-LLM/diffusion models (neural networks and tree-based models)? Everyone thinks "ChatGPT" when they hear AI now, but surely this is something more like XGBoost or a neural network under the hood.
No, we don't use GOFAI, we call it machine learning. LLMs are a subset of the field, and if you want to refer to them just use the term LLM. We don't need new terms when we already have easy to use precise language.
Marketing will abuse any term they get their hands on, and certainly "AI" has been abused, but in the field it is usually the umbrella term for all areas of research into making "intelligent" behaviour. Be it expert systems, logic systems, machine learning, statistical machine learning, or otherwise.
Why all the need to distinguish it?
If you want the details, call it a regression model. If not, why insist on communicating the details?
Important to note that the timing of this means that it's dedicated, specific AI, not "throw a wrapper and a specific prompt in front of ChatGPT" AI. Of course it's all muddied now.
Test results are already reported from testing equipment with a value and an expected range (to account for a specific machine/reagent's calibration). Notifying when a value is out of range hardly seems like AI, but it certainly might be marketed as such.
Maybe there is some nuance for things like a patient in for liver issues, where their liver enzymes are expected to be abnormal, and identifying when a value is abnormal for them.
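A toy illustration of that "abnormal for them" nuance (my own sketch, not how CHARTwatch works): a fixed reference range alarms on every draw for a patient who always runs high, while a deviation-from-their-own-baseline check only fires when something actually changes:

    import statistics

    def fixed_flag(value, lo=4.0, hi=11.0):
        # Across-the-board reference range, same for every patient.
        return value < lo or value > hi

    def personal_flag(value, history, k=3.0):
        # Flag only values far outside this patient's own baseline.
        mu = statistics.mean(history)
        sd = statistics.stdev(history)
        return abs(value - mu) > k * sd

    history = [18.0, 19.5, 17.8, 18.6]   # this patient always runs high
    print(fixed_flag(19.0))              # True: alarms on every draw
    print(personal_flag(19.0, history))  # False: normal for this patient
    print(personal_flag(28.0, history))  # True: abnormal for them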
Yeah, I'm not sure how this qualifies as AI outside of marketing, but wanted to get ahead of the people whose opinions would be biased by the current en vogue LLMs.
There's a thriller plot hidden in here where the medicos ask an AI to reduce unexpected deaths so it manipulates both predictions and deaths to optimize the statistic. When they can manipulate the world we'll have to treat prompts as if they were wish fulfillment demands to a hostile djinn.
> That warning showed the patient's white blood cell count was "really, really high," recalled Bell, the clinical nurse educator for the hospital's general medicine program.
I’m not sure how an alarm for “high white cell count” should have had so much impact. Here in China once the doctor prescribes a finger blood test, we sample finger blood after lining up for 15 minutes, and the result is available within 30 minutes. The patient prints the results from a kiosk and any patient who cares enough about their own health will see the exceptionally high white cell count and request an urgent appointment with the doctor for diagnosis right away. Even in normal cases we usually have the doctor see the report within two hours. Why wait several hours?
> While the nursing team usually checked blood work around noon, the technology flagged incoming results several hours beforehand.
> But in health care, he stressed, these tools have immense potential to combat the staff shortages plaguing Canada's health-care system by supplementing traditional bedside care.
This sounds like the deaths prevented by this tech are caused by delays and staff shortage and what this tech does is to prioritize patients with serious issues? While I appreciate using new tools to cut deaths, it looks like the elephant in the room is staff shortage?
I don't know the details, but I suspect it's a bit more. It probably takes as input all of the factors over a time series and then determines that, based on these inputs over time, there is a higher likelihood of Y. When that likelihood reaches some threshold it sends an alert to the nurse. It's almost certainly not as simple as temperature at 105 -> alert (although a temp of 105 would certainly signal a problem).
Closing hospitals would cut deaths in hospitals by 100%.
Like I'm not sure what this measure means, it's not like 26% of people that would die in the hospital would be made immortal or something.
These studies are the only way AI will be implemented in medicine.
This stuff will not happen because it's good technology that can save lives, but rather because of public pressure from AI performing better at saving lives than humans.
The anecdotes of 'oh, it was wrong that one time' will pale in comparison to the successes. Maybe insurance companies will be the winners and be our advocates. I've already seen medical professionals use 'that one time it was wrong' as a way to ignore technology.
I am thankful that we can expect fewer unexpected events now. Today I was saved from death at least five times because an automated traffic signal alerted me to machines hurtling dangerously in the wrong direction. I was able to push commands that halted my conveyance until the risk of death had plummeted.
I am also reminded of Dilbert's PHB decreeing that all future unplanned outages must be announced at least 48 hours in advance.
“While the nursing team usually checked blood work around noon, the technology flagged incoming results several hours beforehand”
So the blood was collected and labs done but it wasn’t scheduled to be reviewed until later?
Seems like a win-win. For those saying you don’t need AI, the alternative would be either across-the-board thresholds for flags for each line item (too many false positives) or manually setting it for each patient (too intensive).
> Across the board thresholds is exactly what we usually have. I'm not so sure about the false positives being so high
The article would have been stronger with those numbers. But I wouldn’t be convinced that a high WBC count for an average ER visitor would have been sensitive enough to trigger an alarm. The prior knowledge that it’s a cat bite is important.
Stuff like this is not exactly new, but it's great that it's achieving the desired outcomes. The company I work for developed a sepsis alert back in 2010 that helped alert clinicians to possible sepsis in patients by analyzing lab results. Lots of success stories, but of course false positives too. Tools like this are very useful when they are one of many factors driving a clinician's decision and not the only reason.
I don't think it is a weasel word. It is just a qualification.
Nobody really expects AI to save terminal cancer patients or 90-y.o. cardiacs. Unexpected deaths, on the other hand, are really nasty, both for the next of kin and the doctors themselves. If an apparently viable patient suddenly drops dead, everyone asks what went wrong.
Reducing such deaths by one fourth is a good job.
"While the nursing team usually checked blood work around noon, the technology flagged incoming results several hours beforehand"
So they're understaffed and could just look into the results more. Oh wow, what a use of computational power.
If it keeps me alive, I don't care if the hospital I'm in "should" have just hired a lot more people.
> So they're understaffed and could just look into the results more, oh wow what an use of computational power.
How is this not a good use of compute?
What is an example of a computer that doesn't employ AI, by this definition?
The way I see it, AI is about the tasks a system handles, rather than the computer itself. I would say that AI encompasses the set of tasks where the computer system is in some way better than its user not just by having access to more computational resources, but by actually "reasoning" better. As a simple example, I'd argue that a basic spell-checker which works via dictionary lookup doesn't employ AI, but an extensive modern grammar checker does, as it can "reason" about the English language better than I (and most people) can.
Another way of thinking about it is that non-AI systems must always perform a task correctly, or we'd say that they have a bug. Conversely, an AI system performs tasks in situations where there is some measure of uncertainty or subjectivity, and they might arrive at a way of performing the task that is suboptimal, or even entirely inappropriate, without being buggy - for these systems we'd say that they did their best given the circumstances.
In the case of this hospital study, if they had used a simple "beep if measure goes above X" system, that wouldn't have been AI, but they used an ML model which integrates many interdependent factors over time [0] and while it has a significant ratio of false positive triggers (and as such is often wrong), it applies what would absolutely count as "reasoning" in trained human nurses.
[0] "The deterioration prediction model was a time-aware multivariate adaptive regression spline (MARS) model (Appendix, Sections 1–4). The model is made time-aware by incorporating risk score predictions from earlier in the encounter, the change in risk score since the previous assessment, and summaries of changes in the risk score over time." https://www.cmaj.ca/content/196/30/E1027
In Canada they're dangerously understaffed. And the staff are burnt out and lack qualities like: attention to detail, common sense and basic human empathy. Or at least I hope it's because they're burnt out and not because the hospital regularly hires amoral robots, which is also a possibility.
Either you go with America and get bankrupted by medical care if you don’t have excellent insurance, or you go to Canada or Europe where the average doctor is paid 1/3rd as much and there are significant waiting periods for non immediately necessary procedures. Heads I lose, tails you win.
People wonder why folks hate doctors or get “white coat” syndrome. Same shit from dentists wondering why everyone hates them.
> Europe where the average doctor is paid 1/3rd as much and there are significant waiting periods for non immediately necessary procedures
I'm not sure what exactly you evaluated this based on (personal experience, I suppose?), but this hasn't been true for me in Spain with either public healthcare or private. I don't remember it being like that in Sweden (public healthcare) either, and I'm sure there are plenty of other European countries where the waiting time isn't significant and you also get great care.
Some countries seem to have just figured out how to make healthcare costs manageable, with great care, well-educated doctors/nurses and also relatively low waiting times. I'd probably still say they're underpaid, because they're literally saving people's lives, but I guess that's true everywhere, even the US.
Canadian doctors earn decently well:
https://www.dr-bill.ca/blog/career-advice/doctor-salary-us-v...
Sure, there's the exchange rate, but it's still quite good. The disparity for tech workers is much greater.
I think they (doctors here) have other concerns more about regulation / paperwork and overhead that comes with it, less than total compensation. Family doctors anyways.
That and the schools simply won't graduate enough of them. Doctor shortage is a serious problem. But so is nurse shortage post-COVID.
System here seems to be in crisis. Combination of many factors.
But all my experiences in the last few years have been... very positive? Excellent recent care for my teen at McMaster Children's Hospital. Family doctor 5 minute drive away, can get appointments quite quickly. So, yeah, it's regional and situation dependent.
We could view the patient as a process under control, with all the sensors we have, and simply apply process control technology to that without waiting for a human to interpret the data many hours after it's relevant.
I've been there. They take the physical samples to minimize patient sleep.
Otherwise, you're hooked up to monitoring equipment...
I mean... this is a perfectly legitimate use of computational power? What is the downside?
I suppose there is a risk they will downsize more. But this is like thinking cameras were bad because they reduced the number of security guards needed to secure an area. No?
Well, calling this AI seems like a long shot. What seems causal here is 'warn early', and indeed I'm sure it would work even better if you outputted a warning displayed full screen on the nurse's phone. It's quite possible you could have the same effect with trivial thresholds instead of a stat model. Still, I'd say it's indeed a good use of computers in general to produce targeted warnings.
Oh, fair. To an extent, at least. If they had said these were ML processed samples, would you balk as hard at it?
That is, I'm willing to chalk up use of "AI" as a descriptor being an editorial choice. Agreed that it isn't impressive just because it is AI, but it does still seem to be a good use of computational power.
I don’t like relative risk and relative risk reduction because it tends to overestimate the effectiveness of the intervention.
In this case, the absolute risks of death in GIM pre-intervention and GIM post-intervention are 0.0215 (2.15%) and 0.0146 (1.46%), with an absolute risk reduction of 0.0069 (0.69%).
While the relative risk reduction is 26% across the pre- and post-intervention periods, the absolute risk reduction is only 0.69%, with an NNT (number needed to treat) of 156, i.e. 1 patient in 156 was helped by this intervention.
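The plug-and-chug, for anyone checking at home (using the unadjusted rates quoted above; the paper's adjusted estimates, which presumably produce the 26% and 156 figures, come out slightly different):

    pre, post = 0.0215, 0.0146   # GIM death rate, pre- vs post-intervention

    arr = pre - post             # absolute risk reduction: 0.0069
    rrr = arr / pre              # relative risk reduction: ~32% unadjusted
    nnt = 1 / arr                # number needed to treat: ~145 unadjusted

    print(f"ARR {arr:.2%}, RRR {rrr:.0%}, NNT {nnt:.0f}")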
In addition, they had 2 false alarms for each true alarm, which could suggest that interventions were performed on patients who did not require them — more tests, medications and possibly increased risk from said interventions.
This shows that the CHARTwatch ML/AI is not helping all that much clinically.