Beyond sensor data: Foundation models of behavioral data from wearables
(arxiv.org)
229 points by brandonb 3 days ago
What is an "accuracy" of 83%? Do 83% of predicted diabetes cases actually have diabetes? Or did 83% of those who have diabetes get diagnosed as such? It's about precision vs. recall. You can improve one by sacrificing the other. Boiling it down to one number is hard.
Ah, thanks for the pointer!
https://en.m.wikipedia.org/wiki/Receiver_operating_character...
So, 83% is actually not that great, given that you can achieve 50% by guessing randomly.
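To make the distinction concrete, here's a toy sklearn sketch (invented data and thresholds, nothing from the paper): precision and recall trade off as you slide the decision threshold, while ROC AUC is a single threshold-free number where 0.5 really is random guessing.

    # Toy illustration: the same classifier looks very different
    # depending on which single number you report.
    import numpy as np
    from sklearn.metrics import precision_score, recall_score, roc_auc_score

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=1000)          # ground-truth labels
    scores = 0.3 * y_true + 0.7 * rng.random(1000)  # noisy "predicted risk"

    for t in (0.3, 0.5, 0.7):
        y_pred = (scores >= t).astype(int)
        print(f"threshold={t}: precision={precision_score(y_true, y_pred):.2f}, "
              f"recall={recall_score(y_true, y_pred):.2f}")

    # AUC is threshold-free: 0.5 means random ranking, 1.0 means perfect.
    print(f"AUC={roc_auc_score(y_true, scores):.2f}")

Raising the threshold pushes precision up and recall down; only the AUC stays put.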
Had the phrase "foundation model" become a term of art yet?
By 2018, the concept was definitely in the air since you had GPT-1 (2018) and BERT (2018). You could argue even Word2Vec (2013) had the core concept of pre-training on an unsupervised or self-supervised objective leading to performance on a downstream semantic task. However, the phrase "foundation model" wasn't coined until 2021, to my knowledge.
I guess I just find the whole "foundation model" phrasing to be designed to pat the backs of the "winners," who would of course be those with the most money. I'm sure there are foundation models from groups that aren't e.g. OpenAI, but the origins felt egotistical, and asserting that you made one prior to the phrase's inception only feels more so.
Had you merely called it an early instance of pretraining, I'd be fine with it.
Reminds me of Jim Simons of Renaissance's advice when it comes to data science: sort first, then regress.
Not sort in the literal sense, right?
https://stats.stackexchange.com/questions/185507/what-happen...
The guy was sorting X separately from y? That can't be real.
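For anyone who doesn't want to click through: sorting X and y independently destroys the pairing between them, and because two sorted sequences are both monotone, the regression afterwards looks spectacular even on pure noise. A quick numpy demo (synthetic data, obviously):

    import numpy as np

    rng = np.random.default_rng(42)
    x = rng.normal(size=1000)
    y = rng.normal(size=1000)   # completely unrelated to x

    print(np.corrcoef(x, y)[0, 1])                    # ~0: no relationship
    print(np.corrcoef(np.sort(x), np.sort(y))[0, 1])  # ~1: a mirage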
Is anyone else surprised by how poor the results are for the vast majority of cases? The foundation model, which had access to sensor data and behavioral biomarkers, actually _underperformed_ the baseline predictor that just uses nonspecific demographic data in almost 10 areas.
In fact, even when the wearable foundation model was better, it was only marginally better.
I was expecting much more dramatic improvements with such rich data available.
I wonder how much of that is driven by poorly performing behavioral models. There was an HN article from a few weeks back about a model that only had about 70% accuracy at determining whether someone was awake or asleep. I would guess that the secondary behavioral biomarkers used here (like cardiovascular fitness) are much harder to predict from raw sensor data than sleep/wake status.
I worked with similar data in grad school. I'm not surprised. You can have a lot of data, but sometimes the signal (or signal quality) just isn't present in that haystack, and there's nothing you can do about it.
Sometimes you just have to use ultrasound or MRI or stick a camera in the body, because everything else might as well be reading tea leaves, and people generally demand very high accuracy when it comes to their health.
Cool way of integrating the two approaches. For those on mobile, I created an infographic that's a bit more accessible: https://studyvisuals.com/artificial-intelligence/beyond-sens...
I love this because I build in medtech, but the big problem is that there are no open weights and no open data.
You can export your own Apple Health XML data for usage and processing, but if you want to build an application that requests that data from users, it likely crosses into clinical research territory, with data security policy requirements and de-identification needs.
What is the best way for non-big-tech companies to buy such data for research and product development?
Thanks for sharing. I also found a wearable dataset of ~1k users: https://cseweb.ucsd.edu/~jmcauley/datasets/fitrec.html
Trusting AI bros with your health data is... extremely ill-advised.
I don't even trust Apple themselves, who will sell your health data to any insurance company any minute now.
They might not sell "your" data outright, but it doesn't mean they won't sell inferences/assumptions that they make about you using your data.
The reality is that no matter how ethical the company you trust with that data is, you're still one hack or pissed off employee away from having that data leaked, and all of that data is freely up for grabs to the state (whose 3 letter agencies are likely collecting it wholesale) and open to subpoena in a lawsuit.
Thanks for posting this. This looks promising...
I have about 3-3.5 years' worth of Apple Health + Fitness data (via my Apple Watch) encompassing daily walks / workouts / runs / HIIT / weight + BMI / etc. I started collecting this religiously during the pandemic.
The exported Fitness data is ~3.5GB
I'm looking to do some longitudinal analysis - for my own purposes first, to see how certain indicators have evolved.
Has anyone done something similar? Perhaps in R, Python? Would love to do some tinkering. Any pointers appreciated!
Thanks!!
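Not a full analysis, but a starting point: the export is a single export.xml full of <Record> elements, and streaming it with iterparse keeps memory flat even at 3.5GB. A minimal Python sketch; the record type identifiers and monthly resampling are just examples, so grep your own export for the types you actually care about:

    import xml.etree.ElementTree as ET
    import pandas as pd

    WANTED = {"HKQuantityTypeIdentifierRestingHeartRate",
              "HKQuantityTypeIdentifierVO2Max"}

    rows = []
    for _, elem in ET.iterparse("export.xml", events=("end",)):
        if elem.tag == "Record" and elem.get("type") in WANTED:
            rows.append({"type": elem.get("type"),
                         "start": elem.get("startDate"),
                         "value": float(elem.get("value")),
                         "unit": elem.get("unit")})
        elem.clear()  # discard each element once processed to keep memory flat

    df = pd.DataFrame(rows)
    df["start"] = pd.to_datetime(df["start"], utc=True)  # offsets vary with DST
    rhr = df[df["type"] == "HKQuantityTypeIdentifierRestingHeartRate"]
    print(rhr.set_index("start")["value"].resample("MS").mean())  # monthly trend

From there pandas/matplotlib gets you the longitudinal plots; R with xml2 + dplyr works the same way.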
It might actually be worth writing your analysis in Swift with the actual HealthKit API and visualization libraries.
Bonus: when you’re done, you’ll have an app you can sell.
Not yet -- this one is just a research study. Some of their previous research has made it into product features.
For example, Apple Watch VO2Max (cardio fitness) is based on a deep neural network published in 2023: https://www.empirical.health/blog/how-apple-watch-cardio-fit...
Apple's VO2Max measures are not based upon that deep neural network development, and Empirical seems to be conflating a few things. And FWIW, just finding the actual paper is almost impossible, as that same site has SEO-bombed Google so thoroughly that you end up in the circular-reference Empirical world where all of their pages reference each other as authorities.
Apple and Columbia did recently collaborate on a heart rate response model -- one which can be downloaded and trialed -- but that was not related to the development of their VO2Max calculations.
Apple is very secretive about how it calculates VO2Max, but it's likely a fairly simple calculation (e.g., how much your heart rate responds relative to the activity level inferred from your motion and method of exercise). The most detail they provide is in https://www.apple.com/healthcare/docs/site/Using_Apple_Watch..., which is mostly a validation that it provides decent enough accuracy.
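To give a sense of how simple such an estimate can be (to be clear: this is a published formula from the literature, not Apple's actual method), the Uth et al. (2004) heart-rate-ratio estimate needs nothing but resting and maximum heart rate:

    def vo2max_hr_ratio(hr_max: float, hr_rest: float) -> float:
        """Uth et al. (2004): VO2max ~= 15.3 * HRmax / HRrest, in ml/kg/min.
        A population-level approximation, shown only to illustrate how far
        a simple heart-rate ratio can get you; not Apple's algorithm."""
        return 15.3 * hr_max / hr_rest

    print(vo2max_hr_ratio(hr_max=185, hr_rest=55))  # ~51.5 ml/kg/min

Anything Apple adds on top (motion-inferred workload, exercise type) is refinement of the same basic heart-rate-response idea.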
What’s your source on Apple not using the neural network for VO2Max estimation? They’ve been using on-device neural networks for various biomarkers for several years now (even for seemingly simple metrics like heart rate).
FWIW, the article above links directly to both the paper and a GitHub repo with PyTorch code.
Apple was reporting VO2max for a very long time (well before 2023). I wonder what the accuracy was back then? Maybe they should offer users the option to re-compute those past numbers with the latest and greatest algorithm.
Has anyone seen the publishing of the weights or even an API release?
It's a "Foundation Model" for wearable devices. So "wearable" describes where it is to be used, rather than describing "foundation".
I worked on one of the first wearable foundation models in 2018. The innovation of this 2025 paper from Apple is moving up to a higher level of abstraction: instead of training on raw sensor data (PPG, accelerometer), it trains on a timeseries of behavioral biomarkers derived from that data (e.g., HRV, resting heart rate, and so on).
They find high accuracy in detecting many conditions: diabetes (83%), heart failure (90%), sleep apnea (85%), etc.
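To sketch what that recipe looks like in code (a hypothetical toy, not the paper's architecture; every name and size below is made up): pretrain an encoder on sequences of derived biomarkers with a masked-reconstruction objective, then attach a small head for the downstream label.

    # Hypothetical sketch of the general recipe, NOT the paper's model:
    # mask random timesteps of a biomarker timeseries, train an encoder
    # to reconstruct them, then fine-tune a tiny head on disease labels.
    import torch
    import torch.nn as nn

    N_BIOMARKERS = 27  # made-up channel count: HRV, resting HR, steps, ...
    SEQ_LEN = 96       # made-up: e.g., one vector per day for ~3 months

    class BiomarkerEncoder(nn.Module):
        def __init__(self, d_model=128, n_layers=4):
            super().__init__()
            self.embed = nn.Linear(N_BIOMARKERS, d_model)
            self.pos = nn.Parameter(torch.zeros(1, SEQ_LEN, d_model))
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
            self.reconstruct = nn.Linear(d_model, N_BIOMARKERS)

        def forward(self, x, mask):
            # x: (batch, SEQ_LEN, N_BIOMARKERS); mask: True = hidden timestep
            h = self.embed(x.masked_fill(mask.unsqueeze(-1), 0.0)) + self.pos
            h = self.encoder(h)
            return self.reconstruct(h), h

    model = BiomarkerEncoder()
    x = torch.randn(8, SEQ_LEN, N_BIOMARKERS)   # fake batch of biomarker series
    mask = torch.rand(8, SEQ_LEN) < 0.25        # hide 25% of timesteps
    recon, h = model(x, mask)
    loss = (recon - x)[mask].pow(2).mean()      # score only the hidden steps
    loss.backward()

    # Downstream: pool the embeddings and train a small head on labels.
    head = nn.Linear(128, 1)
    disease_logits = head(h.mean(dim=1).detach())   # (8, 1)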