Beyond sensor data: Foundation models of behavioral data from wearables (arxiv.org)
186 points by brandonb 8 hours ago | 38 comments




I worked on one of the first wearable foundation models in 2018. The innovation of this 2025 paper from Apple is moving up to a higher level of abstraction: instead of training on raw sensor data (PPG, accelerometer), it trains on a timeseries of behavioral biomarkers derived from that data (e.g., HRV, resting heart rate, and so on).

They find high accuracy in detecting many conditions: diabetes (83%), heart failure (90%), sleep apnea (85%), etc.
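To make the abstraction concrete, here's a rough sketch of the input-shape difference (all names and dimensions are illustrative, not taken from the paper):

    import numpy as np

    # Raw sensor stream: PPG sampled at ~64 Hz for one week
    raw_ppg = np.random.randn(7 * 24 * 3600 * 64)   # ~39M samples

    # Behavioral biomarker timeseries: one row per day, one column
    # per derived metric (HRV, resting HR, step count, ...)
    biomarkers = np.random.randn(7, 16)             # ~100 values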


Had the phrase "foundation model" become a term of art yet?

By 2018, the concept was definitely in the air since you had GPT-1 (2018) and BERT (2018). You could argue even Word2Vec (2013) had the core concept of pre-training on an unsupervised or self-supervised objective leading to performance on a downstream semantic task. However, the phrase "foundation model" wasn't coined until 2021, to my knowledge, in the Stanford report "On the Opportunities and Risks of Foundation Models".

Insurance and health insurance companies must be super interested in this research and its applications.

I'm sure they're also interested in the data. Imagine raising premiums based on conditions they detect from your wearables. That's why it's of utmost importance to secure biometric data.

At least in the US, health insurers can’t raise rates or deny coverage based on pre-existing conditions. That was a major part of the Affordable Care Act.

The ACA will not survive the next couple of years.

Reminds me of Jim Simons of Renaissance's advice when it comes to data science: sort first, then regress.


The guy was sorting X separately from y? That can't be real.

"Nothing is foolproof to a sufficiently talented fool"

Not every day you find a pseudo-permutation in the wild!
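For anyone who missed the joke: sorting X and y independently destroys the pairing and manufactures correlation out of pure noise. A quick demonstration:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=1000)
    y = rng.normal(size=1000)                        # independent of x

    print(np.corrcoef(x, y)[0, 1])                   # ~0.0, as expected

    # Sorting each array *separately* replaces the real pairing with a
    # rank-for-rank match, fabricating a near-perfect correlation:
    print(np.corrcoef(np.sort(x), np.sort(y))[0, 1]) # ~0.99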

Is anyone else surprised by how poorly the results perform in the vast majority of cases? The foundation model, which had access to sensor data and behavioral biomarkers, actually _underperformed_ the baseline predictor that just uses nonspecific demographic data in almost 10 areas.

In fact, even when the wearable foundation model was better, it was only marginally better.

I was expecting much more dramatic improvements with such rich data available.


I worked with similar data in grad school. I'm not surprised. You can have a lot of data, but sometimes the signal (or signal quality) just isn't present in that haystack, and there's nothing you can do about it.

Sometimes you just have to use ultrasound or MRI or stick a camera in the body, because everything else might as well be reading tea leaves, and people generally demand very high accuracy when it comes to their health.


Cool way of integrating the two approaches. For those on mobile, I created an infographic that's a bit more accessible: https://studyvisuals.com/artificial-intelligence/beyond-sens...

I love this because I build in medtech, but the big problem is that there are no open weights and no open data.

You can export your own Apple Health XML data for personal use and processing, but if you want to create an application that requests that XML data from users, it likely crosses into clinical research territory, with data security policy requirements and de-identification needs.


What is the best way for non-big-tech players to buy such data for research and product development?

Some are free:

- aidlab.com/datasets

- physionet.org


Thanks for sharing. I also found a wearable dataset of ~1k users: https://cseweb.ucsd.edu/~jmcauley/datasets/fitrec.html

Entrusting your health data to AI bros is... extremely ill-advised.

I don't even trust Apple themselves, who will sell your health data to any insurance company any minute now.


What do you base that suspicion on?

Nature abhors an unexploited resource.

Thanks for posting this. This looks promising...

I have about 3-3.5 years worth of Apple Health + Fitness data (via my Apple Watch) encompassing daily walks / workouts / runs / HIIT / weight + BMI / etc. I started collecting this religiously during the pandemic.

The exported Fitness data is ~3.5 GB.

I'm looking to do some longitudinal analysis, for my own purposes first, to see how certain indicators have evolved.

Has anyone done something similar? Perhaps in R, Python? Would love to do some tinkering. Any pointers appreciated!

Thanks!!
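If anyone wants a starting point: the export unzips to a single export.xml full of <Record> elements. A minimal sketch in Python, assuming the standard export layout (stream-parse rather than load the file whole, since exports run to gigabytes):

    import xml.etree.ElementTree as ET
    import pandas as pd

    WANTED = "HKQuantityTypeIdentifierRestingHeartRate"
    rows = []
    for _, elem in ET.iterparse("export.xml", events=("end",)):
        if elem.tag == "Record" and elem.get("type") == WANTED:
            rows.append((elem.get("startDate"), float(elem.get("value"))))
        elem.clear()  # release each element so memory stays flat

    df = pd.DataFrame(rows, columns=["date", "resting_hr"])
    df["date"] = pd.to_datetime(df["date"])
    print(df.set_index("date")["resting_hr"].resample("W").mean())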


FWIW, we're working on something similar (you wouldn't necessarily need to write R or Python). Feel free to email me at bmb@empirical.health and I can add you to a beta once we have it ready!

Thanks, I'll reach out.

I am curious to do my own analysis, for two main reasons:

- some data is confidential (I'd hate for it to leave my devices)

- wanna DIY / learn / iterate

Will ping you in any case. Thanks


It might actually be worth writing your analysis in Swift with the actual HealthKit API and visualization libraries.

Bonus: when you’re done, you’ll have an app you can sell.


:thumbs_up.gif:

My sentiments, exactly.

Though I'm looking to scratch my own itch for now...


Interesting to see contrastive loss instead of a reconstruction loss.
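For readers unfamiliar with the distinction: instead of reconstructing the input, a contrastive objective pulls embeddings of paired views together and pushes all other pairs apart. The generic InfoNCE shape (just the standard formulation, not necessarily the paper's exact objective):

    import torch
    import torch.nn.functional as F

    def info_nce(z1, z2, temperature=0.1):
        """Row i of z1 and row i of z2 are two views of the same subject."""
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.T / temperature        # (B, B) cosine similarities
        targets = torch.arange(z1.size(0))      # positives on the diagonal
        return F.cross_entropy(logits, targets)

    loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))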

Has anyone seen the publishing of the weights or even an API release?

In the paper, they say they can't release the weights due to terms of consent with study participants (this is from the Apple Heart and Movement study).

Is there a way to run this on your own data? I’ve been wearing my Apple Watch for years and would love to be able to use it better.

Not yet -- this one is just a research study. Some of their previous research has made it into product features.

For example, Apple Watch VO2Max (cardio fitness) is based on a deep neural network published in 2023: https://www.empirical.health/blog/how-apple-watch-cardio-fit...


Apple has been reporting VO2max for a very long time (well before 2023). I wonder what the accuracy was back then? Maybe they should offer users the option to re-compute those past numbers with the latest and greatest algorithm.

Apple's VO2Max measures are not based upon that deep neural network work, and Empirical seems to be conflating a few things. And FWIW, just finding the actual paper is almost impossible, as that same site has SEO-bombed Google so thoroughly that you end up in a circular-reference Empirical world where all of their pages cite each other as authorities.

Apple and Columbia did recently collaborate on a heart rate response model -- one which can be downloaded and trialed -- but that was not related to the development of their VO2Max calculations.

Apple is very secretive about how they calculate VO2Max, but it is likely a pretty simple calculation (e.g., how much your heart rate responds relative to the level of activity inferred from your motion and method of exercise). The most detail they provide is in https://www.apple.com/healthcare/docs/site/Using_Apple_Watch..., which is mostly a validation that it provides decent enough accuracy.
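For a sense of how far trivial algebra can get you, one published heart-rate-ratio estimate (Uth et al., 2004; an illustration only, not Apple's method) is literally a one-liner:

    def vo2max_hr_ratio(hr_max, hr_rest):
        """Uth et al. (2004) estimate in ml/kg/min: 15.3 * HRmax / HRrest.
        Illustrative only; not Apple's algorithm."""
        return 15.3 * hr_max / hr_rest

    print(vo2max_hr_ratio(hr_max=185, hr_rest=55))   # ~51.5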


What’s your source on Apple not using the neural network for VO2Max estimation? They’ve been using on-device neural networks for various biomarkers for several years now (even for seemingly simple metrics like heart rate).

FWIW, the article above links directly to both the paper and a GitHub repo with PyTorch code.


>FWIW, the article above links directly to both the paper and a GitHub repo with PyTorch code.

Neat, though the paper and the GitHub repo have nothing to do with Apple's VO2Max estimations. It's related to health, and touches on VO2Max and health sensors, but the only source claiming any association at all is that Empirical site. And given that this research came out literally years after Apple added VO2Max estimates to their health metrics, it seems pretty conclusive that it is not the source of Apple's calculations. It's neat research on predicting heart rate response to activity (which might come into play for filling in measurement gaps during activity when a device isn't tight enough, etc.).

>What’s your source on Apple not using the neural network for VO2Max estimation?

You're asking me to prove a negative. Apple never claims to use any complex math or deep neural networks to derive VO2Max, and from my own observations of the estimates it gives me, it seems remarkably trivial.

Trivial can still be accurate, but it hardly seems complex. Like, guess people's A1c based upon age, body fat percentage, and demographics, and you'll likely be high-90s-percent accurate with trivial algebra.

>even for seemingly simple metrics like heart rate

Deriving heart rate from green light imperfectly reflecting off skin, watching for tiny variations in colour change, is actually super complex! Doing it accurately is pretty difficult, which is why wearable accuracy is all over the place, though Apple has been one of the leaders for years. Guessing a number based upon HR and activity level isn't nearly as complex.
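To make that concrete: the textbook version of PPG heart-rate extraction is just peak counting, as in the sketch below; all of the real difficulty is that wrist PPG in the wild is nowhere near this clean (motion artifacts, perfusion, ambient light, fit):

    import numpy as np
    from scipy.signal import find_peaks

    fs = 64                                    # Hz, a typical wearable PPG rate
    t = np.arange(0, 10, 1 / fs)
    ppg = np.sin(2 * np.pi * 1.2 * t) + 0.2 * np.random.randn(t.size)  # ~72 bpm

    # Naive beat detection: find pulse peaks, convert their spacing to bpm.
    peaks, _ = find_peaks(ppg, distance=0.4 * fs, prominence=0.5)
    bpm = 60 * (len(peaks) - 1) / (t[peaks[-1]] - t[peaks[0]])
    print(round(bpm))                          # ~72 on this synthetic signal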


Can someone explain what "wearable foundation" means?

It's a "foundation model" for wearable devices. So "wearable" describes where it is used, rather than modifying "foundation".


