A few weekends ago, a friend challenged me to take more steps than he did. Of course, I won 😉 But I noticed he was wearing his activity tracker on his wrist while I was wearing mine on my waist. As I had noticed several times before, wrist-worn trackers tend to register some movement even when you don't actually walk (while typing energetically on the computer or while driving, for instance).
So I took the opportunity of a small trip to wear two activity trackers: a Fitbit One on my waist and a Fitbit Charge HR on my wrist. Before going to bed, I noted down how many steps each of them had detected (note that, except for the initial time synchronization, I didn't add the Charge to the Fitbit app, since I didn't know exactly how the app/service would reconcile data collected concurrently).
The dataset is available on GitHub. Feel free to use and re-use it. Unfortunately, it contains only 7 days of data (I lost my One on the 8th day, running through the airport on my trip back home trying to catch my connecting flight). And, as you can see, it was not a very physically active week (I had to juggle work in a different time zone and family — first world problem, I know).
From this, it looks like the Charge HR is quasi-systematically capturing more movement than the One. This reinforces the idea that wearing your activity tracker on your wrist captures more movement than wearing it on your waist.
Both activity trackers can also count the stairs you climb. Below you can see that the difference is less obvious there (the Charge — wrist — counting fewer stairs at the beginning and then more at the end than the One — waist).
Of course, this is a very small dataset (the accompanying R code to generate these graphs is also available on GitHub). So I wondered whether anybody had studied this more rigorously and published their data. In a very short, non-systematic literature review on PubMed, I found three publications of interest.
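For readers who want to reproduce this kind of comparison without the R code, here is a minimal Python sketch of the same idea. The step counts below are made up for illustration (the real numbers are in the dataset on GitHub); the logic is just a day-by-day difference between the wrist-worn and waist-worn counts.

```python
# Hypothetical daily step counts for a 7-day trip (illustrative only,
# not the values from the actual dataset)
waist_steps = [6500, 7200, 5800, 8100, 6900, 7400, 6300]  # Fitbit One (waist)
wrist_steps = [7100, 7900, 6400, 8800, 7600, 8200, 6900]  # Fitbit Charge HR (wrist)

# Daily difference (wrist minus waist); a positive value means the
# wrist-worn tracker recorded more steps that day
daily_diff = [w - o for w, o in zip(wrist_steps, waist_steps)]
days_wrist_higher = sum(d > 0 for d in daily_diff)

print(daily_diff)
print(f"Wrist-worn tracker higher on {days_wrist_higher} of {len(daily_diff)} days")
```

With these made-up numbers, the wrist-worn device comes out ahead every single day, which is the "quasi-systematic" pattern described above.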
The newest paper (Loprinzi and Smith, 2017) mentions a difference between waist- and wrist-mounted activity devices, but the abstract doesn't say how large the difference is (the full paper is paywalled). It is interesting to note that they compared 1) the same device (an Actigraph GT9x) worn at different locations (which eliminates a potential bias between devices) and 2) activities where subjects were "forced" to walk/run on a treadmill in lab conditions versus activities where subjects were allowed to move freely. The second point matters because it differentiates between "real" steps (on a treadmill) and potentially additional, non-step-related activity (in free-living conditions).
Also using a single device model, Tudor-Locke et al. (2015) found a difference between waist and wrist attachment sites too. They concluded that the "wrist attachment site detected consistently fewer visually counted steps than the waist attachment site at most treadmill speeds during laboratory testing. In contrast, the wrist attachment site produced a higher average step count […] than the waist attachment site under free-living conditions."
The third paper (Gomersall et al., 2016) came to the same conclusion, comparing a Fitbit One, a Jawbone Up and an Actigraph device: the Up (wrist-worn) overestimated the step count detected by the Actigraph by 14%, compared with an overestimation of 8% for the One (waist-worn).
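For clarity, an overestimation percentage like the 14% and 8% quoted from Gomersall et al. is typically computed as (device count − reference count) / reference count. A quick sketch with made-up totals (the paper's actual step counts are not reproduced here):

```python
def overestimation_pct(device_steps, reference_steps):
    """Percent by which a device over- (or under-)counts steps
    relative to a reference device such as an Actigraph."""
    return 100 * (device_steps - reference_steps) / reference_steps

# Hypothetical totals against an Actigraph reference of 10,000 steps,
# chosen to reproduce a 14% and an 8% overestimation
print(overestimation_pct(11400, 10000))  # 14.0
print(overestimation_pct(10800, 10000))  # 8.0
```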
So, as usual, the data is only as good as the way you collect it, and any comparison should take these differences into account. Another conclusion is that consumer-level activity trackers are probably less reliable than research-grade devices (which the Actigraph is considered to be — I worked with an Actiwatch back in 2006). But if you take the raw data from Fitbit devices with a pinch of salt, you can still get a fair idea of whether you were active and whether you reduced your risk of cardiovascular disease.