MetaWear

I just saw this new project pop up on Kickstarter this morning, and am really hoping it’s not an April Fools’ joke… This device will do everything that my home-brew tracker can, but with increased form factor flexibility and Bluetooth LE. A product like this represents a major step forward in the commoditization of wearable hardware, and dramatically lowers the barrier to entry for new applications so that developers and entrepreneurs can focus on what matters… the data! I wish MetaWear had been around when I started my blog, and I look forward to playing around with the first generation of hardware. Thank you to the MbientLab team for working to make open source activity tracking hardware a reality!

MetaWear: Production Ready Wearables in 30 Min or Less

The iPhone 5s’ M7 Processor As A Predictor of FitBit Steps

I’ve had an iPhone 5s since late September 2013, and wanted to examine the relationship between the daily step data it records and the daily step data my FitBit Flex records. Of course, the data from the M7 should consistently underestimate my step count, since I’m always wearing my Flex but don’t always carry my phone with me (especially when I’m working out). This post is thus fundamentally different from previous posts, because the underlying variable is really about my personal phone-carrying habits (I’ve already done a brief experiment which incorporates a step-for-step comparison of the M7 and the Flex). Nevertheless, I wanted to see how accurate the M7 data would be in predicting my daily FitBit step count.

The iPhone 5s will only hold up to 7 days’ worth of data from the M7, so if you fail to launch a 3rd party app which downloads and stores the M7’s data on a weekly basis, you’ll start to lose steps. I was pretty good about checking in with the Argus app in Sep, Oct, Nov and Dec, but started to get a bit spotty in Jan and Feb. Since 9/24/13, there were 116 days for which I had daily step counts from the M7 via Argus, and the data are plotted below:

M7 vs FitBit Flex

As expected, there is a pretty noticeable correlation, but before quantifying it I wanted to remove the outliers. I figured it was easiest to just manually remove data points for which either the M7 or the Flex recorded zero steps. The former could have been caused by my leaving my phone at home (which happened occasionally), and the latter was likely due to the periods when I wasn’t wearing the FitBit due to losing one in the Bahamas or having my Force short out on me. Here is the plot of this data, along with a linear regression and R^2 coefficient:

M7 vs FitBit Flex Where Data Points with an X or Y Value = 0 Have Been Removed

Having removed the zeros, I also decided to remove other data points which were clearly outliers (shown in yellow above). Not only did these points not fit the general trend, but the dates on which they occurred were highly suspect: several of the points occurred when I was changing time zones / crossing international date lines, while others occurred on days before / after I lost a Fitbit. I figured these confounding factors warranted their removal, and their absence generated the plot below, which had a considerably improved R^2 coefficient:

M7 Data vs FitBit Data with All Outliers Removed
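For anyone who wants to reproduce this kind of fit, a minimal sketch in Matlab might look like the following, assuming the paired daily totals have already been loaded into vectors m7 and flex (hypothetical names, not my actual workspace):

% Sketch of the outlier-filtered regression, assuming paired daily step
% totals are loaded into vectors m7 and flex (hypothetical names).
keep = (m7 > 0) & (flex > 0);          % drop days where either device read zero
x = m7(keep);  y = flex(keep);

p = polyfit(x, y, 1);                  % linear fit: y = p(1)*x + p(2)
yhat = polyval(p, x);
R2 = 1 - sum((y - yhat).^2) / sum((y - mean(y)).^2);

scatter(x, y); hold on;
plot(x, yhat, 'r');
xlabel('M7 daily steps'); ylabel('Flex daily steps');
title(sprintf('Slope = %.2f, Intercept = %.0f, R^2 = %.2f', p(1), p(2), R2));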

There are several incredible features on this plot. The first is that the Y intercept of the linear regression (5257.7 steps) is very close to the number of steps that my typical cardio workout routine (30 min on the elliptical) generates. I haven’t gathered data to support this, but I know qualitatively that I can always expect to gain about 5,000 steps on my FitBit after a 30 min workout. The M7, by contrast, gains no steps during this time, as it is just sitting on the display stand streaming terrible Jason Statham movies off Netflix. While this is seemingly a good explanation for the Y intercept, it is weakened by the fact that 1) I certainly don’t do cardio every day and 2) there must be an inherent baseline of daily steps that my FitBit sees which my phone does not (for instance, getting dressed in the morning, showering, etc.). It would be interesting, particularly for Apple or an app maker utilizing M7 data, to see what the average American’s baseline steps without their phone are, not including working out. This number could then be added back into M7-based step measurements in order to give a more accurate picture of a user’s real step count.

The second notable feature is that the slope of the line is almost exactly 1, which means that aside from any baseline + elliptical generated steps, the FitBit and M7 are almost spot on in their step counts. If the slope were considerably greater than or less than 1, it would be evidence that there is a systematic bias between the two measurements.

Finally, the R^2 value is decently high, which means that M7 data could probably be used to give me a rough idea of my FitBit step count on average. While it’s unlikely to be good enough to make determinations on a daily basis, if I can keep my phone on me for most of the day, I could probably bucket my weeks into high, med or low movement without having to keep track of going to the gym.

Algo Optimization

I wanted to test the algorithm from a previous post on multiple samples, so I recorded three trips on my tracker, Flex and M7 while walking around SF. The algorithm has two parameters that can be adjusted in order to fine-tune it: the cutoff point for the amplitude above and below 1G needed to trigger the peak sample counter, and the minimum number of data points above that threshold needed to record a peak (two peaks = one step). I sampled this data at 100Hz, because when I last used the algorithm I was running out of resolution.

I wrote a script MultiComplexCounter.m which took the vector sum for a given trip, and calculated the absolute percent difference between its calculated steps and the Flex’s steps for an array of different values for the two parameters mentioned above. I then took this data from the three trips and averaged it to produce the table below. Combinations of parameters which have less than a 3% error are in red; those with less than a 10% error are in yellow:

Averaged Percent Error For 3 Separate Trials Across a Spectrum of Values for Two Parameters
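For reference, the core of the sweep behind this table looks something like the sketch below. countsteps is a hypothetical stand-in for the peak/sample counting routine, and the parameter ranges are illustrative rather than the exact values I used:

% Sketch of the parameter sweep in MultiComplexCounter.m. countsteps()
% is a hypothetical stand-in for the peak/sample counter, and vsum /
% flexsteps are assumed to hold one trip's vector sum and the Flex's
% step count for that trip.
thresholds = 0.05:0.01:0.30;           % +/- band around 1G (illustrative range)
minsamples = 2:12;                     % samples required inside a peak
err = zeros(numel(thresholds), numel(minsamples));

for i = 1:numel(thresholds)
    for j = 1:numel(minsamples)
        steps = countsteps(vsum, thresholds(i), minsamples(j));
        err(i,j) = abs(steps - flexsteps) / flexsteps * 100;  % percent error vs the Flex
    end
end
% Averaging err across the three trips produces the table above.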

I was pretty stoked that the average percent error was so low in several places. I would imagine that the Flex’s error versus the actual number of steps walked is somewhere in this range, so I am not that far off. I think it is also comforting to see that there is a clear band which shows a relationship between the number of samples in the peak and the threshold of peak detection. As the threshold for beginning to count samples expands on either side of 1G, it makes sense that there would be fewer samples in that peak to be counted.

The data gets even better, however, when you look at the individual trials, which boast parameter combinations yielding percent errors as low as 0.2%, 0.3% and 0.9%! To put this in perspective, a 0.2% difference from the Flex is 1 step in 500. That is surely just as accurate as anything FitBit can do! You can see all the data tables here in Trial Tables.

Analyzing the data further, if you plot all the parameter combinations which give errors of less than 1% (there is at least one combination for each trial) on an individualized basis, you can see that the black boxes representing them are all fairly tightly clustered:

Black Boxes for Individual Values less than 1% On Top of Averaged Data

If you then take the average values for the parameters the black boxes represent, and round as needed since the number of samples must be an integer, you find that the ideal values are a +/- threshold of 0.1583 and an average of 7 samples per peak. When using this algorithm to analyze data moving forward, I will be sure to use these values!

A Six-Pack in Six Positions

Over the holidays, I started thinking that an activity tracker algorithm should be position agnostic, i.e. it shouldn’t make a difference whether you have your arm by your side, over your head, etc. It also shouldn’t matter what the loading on that arm is, and the algorithm should produce the same result whether you’re carrying a dumbbell back to the rack at the gym or simply walking down the street. It is the vibration caused by taking a step, which is transmitted through the body to wherever the tracker is being worn, that is really the key signal. I decided to investigate this.

I picked a nice level spot of sidewalk in SF where I walked 50 steps six times in a row, taking care to make the exact same strides each time by stepping on the regularly spaced expansion joints in the cement. Each time I made the trip, I held my left arm outstretched in one of the following positions:

  • Down at my side, perpendicular to the ground
  • Above my head, perpendicular to the ground
  • In front, parallel to the ground and perpendicular to the plane of my chest
  • Behind, parallel to the ground and perpendicular to the plane of my chest (or as close as I could get it)
  • Left of me, parallel to the ground and the plane of my chest
  • Right, parallel to the ground and the plane of my chest

In case that language isn’t clear, here’s an amazing diagram which shows the single-letter references I’ll use throughout this post to refer to each position:

DaVinci Caliber Sketch of the Six Different Positions

I also decided I would have a better chance of holding my arm steady if it was weighted, so I chose to hold a 5-pack of Coors Light in my left hand, grabbing onto the empty loop where the missing 6th beer should have been. This put a mass of around 1.8 kg at the tip of my arm, but what I failed to anticipate until I was halfway through the experiment was that the 5-pack oscillated back and forth on its plastic collar, potentially introducing undesirable signals. As I have often thought on Saturday and Sunday mornings, Coors Light turned out to be a bad decision. In future experiments, I will choose something less dynamic like a barbell or a book.

As I was pacing off my 50 steps with my arm in each position, I also recorded data from my Flex (located on the same wrist as my tracker) and my iPhone 5s M7 processor (in my left pants pocket). Here are the results:

Device Step Counts

Of course, the M7 was unaffected by the arm position because it was in my pocket, but both of these devices turned out to be pretty darn accurate in this setting. FULL DISCLOSURE: I lost my old Flex diving on a reef off the coast of Great Abaco Island, so the measurements here are coming from a new device versus previous posts. Given the volume at which FitBit is manufacturing, however, I will assume that any variation between the two devices is negligible.

The Raw Data for the 6 Position Trial is plotted below, along with the vector sum of the three axes. I’ve also labeled when the trials are occurring, to help them stand out from some of the other noise that’s in here (like me walking back to the other end of the sidewalk in order to repeat the exact same 50 steps):

Plot of All Six 50 Step Trials

I feel pretty good about all of these, except for trial B, when the 5-pack was behind my back… This was a really awkward position to maintain, and combined with the oscillating beers it makes for a pretty junk signal. I decided to throw it out, and plotted the vector sums for the other 5 trials below:

Vector Sums for the 5 Good Trials

The next thing I wanted to do was revisit the methodology from an earlier post, and figure out what the average step looked like for the vector sum of each of the 5 trials with the tracker in different orientations. Here are the results:

The “Average Step” From each of the 5 Trials

From this plot, we can learn that no matter what orientation the arm is in, a step looks pretty much the same when looking at the vector sum of all three axes. I then added the average of all 5 trials to the plot, and threw in some annotations which I believe will be useful in developing a new step counting algorithm:

Annotated Average Signals

As the plot shows, the features of all these signals that are most similar are the period of the step and the slope of the signal between 0.9Gs < x < 1.1Gs (see yellow circles). I therefore believe it will be interesting to build an algorithm with the following criteria for what constitutes a step:

  1. Go from 0.9Gs to 1.1Gs in 1, 2 or 3 samples without any decrease in force (i.e. S1 < S2 < S3)
  2. Remain above 1.1Gs for at least 5 samples
  3. Go from 1.1Gs to 0.9Gs in 1, 2 or 3 samples without any increase in force (i.e. S1>S2>S3)
  4. Remain below 0.9Gs for at least 5 samples

If all of the above criteria are met, a step would then be recorded. Obviously this is a continuous cycle, where you could start counting at either position 1 or 3. I wrote a script ComplexCounter.m which approximated the above, although I wound up not coding in anything related to the slope because I found I often had no data points in that range due to a low sample rate. I adjusted the algorithm so that in order to count as a step, the signal had to remain above 1.1Gs or below 0.9Gs for two consecutive samples, and generated these results:

ComplexCounter.m Results
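In case it’s useful, here is a stripped-down sketch of the adjusted logic (not the full ComplexCounter.m, which also handles the plotting): the signal has to hold above 1.1Gs for two consecutive samples and then below 0.9Gs for two consecutive samples before a step is counted.

% Minimal sketch of the adjusted step-counting logic, assuming vsum
% holds the vector sum of the three axes. Not the full ComplexCounter.m.
HI = 1.1;  LO = 0.9;  RUN = 2;         % thresholds and required run length
steps = 0;  run = 0;
state = 'seekHigh';
for k = 1:numel(vsum)
    switch state
        case 'seekHigh'
            if vsum(k) > HI, run = run + 1; else run = 0; end
            if run >= RUN, state = 'seekLow'; run = 0; end
        case 'seekLow'
            if vsum(k) < LO, run = run + 1; else run = 0; end
            if run >= RUN                  % full high/low cycle completed
                steps = steps + 1;
                state = 'seekHigh'; run = 0;
            end
    end
end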

The algorithm performed pretty well, even when back-tested against a much noisier signal from a previous post. Overall, I came away thinking that the right way to approach this problem is through the vector sum of the forces, since the signals look pretty much the same no matter how your wrist / arm are oriented. I also think there is something to the approach I’ve taken of measuring a minimum number of samples above a threshold, but I don’t believe my algorithm is sufficiently robust at present. I need to gather some more real world data to back-test it with and potentially make some further tweaks.

Fitbit Force Review

I was excited to receive my Fitbit Force in the mail last week, because I’m constantly looking at my Flex to see what time it is (there is, of course, no clock on the Flex, so this habitual action frequently leaves me feeling quite stupid). The Force is definitely a bit heavier and almost twice as wide, but the quality of the display was excellent and the ability to interact with the device with a button rather than crude tapping gestures was great. Unfortunately, the Force is not even remotely waterproof! I wore it accidentally in the shower for 20 seconds before remembering to take it off, and the device shorted out. For about half a day, the Force flickered on and off while intermittently displaying the Fitbit logo, and condensed water was visible inside the display. Although it ultimately started working again, the problem reemerged after 5 minutes of washing dishes so I stopped wearing the device.

Condensed Water Inside the Force

Given the durability of the Flex, I was shocked that a device which feels just as rugged can’t even handle a splash of water. Having a display to show time and step count is a great feature, but if you can’t even wash dishes with it, I think there’s a brand-damaging design flaw in this product. The whole point of the Flex is that it’s thoughtless, and simply disappears into your life for days at a time until it needs to be charged. If getting my activity tracker wet is instead tantamount to spilling water on a mogwai, that completely changes the use case and makes it stick out like a sore thumb in my daily routine. I’ll be returning Fitbit’s latest creation, and leaving “the force” to Star Wars…

Sample Signal Selection

While reviewing some previous posts, it occurred to me that my methodology for identifying cross correlation sample signals – i.e., choosing a “representative” signal at an arbitrary point in time for an arbitrary duration – was not very scientific. Using any discrete sample walking signal (rather than the average walking signal) might result in lower than expected correlations. So, I decided to build a script which would automatically extract an average sample signal from a specified window of data, in the hope that it could be used in the future to generate better correlations.

I used the data from a previous post, and found a window of data that I consider to be representative of general walking, from time 160 < t < 180:

Plotted Data from All 3 Axes from Time 160 < t < 180

There are a lot of interesting features in the plot above which could be used to identify a step – for example, each Z axis peak seems to be accompanied by a Y axis peak, and every other local minimum in the X axis seems to correspond to a peak in both the Y and Z axes. I will ignore these for now, and focus on the X axis, where it is clear that the period of the signal is two peaks long, i.e. there is a larger peak, a smaller peak, and then the signal repeats. This makes sense in the real world, in which the force the logger experiences in the X axis is larger from the step of one foot (presumably, the step which occurs on the same side of the body on which the logger is being worn) than from the other. I created a script, signalextractor.m, which takes the data from an input window, runs a peak detection, and then extracts the data in two-peak long periods. Plotted on top of each other, the data for each two-step period looks like this:

Data from Multiple Steps in the X Axis
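The heart of signalextractor.m is just a peak detection followed by slicing the signal between every other peak. A rough sketch, with illustrative threshold and spacing values rather than the exact ones from the script:

% Sketch of the extraction step, assuming xwin holds the X-axis data
% from the input window. Threshold and spacing values are illustrative.
[pks, locs] = findpeaks(xwin, 'MINPEAKHEIGHT', mean(xwin), 'MINPEAKDISTANCE', 10);

periods = {};
for k = 1:2:numel(locs)-2
    periods{end+1} = xwin(locs(k):locs(k+2));  % big peak -> small peak -> next big peak
end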

I was very pleased with the consistency between the various periods shown in the plot, although the discontinuity between the beginning and ending values is disconcerting. I will assume, however, that this is simply due to the low sample rate, and that if I had sampled at the logger’s maximum rate of 200Hz, the first and last values for each plotted period would be nearly the same. I then used this extracted data to produce an average sample signal, which is plotted below on top of the underlying sample data points:

Average Signal on top of Multiple X-Axis Two-step Cycles
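Since the extracted periods aren’t all exactly the same length, the averaging step has to put them on a common time base first. Here’s one way to sketch it (the interpolation approach is my assumption about a reasonable implementation, not necessarily what the script does internally):

% Sketch of the averaging step: resample each extracted period onto a
% common length, then average point-by-point. The interpolation here is
% an assumed implementation detail.
N = 50;                                % common number of samples per period
stack = zeros(numel(periods), N);
for k = 1:numel(periods)
    p = periods{k};
    stack(k,:) = interp1(1:numel(p), p, linspace(1, numel(p), N));
end
avgsig = mean(stack, 1);               % the average two-step signal
sdsig  = std(stack, 0, 1);             % per-point standard deviation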

I now have a useful script which I can use in the future to generate an average sample signal, as well as a means for investigating the standard deviation of accelerometer readings at each point in the step cycle. It might be useful in a future step counting algorithm, for example, to reject any signal as a step if it falls outside of a given range at a given point in time.

DIY Medical Implant

Many thanks to TKM for letting me know about this DIY implant a bio-hacker recently embedded into his arm. As he aptly pointed out, I will never be this hardcore, although I’d like to think I would have done a better job with those sutures…

Coors Light & Fourier Transforms

I wanted to use the same methodology from a previous post (Coffee Run) to analyze a new set of data, so I took a different route around the block to buy a six pack of Coors Light at the corner store. I also wanted to test out my iPhone 5s’ M7 motion co-processor, so I kept my phone in my left pants pocket and used the Argus app to note the number of steps (any app should give the same results, since it’s just calling an API from the M7). I used the correlationwindower.m script from an earlier post on the raw data (beerrundata), and generated the following result using the first block of steps just before t=100 as the sample data:

Plot of the Correlation of a Sample Signal (91<t<97) With the Dataset (blue) on top of the Raw Data (green)

Immediately, you can tell something is wrong because from 260 < t < 310 on the X axis, when I am clearly standing in line not moving, the correlation is really high. This led me to think about micro-oscillations that might be showing up in the noise during periods of non-motion, so I wanted to look at the data in frequency rather than time. I built a simple script to perform a discrete Fourier transform on the data using the fft command in Matlab (quickdft.m), and produced the following plot:

Fourier Transform of the Data from Each Axis
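quickdft.m doesn’t do much more than the sketch below: take the FFT of each axis and plot the single-sided spectrum. The 25 Hz sample rate is my assumption, carried over from the Coffee Run data:

% Sketch along the lines of quickdft.m for a single axis, assuming the
% data vector x was sampled at fs = 25 Hz (an assumption).
fs = 25;
L = numel(x);
X = fft(x - mean(x));                  % remove DC so the 0 Hz bin doesn't dominate
f = (0:L-1) * fs / L;                  % frequency of each bin
half = 1:floor(L/2);                   % single-sided spectrum
plot(f(half), abs(X(half)) / L);
xlabel('Frequency (Hz)'); ylabel('Amplitude');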

I was pretty excited by this data. First, the data from the three axes are all very consistent with each other, which gave me a lot of confidence that I can ultimately build an algorithm that can extract motion from any axis (versus just the X axis, currently). Secondly, I thought it was very interesting that the peaks for the Y and Z axes were greatest around 1 Hz, while the X axis peak was greatest at around 2 Hz. I timed my gait, and found that each step takes about a half second, so what must be going on is that the X axis is oscillating with each step, while the Y and Z axes are oscillating with each stepping cycle (i.e. both the left and right foot). This makes sense, given that my hand was positioned as follows:

Logger Positioning During Normal Walking Motion

So, you get an acceleration in the X axis with every step as you bob up and down while walking, you probably get some lurching back and forth in the Y axis which occurs with each step pair, and in the Z axis you sway side to side a bit with each step pair since no two legs are exactly the same length. It is also interesting to note the 2Hz frequency component of the Y axis signal, which is probably capturing some of that same individual step motion as the X axis, because some part of the raising and lowering with each step is actually occurring in the Y axis. At the same time, the absence of any 2Hz signal in the Z axis is very comforting, as it reinforces the idea that the Z axis is experiencing a side to side sway.

I thought that perhaps creating a band pass filter might solve the problem of the phantom correlation data during periods of no motion; perhaps there was a high frequency noise signal with lower frequency elements that were seeping into the data. So, I created the following filter in Matlab’s filter design tool, which was launched with the fdatool command.

A Simple Bandpass Filter

I used the bandpass filter on the X axis raw data, and produced the plot below:

X Axis Data After Bandpass Filtering

Running a simple peak detection on this data ([x_pks, x_locs] = findpeaks(xf,'MINPEAKHEIGHT',.1,'MINPEAKDISTANCE',10);) counted 549 steps, which is pretty close to the 539 recorded by my Fitbit Flex and the 536 recorded by the iPhone 5s. I then changed the parameters of the filter to focus even more tightly on the 2Hz frequency with the following settings (fstop1 = 1.5, fpass1 = 1.8, fpass2 = 2.2, fstop2 = 2.5), and counted only 507 steps, which led me to believe that the previous answer was mostly just lucky, although it is possible that tightening the filter might have excluded steps which occurred at a slower pace. Overall, I was a little disappointed with this experiment in frequency, but I learned a lot which can hopefully be applied to future experiments.
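For anyone without fdatool handy, a filter with similar intent can be sketched directly with butter and filtfilt; the cutoff frequencies here are assumptions pinned to the ~2 Hz step frequency rather than my exact fdatool settings:

% Rough stand-in for the fdatool bandpass filter. Cutoffs are assumed,
% chosen around the ~2 Hz step frequency; x is the raw X-axis data.
fs = 25;
[b, a] = butter(4, [1.5 2.5] / (fs/2), 'bandpass');
xf = filtfilt(b, a, x);                % zero-phase filtering
[x_pks, x_locs] = findpeaks(xf, 'MINPEAKHEIGHT', 0.1, 'MINPEAKDISTANCE', 10);
steps = numel(x_pks);                  % step count for this filtering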

Interesting post from FitBit’s Blog

Found this really interesting blog post (pdf here: Fitbit 11-23-10) from Fitbit’s chief data scientist explaining how steps do not necessarily equate to calories. Will have to check out the two authors mentioned in the post as well… Of course, I need to figure out how to reliably detect steps first! It had not occurred to me to look for academic research on the matter, however, so perhaps I should look for some literature on the subject before blindly performing my next analysis.

Coffee Run

I decided it was finally time to do some real signal processing, so I thought I would start off by trying to count the number of steps in one of my daily routines: walking to Real Foods in the morning to get a coffee and a bagel. I walked naturally across my apartment to establish a baseline signal, with both my Flex and my logger on my right arm. The Flex recorded 21 steps in this period. I then walked out of the apartment, down about 10 steps, down a gently sloping street for about 1.5 blocks, up a flight of about 10 steps into Real Foods, got a cup of coffee and ordered my bagel. I stood around for a while waiting, and then reversed the process after checkout. After the initial calibration steps, I recorded 517 additional steps on my Flex, which equated to 0.24 miles, as I undertook the following trip:


I then plotted the data (Real Foods Data) my logger had recorded at a sample rate of 25 Hz, as shown below:

Plot of Logger Data From a Quick Trip to Real Foods

The first yellow bar shows the calibration steps, and the second and third bars show the trips to and from Real Foods, respectively. My first thought was to run a script from a previous post on the calibration step data to see how it performed. I did this from time t = 43 to t = 60, with the minimum distance between samples set at d = 10, and produced the following plot:

myfirstpeakdetector.m Running From Time t = 43 to t = 60

The simple algorithm (which is incredibly sensitive to how the input data range is defined) calculated the following number of peaks (i.e. step count):

Algorithm Output
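The windowed run of myfirstpeakdetector.m boils down to something like this sketch, assuming t, x, y and z hold the logger’s time and axis data:

% Sketch of the windowed peak count, assuming t, x, y, z are loaded
% from the logger data. d is the minimum distance between peaks.
d = 10;
win = (t >= 43) & (t <= 60);           % calibration-step window
[~, xl] = findpeaks(x(win), 'MINPEAKDISTANCE', d);
[~, yl] = findpeaks(y(win), 'MINPEAKDISTANCE', d);
[~, zl] = findpeaks(z(win), 'MINPEAKDISTANCE', d);
fprintf('X: %d  Y: %d  Z: %d peaks\n', numel(xl), numel(yl), numel(zl));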

While the Y and Z axes are close to the expected value of 21, the X axis seems to be an outlier, which drags the average value higher. I won’t get into a discussion about potential causes for this, since the real goal here was simply to find a data range within the calibration steps that was suitable to use as a sample data signal. I chose to use t = 45 to t = 55, which seems like it should contain a very crisp, uniform signal for all three axes. I wanted to run a cross correlation of the sample data range versus all the data for each axis, with the hope that I could use periods of high correlation to window a peak detection algorithm, thereby only counting peaks which matched the pattern of a series of steps. I created a script correlationplotter.m which uses the xcorr function to sweep data from each axis from t = 45 to t = 55 across all the data from that axis, and then plot the correlation (blue) on top of the original data (green). It produced the following plot:

Correlation Between All Data and a Data Sample from t=45 to t=55 Plotted On Top of the Original Data for Each Axis
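The correlation pass itself is only a few lines; here is a sketch for one axis, where the scaling to the original signal’s magnitude is my assumption about how the plots above were produced:

% Sketch of the cross-correlation pass in correlationplotter.m for one
% axis. x is the full signal, t the matching time vector.
sample = x(t >= 45 & t <= 55);         % the sample data range
c = xcorr(x - mean(x), sample - mean(sample));
c = c(numel(x):end);                   % keep lags >= 0 so c lines up with t
c = c * (max(x) / max(c));             % scale to the signal's magnitude (assumed)
plot(t, x, 'g'); hold on;
plot(t, c, 'b');                       % correlation on top of the raw data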

The X axis appears to have good data. The correlation output is high (its magnitude has been scaled to match the original signal magnitude) during the sample data period, as well as during the two heavy periods of walking to and from the store. The relationship between the two data sets from t=550 to t=650 is truly textbook, and gave me great confidence in the methodology being used. Unfortunately, this methodology does not seem to hold up as well for the Y and Z axes, in which the relationship between the correlation and the original data seems to be the opposite of what it should be; it is lower during periods of heavy walking, and higher during periods of random motion or little movement. This could possibly be explained by the fact that with my arm in the normal walking position, i.e. hanging at my side, the X axis is most likely to experience a force that is directly related to the number of steps, as my arm is moved up and down when the arch of my foot expands and contracts as part of my stride. I will move on with an analysis solely of the X axis, but if anyone can spot an error in my correlation methodology for the other two axes, I would greatly appreciate a comment on what I’m doing wrong.

I created a script correlationwindower.m, which performs a cross correlation of the data from an axis with a sample data set selected by a user definable range. The script then creates a windowing vector by converting every data point in the correlation vector to either a 0 or a 1 based on whether its value is greater or less than a user definable threshold, expressed as a percentage of the maximum value in the correlation data set (after scaling). This windowing vector is then multiplied by the original data set to remove all values which do not have a decent correlation with the sample data set. It then performs a peak detection on this new windowed data, where peaks must be greater than the mean of all non-zero values, and have a minimum distance between them of a user definable number of samples. The input window for the script looks like this:

Input Window for correlationwindower.m
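Under the hood, the windowing and detection steps look roughly like this (a sketch of the logic, not the script verbatim):

% Sketch of the core of correlationwindower.m: threshold the scaled
% correlation c into a 0/1 mask, gate the raw data x with it, then run
% a peak detection. pct and mindist are the two user-definable inputs.
pct = 0.6;  mindist = 10;
mask = c >= pct * max(c);              % 1 where correlation clears the threshold
xw = x .* mask;                        % windowed signal
thr = mean(xw(xw ~= 0));               % peaks must beat the mean of non-zero values
[pks, locs] = findpeaks(xw, 'MINPEAKHEIGHT', thr, 'MINPEAKDISTANCE', mindist);
fprintf('Detected %d peaks\n', numel(pks));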

The windowed signal looks like this:

X Axis Signal Windowed by Its Correlation With a Sample Dataset

And the script outputs the detected peaks plotted on top of the original, “un-windowed” signal:

Original Signal With Detected Peaks

As you can see, for the input parameters shown in the previous screenshot of the input window, the script detected 583 peaks, which is about 13% more than we expected; not too bad considering all the non-cyclical motion that was going on. I thought it was also worth performing a brief sensitivity analysis around both the correlation windowing threshold value and the minimum distance between peaks. The table below shows the absolute percentage difference between the number of steps the script calculated, given different values for the aforementioned parameters, and the 517 steps the Flex recorded. All parameter pairs with a 13% or less absolute difference have been highlighted, to show that inputting those parameters would have been at least as accurate as the data shown above:

Sensitivity Analysis Around Minimum Peak Distance and Windowing Threshold

The sensitivity analysis shows that there is a broad tolerance centered roughly around the 10-sample, 60% cutoff threshold for the windowing. This is probably a good thing, although it would be nice to expand the map further. Overall, I was happy with the first real signal processing that I’ve done so far, and I look forward to trying the script methodology out on other data sets in the future.
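Generating the sensitivity table is just a matter of wrapping the steps above in a loop; a sketch, where windowedcount is a hypothetical wrapper around the correlationwindower.m logic and the parameter ranges are illustrative:

% Sketch of the sensitivity sweep. windowedcount() is a hypothetical
% wrapper around the windowing/peak-detection logic; ranges are illustrative.
pcts  = 0.4:0.05:0.8;                  % windowing thresholds
dists = 6:2:16;                        % minimum peak distances (samples)
errtbl = zeros(numel(pcts), numel(dists));
for i = 1:numel(pcts)
    for j = 1:numel(dists)
        n = windowedcount(x, c, pcts(i), dists(j));
        errtbl(i,j) = abs(n - 517) / 517 * 100;   % vs the Flex's 517 steps
    end
end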

UPDATE: I just realized that my algorithm is actually more accurate than I had previously thought. The 583 detected peaks include ALL the peaks in the sample, when what should be compared to the Fitbit’s 517 measured steps are just the peaks which occurred after the calibration set was recorded. Removing the peaks that occurred before t = 55s brings the total detected peaks down by 30 to 553, which is only a 7% difference from what the Fitbit calculated. The sensitivity table above is now invalid because it includes these early peaks, but I’m not going to bother to recalculate it because it’s scotch thirty.
