Algo Optimization

I wanted to test the algorithm from a previous post on multiple samples, so I recorded three trips on my tracker, the Flex, and the M7 while walking around SF. The algorithm has two parameters that can be adjusted to fine-tune it: the cutoff amplitude above and below 1 g needed to trigger the peak sample counter, and the minimum number of data points above that threshold required to record a peak (two peaks = one step). I sampled this data at 100 Hz because, when I last used the algorithm, I was running out of resolution.
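The peak-counting logic described above could be sketched roughly as follows. This is a minimal Python illustration, not the actual MATLAB implementation; the function name and default values are my own assumptions.

```python
def count_steps(magnitudes, threshold=0.15, min_samples=5):
    """Count steps from a sequence of acceleration magnitudes (in g).

    A 'peak' is a run of at least `min_samples` consecutive samples whose
    deviation from 1 g exceeds `threshold`; two peaks make one step.
    """
    peaks = 0
    run = 0  # length of the current above-threshold run
    for a in magnitudes:
        if abs(a - 1.0) > threshold:
            run += 1
        else:
            # Run ended: record a peak only if it was long enough
            if run >= min_samples:
                peaks += 1
            run = 0
    if run >= min_samples:  # handle a run that reaches the end of the data
        peaks += 1
    return peaks // 2  # two peaks (one above, one below 1 g) = one step
```

With a higher `min_samples`, short noise spikes above the threshold stop counting as peaks, which is exactly the trade-off the parameter sweep below explores.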

I wrote a script, MultiComplexCounter.m, which took the vector sum for a given trip and calculated the absolute percent difference between its calculated steps and the Flex's steps for an array of different values of the two parameters mentioned above. I then averaged this data across the three trips to produce the table below. Combinations of parameters with less than 3% error are in red; those with less than 10% error are in yellow:
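The sweep amounts to evaluating a grid of (threshold, samples-per-peak) pairs per trip and averaging the error grids. A hedged Python sketch of that idea (all names are illustrative; the real script is MATLAB and is not shown here):

```python
def percent_error(counted, reference):
    """Absolute percent difference from the reference (Flex) step count."""
    return abs(counted - reference) / reference * 100.0

def sweep(trips, thresholds, sample_counts, counter):
    """Return {(threshold, n): mean percent error across trips}.

    `trips` is a list of (magnitudes, flex_steps) pairs, and `counter`
    is any step-counting function taking (magnitudes, threshold, n).
    """
    grid = {}
    for t in thresholds:
        for n in sample_counts:
            errs = [percent_error(counter(mags, t, n), flex)
                    for mags, flex in trips]
            grid[(t, n)] = sum(errs) / len(errs)
    return grid
```

Averaging the per-trip error grids first, rather than pooling the raw data, is what produces the single table below.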

Averaged Percent Error for 3 Separate Trials Across a Spectrum of Values for Two Parameters

I was pretty stoked that the average percent error was so low in several places. I would imagine the Flex's own error versus the actual number of steps walked is somewhere in this range, so I am not far off. It is also comforting to see a clear band showing the relationship between the number of samples in a peak and the peak-detection threshold: as the threshold for beginning to count samples widens on either side of 1 g, it makes sense that fewer samples would fall inside each peak.

The data gets even better, however, when you look at the individual trials, which boast parameter combinations yielding percent errors as low as 0.2%, 0.3%, and 0.9%! To put this in perspective, a 0.2% difference from the Flex is 1 step in 500. That is surely as accurate as anything Fitbit can do! You can see all the data tables in Trial Tables.

Analyzing the data further, if you plot all the parameter combinations that give errors of less than 1% (there is at least one for each trial) on an individualized basis, you can see that the black boxes representing them are fairly tightly clustered:

Black Boxes for Individual Values Less Than 1% on Top of Averaged Data

If you then average the parameter values the black boxes represent, rounding as needed since the number of samples must be an integer, you find that the ideal values moving forward are a threshold of +/-0.1583 g and an average of 7 samples per peak. When using this algorithm to analyze data moving forward, I will be sure to use these values!
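That final averaging-and-rounding step could be sketched like this in Python. The pairs in the test are placeholders, not the actual sub-1% combinations from the trials:

```python
def pick_parameters(best_pairs):
    """Average a list of (threshold, samples_per_peak) pairs.

    The threshold averages as a float; the samples-per-peak count is
    rounded to the nearest integer, since it must be a whole number.
    """
    thresholds = [t for t, _ in best_pairs]
    samples = [s for _, s in best_pairs]
    avg_threshold = sum(thresholds) / len(thresholds)
    avg_samples = round(sum(samples) / len(samples))
    return avg_threshold, avg_samples
```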
