# 06-01 Timestamping results
I implemented yesterday's proposed timestamping algorithm, which matches each client measurement with the gateway measurement of least time difference. Incoming measurements are handled by an event listener, so processing is decoupled from HTTP traffic. An NTP server running in my LAN guarantees sub-millisecond time accuracy (0.002 ms jitter), and the Raspberry Pis recalibrate their network time on boot.
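The matching step can be sketched roughly like this (a minimal illustration, not the actual implementation; the function and field names are assumptions):

```javascript
// Sketch: for a given client measurement, find the gateway measurement
// with the smallest absolute timestamp difference. Field names
// (timestamp, rssi) are illustrative assumptions.
function nearestGatewayMeasurement(client, gatewayMeasurements) {
  let best = null;
  let bestDelta = Infinity;
  for (const g of gatewayMeasurements) {
    const delta = Math.abs(g.timestamp - client.timestamp);
    if (delta < bestDelta) {
      bestDelta = delta;
      best = g;
    }
  }
  return best; // gateway measurement closest in time, or null if none exist
}
```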
I ran a test with client and server at fixed distances, so the raw lines should track each other and the corrected line should be as flat as possible. Here are the results:
A further reason why correcting without timestamps is wrong becomes clear once one notices that measurements are not received by the server in chronological order; this shows up as the lines jumping backwards on the x-axis. It is therefore wrong to correct against the latest measurement in the cache, since it may be older or newer than the matching gateway measurement.
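The out-of-order arrival can be handled by sorting the buffered measurements by timestamp before matching, rather than pairing against whatever arrived last. A small sketch (the buffer layout and field names are assumptions):

```javascript
// Sketch: measurements arrive out of order, so "latest in cache" can pair
// a new client reading with a stale gateway reading. Sorting the buffer
// by timestamp restores chronological order before matching.
const buffer = [
  { source: 'client',  timestamp: 103, rssi: -62 },
  { source: 'gateway', timestamp: 101, rssi: -55 },
  { source: 'client',  timestamp: 100, rssi: -60 }, // arrived last, but oldest
];
buffer.sort((a, b) => a.timestamp - b.timestamp);
```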
The correction goes wrong when the measurement stream is steady. When client and gateway measurements are more erratic and move in parallel (as towards the end of the example above), the green line actually looks better than the uncorrected versions.
# Problem
- The red and blue lines do not always dip and rise at the same time. Consider the first half of the test: blue is almost perfectly flat while red is very noisy. Since the devices are subject to the same environment, their drops and rises should coincide. Because the correction offset is derived from the flat blue line, which sits constantly above its rough average, the algorithm deducts the same offset from every client (red) measurement. As a result, the green and red lines run parallel to each other.
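The parallel-lines effect can be made concrete with a worked sketch of the correction as I understand it (the formula and names here are assumptions, not the actual code):

```javascript
// Sketch of the correction: the offset is the gateway reading's deviation
// from the gateway's long-run average, and that offset is subtracted from
// the matched client reading.
function correct(clientRssi, gatewayRssi, gatewayAvg) {
  const offset = gatewayRssi - gatewayAvg; // e.g. a constant +2 dB when flat
  return clientRssi - offset;
}
// If the gateway line is flat and sits a constant amount above its average,
// every client reading gets shifted by that same constant: the corrected
// (green) line is just the raw client (red) line translated downwards.
```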
# Solutions
# Test some more configurations
I need to try out a few more configurations and tweak parameters. Collecting more measurements should improve results, since the average gateway RSSI becomes more accurate. It would also be interesting to observe the effect of placing the gateway and client farther from the server to induce higher variance; my assumption is that the gateway is then more effective at smoothing the curve. At closer ranges it might be better to just store the uncorrected RSSI.
# Increase the uploading interval
Since everything is timestamped, the upload frequency does not matter for the correction algorithm. The benefit of increasing the time between uploads is that fewer CPU cycles (JS is single-threaded) are wasted on uploading that could instead be spent detecting measurements. Since the backend processes incoming measurements through an event listener, I can wait an arbitrarily chosen duration to let all measurements arrive and reduce time deltas. This does increase system latency, but frankly I am not too worried about that at the moment (my target refresh rate is pegged at 1 Hz).
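Batched uploading can be sketched as a queue that accumulates measurements and is flushed on a timer, so the hot measurement path never touches the network (names here are assumptions, not the actual code):

```javascript
// Sketch: accumulate measurements in memory and flush them as one batch
// on an interval, trading latency for fewer upload cycles.
const queue = [];

function onMeasurement(m) {
  queue.push(m); // cheap: no network work on the measurement path
}

function flush(upload) {
  const batch = queue.splice(0, queue.length); // drain the queue atomically
  if (batch.length > 0) upload(batch);
}

// e.g. setInterval(() => flush(sendToServer), 5000); // hypothetical 5 s interval
```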