# 05-30 RSSI correction results are poor

The real-time corrected RSSI measurement fluctuates more than the uncorrected RSSI: its amplitudes are higher. Honestly, I was half expecting this. The most likely reason is that gateway and client RSSI measurements are not paired with their real-time counterparts in the backend and are therefore corrected with the wrong offset.

# Problem

Corrected vs uncorrected RSSI Stream

The blue and red lines show the output of my RSSI correction algorithm, with the correction offset calculated from a rolling average of the gateway RSSI at varying window sizes. An IEEE research paper recommended a rolling-average window of 20 measurements. I tested averaging window sizes at intervals between 10 and 100, and 20 measurements indeed turned out to be the best.
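The rolling average over the most recent gateway measurements can be sketched as below; the `RollingAverage` class and its names are my own illustration, not the actual implementation:

```python
from collections import deque

class RollingAverage:
    """Fixed-size rolling average over the most recent gateway RSSI values."""

    def __init__(self, window_size=20):  # 20 measurements worked best in my tests
        self.window = deque(maxlen=window_size)  # old values drop out automatically

    def add(self, rssi):
        self.window.append(rssi)

    def average(self):
        return sum(self.window) / len(self.window)

avg = RollingAverage(window_size=20)
for rssi in [-60, -54, -64, -58, -62, -60]:
    avg.add(rssi)
print(avg.average())  # mean of the last 20 (or fewer) samples
```

Using `deque(maxlen=...)` keeps the window bounded without manual eviction logic.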

Note that the data sets behind the graph are not based on the exact same measurements, since for technical reasons I only saved the modified RSSI after correction. The reader may be misled into thinking that the lines represent correction attempts on the same data; this is not the case.

It is clearly visible that the uncorrected RSSI has a lower standard deviation than the graphs based on corrected RSSI. To recall, here is the simple calculation behind the correction:

First, the correction offset is derived as the difference between a gateway RSSI measurement and the rolling-average gateway RSSI. Then the corrected client-side RSSI is calculated by deducting this offset from an incoming client measurement.
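The two steps above amount to the following minimal sketch; function and variable names are my own shorthand for the quantities described:

```python
def correction_offset(gateway_rssi, gateway_rssi_avg):
    """Offset = how far the current gateway reading deviates from its rolling average."""
    return gateway_rssi - gateway_rssi_avg

def correct_client_rssi(client_rssi, offset):
    """Corrected client RSSI = raw client measurement minus the gateway-derived offset."""
    return client_rssi - offset

offset = correction_offset(-58, -60.0)   # gateway currently reads 2 dB above its average
print(correct_client_rssi(-54, offset))  # -56.0
```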

The first reason for the poor results is that gateway and client measurements are not correctly matched up with their real-time counterparts in the backend, due to network and scanning latency. Depending on the temporal offset, this means the wrong correction offset is applied to a client measurement, which can produce even larger amplitude valleys and peaks.

Secondly, the scanner does not take evenly sized samples from the servers. This randomness increases the chance that the wrong client-side measurements are linked up with those of the gateway. Consider this logging output from the gateway with four active servers:

```
storing measurement queue
RAM: 57% or 8.12631607055664Mb of 14.08203125 Mb
Uptime: 313 seconds
measurement: -60 from 24:6f:28:7a:42:a2
measurement: -54 from 24:6f:28:7a:57:02
measurement: -64 from 24:6f:28:7a:48:2e
measurement: -58 from 24:6f:28:7a:57:02
measurement: -62 from 24:6f:28:7a:48:2e
measurement: -60 from 24:6f:28:7a:42:a2
storing measurement queue
RAM: 58% or 8.21072769165039Mb of 14.08203125 Mb
Uptime: 313 seconds
measurement: -54 from 24:6f:28:7a:57:02
measurement: -56 from 24:6f:28:7a:48:2e
measurement: -73 from 24:6f:28:7a:42:a2
measurement: -64 from 24:6f:28:7a:41:3a
measurement: -58 from 24:6f:28:7a:57:02
measurement: -60 from 24:6f:28:7a:42:a2
measurement: -54 from 24:6f:28:7a:57:02
```
- There is only one measurement from the server with MAC address 41:3a
- In a single request there can be multiple measurements from the same server, with varying RSSI (57:02, 42:a2, 48:2e)
- There are bursts of activity: sometimes 41:3a contributes the most measurements to a request

With such uneven sampling, it is very unlikely that matching client and gateway measurements end up in the backend at the same time.

# Solutions

# 1 Rate limiting on the client side

To address the randomness of the measurement frequency in each request, the gateway could validate that a measurement from each server is passed to the backend. This would also increase request throughput. The problem is simply that requests would still arrive at equally random rates, so this would not help to sync up client and gateway measurements.

# 2 Time Stamp each measurement

Each measurement is tagged with the precise time from the Raspberry Pi's clock. The benefits are manifold: it is no longer necessary to spam the server with 20 requests a second in an attempt to keep the gateway and client measurements in the pool as close to real time as possible. Matching client and server measurements is no longer dependent on actual real-time transmission. Low latency is still achieved since high upload frequencies are supported, but there is more leeway for the backend. Finding the closest measurements in time is not computationally expensive at small set sizes (O(n)).
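The closest-in-time matching could look like the following linear scan; the data layout (timestamp, RSSI) and the function name are my assumptions, not the backend's actual schema:

```python
def match_nearest(client_ts, gateway_measurements):
    """Return the gateway measurement whose timestamp is closest to client_ts.

    gateway_measurements: list of (timestamp, rssi) tuples. A linear scan is
    O(n), which is cheap for the small pools the backend keeps in memory.
    """
    return min(gateway_measurements, key=lambda m: abs(m[0] - client_ts))

gateway = [(1.00, -60), (1.12, -58), (1.21, -62)]
print(match_nearest(1.10, gateway))  # (1.12, -58)
```

If the pool ever grew large, keeping it sorted by timestamp and using bisection would bring this down to O(log n), but that is not needed here.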

# 2.1 Timekeeping on Raspberry Pi

The Raspberry Pi doesn't have an RTC for timekeeping and thus depends on the Network Time Protocol (NTP) to set its clock over the network. Between synchronizations, the Pi's clock tends to drift by roughly 10 seconds per day (~116 ppm), or about 6.9 ms every minute. Since the measurement intervals are at roughly 100 ms, this is unacceptable at uptimes above a few minutes.
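The drift numbers above follow from simple unit conversion, starting from the assumed 10 s/day figure:

```python
# Clock drift sanity check: express 10 s/day as ppm and as per-minute error.
drift_per_day_s = 10.0
ppm = drift_per_day_s / 86_400 * 1e6             # fraction of a day, in parts per million
drift_per_min_ms = drift_per_day_s / (24 * 60) * 1000  # spread over 1440 minutes
print(round(ppm), round(drift_per_min_ms, 1))    # 116 6.9
```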

# 2.2 Recalibration of NTP

A cronjob that is executed every 5 minutes will recalibrate the system time. Network proximity to the time source reduces transmission delays: Switzerland has some public NTP pools, but an NTP server running in the LAN would yield an error range under 1 ms. Either would be more than enough for my application.
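The cronjob could be a one-liner like this sketch; `ntpdate` and the Swiss pool hostname are assumptions, and a LAN NTP server's address would be substituted for sub-millisecond offsets:

```shell
# /etc/crontab fragment: re-sync the Pi's clock every 5 minutes.
# ch.pool.ntp.org is the Swiss public NTP pool; swap in a LAN NTP
# server for tighter accuracy.
*/5 * * * * root ntpdate -u ch.pool.ntp.org
```

A daemon like `chrony` that disciplines the clock continuously would be an alternative to periodic stepping.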

# Further actions

I will timestamp measurements using accurate NTP time and match gateway measurements with the client measurements of least timing difference. Depending on how hard it is to set up an NTP server, I will run one on my own server; otherwise a Swiss pool should be fine.

Last Updated: 11/23/2020, 9:42:47 PM