# 08-24 Evaluating and improving trilateration

After fixing the nginx bad gateway issue by overriding the Docker networking with direct host-level port bindings, I was able to test the trilateration system a bit. As expected, I wasn't exactly blown away by the initial precision of the results. The coordinates were plausible and reacted to real-life changes in the position of the device: orthogonal movements along the X or Y axis were registered as such in software. Also as expected, the coordinates were subject to quite large fluctuations even when the device was stationary. After relocating the client, the software seemed to need a few seconds of "recalibration" before it settled on the new position.

Another thing I tested was the optimization method. SciPy supplies several functions for minimizing the residuals of a loss function. The results were identical for all of them, so I settled on fmin_powell since it has the nicest syntax.
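A minimal sketch of how such a residual minimization with `scipy.optimize.fmin_powell` might look; the server positions, measured distances, and starting point below are made-up illustrative values, not numbers from the actual setup:

```python
import numpy as np
from scipy.optimize import fmin_powell

# Hypothetical anchor positions (metres) and measured distances for 4 servers.
servers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
measured = np.array([2.9, 3.4, 3.1, 4.6])

def loss(pos):
    """Sum of squared residuals between estimated and measured distances."""
    d = np.linalg.norm(servers - pos, axis=1)
    return np.sum((d - measured) ** 2)

# Powell's method needs no gradient, only the loss function and a start point.
estimate = fmin_powell(loss, x0=np.array([2.5, 2.5]), disp=False)
print(estimate)
```

Other minimizers such as `scipy.optimize.minimize(..., method="Nelder-Mead")` take the same loss function, which makes it easy to compare methods.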

# Problem

The accuracy of the coordinates was quite underwhelming. In the beginning the results seemed correct within a fairly reasonable margin of error, but after letting the system run for a while it became clear that the results were diverging from the actual location; accuracy took a nosedive. The general question that remains is what can be done to improve the results.

# Possible solutions

## Use specific path loss coefficients for each server

The distances used in today's test run were based on a single set of path loss model (PLM) coefficients. The problems with this are manifold, starting with the fact that the testing environment has probably changed (different WiFi traffic, temperature, etc.). Furthermore, I only did a single calibration run for a single server and used the resulting values for all 4 servers. As suggested by a paper from ETH, each server has its own specific set of path loss coefficients. In previous experiments I collected data for all servers and subsequently used the averaged coefficient set for all of them. The gains from generating separate coefficient sets would most likely be substantial: today I manually tweaked the coefficients in phpMyAdmin, and the effect on the accuracy was quite obvious.
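Assuming the common log-distance path loss model (RSSI = A − 10·n·log₁₀(d), where A is the RSSI at 1 m and n the path loss exponent), a per-server coefficient lookup could be sketched like this; the server names and coefficient values are hypothetical placeholders for the calibrated values in the database:

```python
# Hypothetical per-server coefficients; real values come from calibration runs.
coefficients = {
    "server1": {"A": -40.0, "n": 2.2},
    "server2": {"A": -43.0, "n": 2.6},
}

def rssi_to_distance(server, rssi):
    """Invert the log-distance model: d = 10 ** ((A - rssi) / (10 * n))."""
    c = coefficients[server]
    return 10 ** ((c["A"] - rssi) / (10 * c["n"]))

# With server1's coefficients, -62 dBm maps to 10 m.
print(rssi_to_distance("server1", -62.0))
```

The same RSSI reading then yields a different distance per server, which is exactly the point of calibrating each one separately.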

## Client Doubling

Using two Raspberry Pis as a single unit doubles the amount of data that can be picked up. Placing them a few centimeters apart (at least half a wavelength at 2.4 GHz, i.e. about 6 cm) would also add spatial diversity to the results. I expect the reactivity of the system to be better for movements along the axis of spatial separation. It should be possible to build some sort of adapter that holds the devices apart; the only remaining change would then be to bind the same MAC address to both clients.
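A rough sketch of how the sample streams of both Pis, reporting under the same MAC address, could be merged before trilateration; the server names, data shapes, and dBm values are invented for illustration:

```python
from statistics import mean

# Hypothetical RSSI samples (dBm) collected by two co-located clients.
samples_pi_a = {"server1": [-61, -63], "server2": [-70]}
samples_pi_b = {"server1": [-62], "server2": [-71, -69]}

def merge(a, b):
    """Concatenate the per-server sample lists of both clients."""
    return {s: a.get(s, []) + b.get(s, []) for s in set(a) | set(b)}

combined = merge(samples_pi_a, samples_pi_b)
averaged = {s: mean(vals) for s, vals in combined.items()}
print(averaged)
```

The backend then sees twice as many samples per server and interval, which should damp the per-sample fluctuations.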

## Post processing of coordinates

Currently the Python trilateration client refreshes the position every 100 ms. Outputting an estimate only once a second is completely sufficient, so it is viable to derive each published position from the many high-refresh-rate submeasurements collected in between.
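One simple way to do this would be a per-axis median over the roughly ten submeasurements collected within each second, which also damps outliers; a sketch with invented position data:

```python
from statistics import median

# Ten hypothetical 100 ms position estimates (x, y) in metres,
# including one obvious outlier at x = 5.8.
submeasurements = [
    (2.1, 3.0), (2.3, 2.9), (2.2, 3.1), (5.8, 3.0), (2.2, 3.0),
    (2.1, 2.8), (2.4, 3.2), (2.3, 3.0), (2.2, 2.9), (2.2, 3.1),
]

def aggregate(points):
    """Collapse many fast estimates into one robust per-axis median."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (median(xs), median(ys))

print(aggregate(submeasurements))
```

Unlike a plain mean, the median ignores the single outlier almost entirely.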

# Further actions

This week I will try to implement at least the top two points listed above. I think both are viable and can realistically improve the accuracy and performance of the system. Since I am now well ahead of schedule, I have to decide whether to introduce Kalman filtering in the Python trilateration backend or move forward directly with the GUI. This largely depends on how hard a Kalman filter is to implement, so some more research is required here. The merit of doing it is clear though: filtering works very differently from trilateration and might therefore be able to catch its inaccuracies.
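As a starting point for that research, here is a minimal 1-D Kalman filter (constant-position model) that could smooth a single coordinate axis of the trilateration output; the process and measurement noise values are assumptions, not calibrated figures:

```python
def kalman_1d(measurements, q=0.01, r=4.0):
    """q: process noise variance, r: measurement noise variance (assumed)."""
    x, p = measurements[0], 1.0  # initial state estimate and variance
    out = []
    for z in measurements:
        p = p + q               # predict: state unchanged, uncertainty grows
        k = p / (p + r)         # Kalman gain
        x = x + k * (z - x)     # update toward the measurement z
        p = (1 - k) * p         # reduce uncertainty after the update
        out.append(x)
    return out

# Invented noisy x-coordinates with one spike at 5.0.
noisy = [2.0, 2.4, 1.8, 2.2, 2.1, 5.0, 2.0, 2.2]
print([round(v, 2) for v in kalman_1d(noisy)])
```

A 2-D version with a velocity state would be the next step; the key property is already visible here, namely that a single-sample spike barely moves the filtered estimate.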

Last Updated: 11/23/2020, 8:10:00 AM