How to stabilize the RSSI (Received Signal Strength Indicator) of low energy Bluetooth beacons (BLE) for more accurate distance calculation?
We are trying to develop an indoor navigation system and have run into the problem that the RSSI fluctuates so much that the distance estimation is nowhere near the correct value. We tried using an advanced averaging calculation, but to no avail.
The device is constantly receiving RSSI values. How do I filter them, and how do I get a mean value? I am completely lost, please help.
Can anyone suggest an npm library or point me in the right direction? I have been searching for many days but have not gotten anywhere.
FRONT END: React Native. BACKEND: Node.js.
In addition to @davidgyoung's answer, we would like to point out that any filtering method is a compromise between the quality of noise reduction and the time lag introduced by that filtering (depending on the characteristic filtering time you use in your method). As @davidgyoung pointed out, if you take a characteristic filtering period T, you will get an average time lag of about T/2.
Thus, I think the best approach to your problem is not to search for the best filtering method but to make changes on the transmitter's end itself.
First, you can increase the number of signals the transmitter sends per second (most modern beacons allow you to do so through the manufacturer's applications and APIs).
Secondly, you can increase the beacon's transmit power (which is also usually one of the beacon's settings), which generally improves the signal-to-noise ratio.
Finally, you can compare beacons from different vendors. At Navigine we experimented with and tested lots of different beacons from multiple manufacturers, and it appears that the signal-to-noise ratio can vary significantly among manufacturers. From our side, we recommend taking a look at kontakt.io beacons (https://kontakt.io/) as one of the recognized leaders with 5+ years of experience in the area.
It is unlikely that you will find a pre-built package that will do what you want, as your needs are pretty specific. You will most likely have to write your own filtering code.
A key challenge is deciding on the parameters of your filtering, as an indoor nav use case is often affected by time lag. If you average RSSI over 30 seconds, for example, the output of your filter will effectively give you the RSSI of where a moving object was, on average, 15 seconds ago. This may be inappropriate for your use case if you are dealing with moving objects. Reducing the averaging interval to 5 seconds might help, but it still introduces time lag while smoothing the noise less. A filter called an auto-regressive moving average (ARMA) filter might be a good choice, but I only have an implementation in Java, so you would need to translate it to JavaScript.
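As a rough illustration of the idea (not a translation of that Java code), a minimal exponentially weighted running-average filter in TypeScript could look like the sketch below; the class name and the `alpha` smoothing factor are purely illustrative choices:

```typescript
// Minimal sketch of a running-average RSSI filter (illustrative only).
// `alpha` controls the trade-off: larger alpha means less smoothing but less lag.
class RssiFilter {
  private estimate: number | null = null;

  constructor(private readonly alpha: number = 0.1) {}

  // Feed each raw RSSI reading in; returns the smoothed value.
  addMeasurement(rssi: number): number {
    if (this.estimate === null) {
      this.estimate = rssi;
    } else {
      this.estimate = this.estimate + this.alpha * (rssi - this.estimate);
    }
    return this.estimate;
  }
}

// Usage: feed readings as they arrive from the BLE scanner.
const filter = new RssiFilter(0.1);
const raw = [-68, -75, -61, -70, -66];
for (const r of raw) {
  console.log(filter.addMeasurement(r).toFixed(1));
}
```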
Finally, do not expect a filter to solve all your problems. Even if you smooth out the noise on the RSSI you may find that the distance estimates are not accurate enough for your use case. Make sure you understand the limits of what is possible with this technology. I wrote a deep dive on this topic here.
Related
I have a set of 12 IMUs mounted on the end effector of my robot arm, which I read using a microcontroller to determine its movement. With my controller I can read two sensors simultaneously using direct memory access. After acquiring the measurements I would like to fuse them to make up for the sensor error and generate a more reliable reading than a single sensor would give.
After some research, my understanding is that I can use a Kalman filter to reach my desired outcome, but I still have the problem that the sensor values all carry different time stamps: since I can read only two at a time, even if both time stamps were synchronized perfectly, the next pair will have a different time stamp, if only in the µs range.
I know controls engineering principles but am completely new to the topic of sensor fusion, and Google presents me with too many results to find a solution in a reasonable amount of time.
Therefore my question: can anybody point me in the right direction by naming a keyword I should look for, or literature I should work through, to better understand the topic?
Thank you!
The topic you are dealing with is not an easy one. Have a look at multi-rate Kalman filters.
The idea is that you design a different Kalman filter for each combination of sensors that can be available at the same time, use the appropriate filter whenever you have data from those sensors, and pass the system state between the various filters.
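To make the shared-state idea a bit more concrete, here is a deliberately simplified 1D sketch in TypeScript: a single state estimate is updated by whichever sensor delivers a reading, each with its own measurement noise and time step. A real IMU fusion filter would use a multi-dimensional state (orientation, velocity, bias terms), so treat this only as an illustration of the asynchronous-update structure:

```typescript
// Illustrative 1D sketch of the shared-state idea: one state estimate,
// updated by whichever sensor happens to deliver data, each with its own
// measurement noise. A real IMU fusion filter would use a multi-dimensional
// state instead.
interface Measurement {
  value: number;     // sensor reading
  noiseVar: number;  // measurement noise variance for this sensor
  dt: number;        // time since the previous update, in seconds
}

class SharedStateKalman {
  private x = 0;       // state estimate
  private p = 1;       // estimate variance
  constructor(private readonly processVar = 0.01) {}

  update(m: Measurement): number {
    // Predict: state assumed constant, uncertainty grows with elapsed time.
    this.p += this.processVar * m.dt;
    // Update with this sensor's reading.
    const k = this.p / (this.p + m.noiseVar);   // Kalman gain
    this.x += k * (m.value - this.x);
    this.p *= 1 - k;
    return this.x;
  }
}

// Sensors arriving at different times/rates still update the same state.
const fused = new SharedStateKalman();
fused.update({ value: 1.02, noiseVar: 0.05, dt: 0.001 });  // sensor A
fused.update({ value: 0.98, noiseVar: 0.08, dt: 0.0012 }); // sensor B
console.log(fused.update({ value: 1.01, noiseVar: 0.05, dt: 0.001 }));
```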
I'm developing an Android application where we use the OBDII to read car's engine parameters. Currently we are obtaining speed (kmh), engine's RPM and mass airflow live while driving the car.
We now have to find a way, using these parameters obtained from OBDII, to determine which gear is engaged at the moment.
I thought about simply specifying that, for instance, at a given RPM level and a given speed the car is driving in a particular gear, but I think that will not do the trick.
Maybe some of you have experience in this field; I would be very grateful for any help!
Without any prior physics knowledge (i.e. knowing if there is any formula to calculate the gear from a set of input parameters), you could make use of the area of data mining.
You just collect a lot of data (including the gear!) and then check whether it is possible to find a formula, based on the channels you think might be relevant, that gives you the correct gear often enough (how often is your choice; it might be 90% or 99%).
Apart from this, I'd say that it is quite hard to find a formula valid for every car with your input parameters (engine rpm, air mass flow, speed in km/h). The problem is that the actual speed in km/h is also dependent on the wheel size and similar factors. And I do not know whether all transmissions use the same gear ratios (probably not, because there are transmissions with 5 gears and ones with 6 gears). Thus, a given rpm measured at the engine might correspond to totally different rpm at the wheels, which might have totally different diameters per car.
So for the future users dealing with the same problem:
It is actually sufficient to divide the current speed by the current RPM. By doing so, while driving in a particular gear you will get a constant value (plus or minus some small error).
It means that if you have 5 gears in your car, you can ask the user to calibrate the device in each gear: while the user is driving in a particular gear (it does not matter at which RPM/speed level), dividing the current speed by the current RPM will give you a constant value. If you compute this constant at different "levels" of RPM and speed in a particular gear, it will vary by some small error (let's say the error is 0.001). So, for example, in 2nd gear you will get constant = 0.02 +- 0.001. Having defined such a constant for each gear, you can almost certainly classify which gear you are driving in at the moment by simply dividing the current speed by the current RPM and checking which gear's constant (+- the error value) the result fits.
I tested it - it works perfectly fine ;).
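For future readers, a minimal sketch of that classification in TypeScript might look as follows; the per-gear ratio constants and the tolerance are placeholders you would obtain from the calibration drive described above:

```typescript
// Sketch of the speed/RPM classification described above. The constants
// below are placeholders; in practice you obtain them from a per-gear
// calibration drive.
const gearRatios: { gear: number; ratio: number }[] = [
  { gear: 1, ratio: 0.010 },
  { gear: 2, ratio: 0.020 },
  { gear: 3, ratio: 0.030 },
  { gear: 4, ratio: 0.040 },
  { gear: 5, ratio: 0.050 },
];
const tolerance = 0.001;

function detectGear(speedKmh: number, rpm: number): number | null {
  if (rpm <= 0) return null;                 // engine off or bad reading
  const ratio = speedKmh / rpm;
  const match = gearRatios.find(g => Math.abs(g.ratio - ratio) <= tolerance);
  return match ? match.gear : null;          // null = clutch in, neutral, shifting
}

console.log(detectGear(40, 2000)); // ratio 0.020 -> 2nd gear with these placeholders
```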
We're building a GIS interface to display GPS track data, e.g. imagine the raw data set from a guy wandering around a neighborhood on a bike for an hour. A data set like this, with perhaps a new point recorded every 5 seconds, will be large, and displaying it in a browser or on a handheld device will be challenging. Also, displaying every single point is usually not necessary, since a user can't visually resolve that much data anyway.
So for performance reasons we are looking for algorithms that are good at 'reducing' data like this so that the number of points being displayed is reduced significantly but in such a way that it doesn't risk data mis-interpretation. For example, if our fictional bike rider stops for a drink, we certainly don't want to draw 100 lat/lon points in a cluster around the 7-Eleven.
We are aware of clustering, which is good when looking at a bunch of disconnected points; however, what we need is something that applies to tracks as described above. Thanks.
A more scientific and perhaps more math-heavy solution is to use the Ramer-Douglas-Peucker algorithm to generalize your path. I used it when I studied for my Master of Surveying, so it's a proven thing. :-)
Given your path and the maximum deviation you can tolerate from it, the algorithm simplifies the path by reducing the number of points.
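If you want to roll it yourself rather than pull in a library, a straightforward (untuned) TypeScript implementation of Ramer-Douglas-Peucker could look like the sketch below; it treats points as planar x/y values, so for raw lat/lon you would normally project to metres first:

```typescript
// Illustrative Ramer-Douglas-Peucker implementation. Points are treated as
// planar (x, y); for raw lat/lon you would normally project to metres first.
type Point = { x: number; y: number };

function perpendicularDistance(p: Point, a: Point, b: Point): number {
  const dx = b.x - a.x, dy = b.y - a.y;
  const len = Math.hypot(dx, dy);
  if (len === 0) return Math.hypot(p.x - a.x, p.y - a.y);
  // Distance from p to the infinite line through a and b.
  return Math.abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}

function rdp(points: Point[], tolerance: number): Point[] {
  if (points.length < 3) return points.slice();
  let maxDist = 0, index = 0;
  const first = points[0], last = points[points.length - 1];
  for (let i = 1; i < points.length - 1; i++) {
    const d = perpendicularDistance(points[i], first, last);
    if (d > maxDist) { maxDist = d; index = i; }
  }
  if (maxDist <= tolerance) return [first, last];  // everything in between can go
  // Keep the farthest point and recurse on both halves.
  const left = rdp(points.slice(0, index + 1), tolerance);
  const right = rdp(points.slice(index), tolerance);
  return left.slice(0, -1).concat(right);
}
```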
Typically the best way of doing that is:
1. Determine the minimum number of screen pixels you want between displayed GPS points.
2. Determine the distance represented by each pixel at the current zoom level.
3. Multiply answer 1 by answer 2 to get the minimum distance you want between displayed coordinates.
4. Starting from the first coordinate in the journey path, read each subsequent coordinate until you have reached the required minimum distance from the current point. Repeat (see the sketch below).
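A rough sketch of step 4 in TypeScript (with the threshold from step 3 passed in as `minDistance`, in metres) might be:

```typescript
// Sketch of the pixel-distance thinning described above. `minDistance` is
// answer 1 multiplied by answer 2, expressed in metres.
type Fix = { lat: number; lon: number };

function haversineMeters(a: Fix, b: Fix): number {
  const R = 6371000, toRad = Math.PI / 180;
  const dLat = (b.lat - a.lat) * toRad, dLon = (b.lon - a.lon) * toRad;
  const h = Math.sin(dLat / 2) ** 2 +
            Math.cos(a.lat * toRad) * Math.cos(b.lat * toRad) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

function thinTrack(track: Fix[], minDistance: number): Fix[] {
  if (track.length === 0) return [];
  const result = [track[0]];
  for (const fix of track.slice(1)) {
    // Keep a point only once it is far enough from the last kept point.
    if (haversineMeters(result[result.length - 1], fix) >= minDistance) {
      result.push(fix);
    }
  }
  return result;
}
```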
Tried doing a bit of research on the following with no luck. Thought I'd ask here in case someone has come across it before.
I help a volunteer-run radio station with their technology needs. One of the main things that has come up is that they would like to schedule their advertising programmatically.
There are a lot of neat and complex rule engines out there for advertising, but all we need is something pretty simple (along with any experience that's worth thinking about).
I would like to write something in SQL if possible to deal with these entities. Ideally if someone has written something like this for other advertising mediums (web, etc.,) it would be really helpful.
Entities:
Ads (consisting of a category, # of plays per day, start date, end date or permanent play)
Ad Category (Restaurant, Health, Food store, etc.)
To over-simplify the problem, this will be an elegant SQL statement. Getting there... :)
I would like to be able to generate a playlist per day using the above two entities where:
No two ads in the same category are played within x number of ads of each other.
(nice to have) high promotion ads can be pushed
At this time, there are no "ad slots" to fill. There is no "time of day" considerations.
We queue up the ads for the day and go through them between songs/shows, etc. We know how many per hour we have to fill, etc.
Any thoughts/ideas/links/examples? I'm going to keep on looking and hopefully come across something instead of learning it the long way.
Very interesting question, SMO. Right now it looks like a constraint programming problem, because you aren't looking for an optimal solution, just one that satisfies all the constraints you have specified. In response to those who wanted to close the question, I'd say they need to check out constraint programming a bit. It's far closer to Stack Overflow than to any operations research site.
Look into constraint programming and scheduling - I'll bet you'll find an analogous problem toot sweet !
Keep us posted on your progress, please.
Ignoring the T-SQL request for the moment since that's unlikely to be the best language to write this in ...
One of my favorite approaches to tough 'layout' problems like this is simulated annealing. It's a good approach because you don't need to think about HOW to solve the actual problem: all you define is a measure of how good the current layout is (a score, if you will), and then you allow random changes that either increase or decrease that score. Over many iterations you gradually reduce the probability of moving to a worse score. This 'simulated annealing' approach reduces the probability of getting stuck in a local minimum.
So in your case the scoring function for a given layout might be based on the distance to the next advert in the same category and the distance to another advert of the same series. If you later have time of day considerations you can easily add them to the score function.
Initially you allocate the adverts sequentially, evenly or randomly within their time window (it doesn't really matter which). Now you pick two slots and consider what happens to the score when you switch the contents of those two slots. If either advert moves out of its allowed range, you can reject the change immediately. If both are still in range, does it move you to a better overall score? Initially you accept changes at random even if they make the score worse, but over time you reduce the probability of that happening, so that by the end you are moving monotonically towards a better score.
Easy to implement, easy to add new 'rules' that affect score, can easily adjust run-time to accept a 'good enough' answer, ...
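As a rough illustration only, a bare-bones version of that loop in TypeScript might look like the following; the scoring function here only penalises same-category ads that fall within `minGap` slots of each other, and everything else (series spacing, promotion weighting, time windows) would be added to `score`:

```typescript
// Hedged sketch of the simulated-annealing idea for the ad playlist.
type Ad = { id: string; category: string };

// Higher score = better layout. Each pair of same-category ads closer than
// `minGap` slots apart costs a point.
function score(playlist: Ad[], minGap: number): number {
  let s = 0;
  for (let i = 0; i < playlist.length; i++) {
    for (let j = i + 1; j < Math.min(i + minGap, playlist.length); j++) {
      if (playlist[i].category === playlist[j].category) s -= 1;
    }
  }
  return s;
}

function anneal(initial: Ad[], minGap: number, iterations = 50000): Ad[] {
  const current = initial.slice();
  let currentScore = score(current, minGap);
  let temperature = 1.0;
  const cooling = 0.9999;
  for (let it = 0; it < iterations; it++) {
    const i = Math.floor(Math.random() * current.length);
    const j = Math.floor(Math.random() * current.length);
    [current[i], current[j]] = [current[j], current[i]];      // try a swap
    const newScore = score(current, minGap);
    const delta = newScore - currentScore;
    // Accept improvements always; accept worse layouts with a probability
    // that shrinks as the temperature drops.
    if (delta >= 0 || Math.random() < Math.exp(delta / temperature)) {
      currentScore = newScore;
    } else {
      [current[i], current[j]] = [current[j], current[i]];    // undo the swap
    }
    temperature *= cooling;
  }
  return current;
}
```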
Another approach would be to use a genetic algorithm; see this similar question: Best Fit Scheduling Algorithm. This is likely harder to program but will probably converge more quickly on a good answer.
I'm using GPS units and mobile computers to track individual pedestrians' travels. I'd like to "clean" the incoming GPS signal in real time to improve its accuracy. Also, after the fact, and not necessarily in real time, I would like to "lock" individuals' GPS fixes to positions along a road network. Do you have any techniques, resources, algorithms, or existing software to suggest on either front?
A few things I am already considering in terms of signal cleaning:
- drop fixes for which num. of satellites = 0
- drop fixes for which speed is unnaturally high (say, 600 mph)
And in terms of "locking" to the street network (which I hear is called "map matching"):
- lock to the nearest network edge based on root mean squared error
- when fixes are far away from road network, highlight those points and allow user to use a GUI (OpenLayers in a Web browser, say) to drag, snap, and drop on to the road network
Thanks for your ideas!
I assume you want to "clean" your data to remove erroneous spikes caused by dodgy readings. This is a basic DSP process. There are several approaches you could take; it depends how clever you want it to be.
At a basic level, yes, you can just look for really large figures, but what is a really large figure? Yes, 600 mph is fast, but not if you're on Concorde. Whilst you are looking for a value that is "out of the ordinary", you are effectively hard-coding "ordinary". A better approach is to examine past data to determine what "ordinary" is, and then look for deviations. You might want to calculate the variance of the data over a small local window and then check whether the z-score of the current data point exceeds some threshold; if so, exclude it.
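A minimal sketch of that sliding-window z-score test in TypeScript, with illustrative window size and threshold values, might be:

```typescript
// Sketch of the sliding-window z-score idea: compare each new speed reading
// to the mean and standard deviation of the last `windowSize` accepted
// readings and drop it if it deviates too much. Parameter values are
// illustrative.
function makeSpikeFilter(windowSize = 20, zThreshold = 3) {
  const window: number[] = [];
  return function accept(value: number): boolean {
    if (window.length >= 3) {
      const mean = window.reduce((a, b) => a + b, 0) / window.length;
      const variance =
        window.reduce((a, b) => a + (b - mean) ** 2, 0) / window.length;
      const std = Math.sqrt(variance);
      if (std > 0 && Math.abs(value - mean) / std > zThreshold) {
        return false;                 // reject the spike, keep the window as-is
      }
    }
    window.push(value);
    if (window.length > windowSize) window.shift();
    return true;                      // value accepted
  };
}

const acceptSpeed = makeSpikeFilter();
[3.1, 2.9, 3.0, 3.2, 250, 3.1].forEach(v => console.log(v, acceptSpeed(v)));
```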
One note: you should use 3 as the minimum number of satellites, not 0. A GPS receiver needs at least three satellites to calculate a horizontal position. Every GPS I have used includes a status flag in the data stream; fewer than 3 satellites is reported as "bad" data in some way.
You should also consider "stationary" data. How will you handle the pedestrian standing still for some period of time? Perhaps waiting at a crosswalk or interacting with a street vendor?
Depending on what you plan to do with the data, you may need to suppress those extra data points or average them into a single point or location.
You mention this is for pedestrian tracking, but you also mention a road network. Pedestrians can travel a lot of places where a car cannot, and, indeed, which probably are not going to be on any map you find of a "road network". Most road maps don't have things like walking paths in parks, hiking trails, and so forth. Don't assume that "off the road network" means the GPS isn't getting an accurate fix.
In addition to Andrew's comments, you may also want to consider interference factors such as multipath and how they show up in your incoming GPS data stream, e.g. as HDOP values in the GSA sentence of NMEA 0183. In my own GPS controller software, I allow user-specified rejection criteria against a range of QA-related parameters.
I also tend to work on a moving window principle in this regard, where you can consider rejecting data that represents a spike based on surrounding data in the same window.
Read the position-fix field to see if the signal is valid (it's in the $GPGGA sentence if you parse raw NMEA strings). If it's 0, ignore the message.
Besides that you could look at the combination of HDOP and the number of satellites if you really need to be sure that the signal is very accurate, but in normal situations that shouldn't be necessary.
Of course it doesn't hurt to do some sanity checks on GPS signals:
latitude between -90..90;
longitude between -180..180 (or E..W, N..S, 0..90 and 0..180 if you're reading raw NMEA strings);
speed between 0 and 255 (for normal cars);
distance to the previous measurement (based on lat/lon) roughly matches the indicated speed;
time difference with the system time is not larger than x (unless the system clock cannot be trusted or relies on GPS synchronisation :-) );
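As an illustration of the checks listed above, a sketch in TypeScript could look like this; the field names and limits are placeholders to adapt to whatever your GPS parser produces:

```typescript
// Hedged sketch of the sanity checks listed above. Field names and limits
// are illustrative; adjust them to your own parser and use case.
interface GpsFix {
  lat: number;          // degrees
  lon: number;          // degrees
  speedKmh: number;
  time: number;         // unix ms, as reported by the receiver
}

function isPlausible(fix: GpsFix, previous: GpsFix | null, systemTime: number): boolean {
  if (fix.lat < -90 || fix.lat > 90) return false;
  if (fix.lon < -180 || fix.lon > 180) return false;
  if (fix.speedKmh < 0 || fix.speedKmh > 255) return false;       // "normal cars"
  if (Math.abs(fix.time - systemTime) > 60_000) return false;     // clock sanity
  if (previous) {
    // Implied speed between fixes should roughly match the reported speed.
    const dtH = (fix.time - previous.time) / 3_600_000;
    if (dtH > 0) {
      const impliedKmh = haversineKm(previous, fix) / dtH;
      if (impliedKmh > 2 * Math.max(fix.speedKmh, 10)) return false;
    }
  }
  return true;
}

function haversineKm(a: GpsFix, b: GpsFix): number {
  const R = 6371, toRad = Math.PI / 180;
  const dLat = (b.lat - a.lat) * toRad, dLon = (b.lon - a.lon) * toRad;
  const h = Math.sin(dLat / 2) ** 2 +
            Math.cos(a.lat * toRad) * Math.cos(b.lat * toRad) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}
```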
To do map matching, you basically iterate through your road segments and check which segment is the most likely one for your current position, direction and speed, possibly also taking previous GPS measurements and matches into account.
If you're not doing a realtime application, or if a delay in feedback is acceptable, you can even look into the 'future' to see which segment is the most likely.
Doing all that properly is an art by itself, and this space here is too short to go into it deeply.
It's often difficult to decide with 100% confidence which road segment somebody is on. For example, if there are two parallel roads that are equally close to the current position, it becomes a matter of creative heuristics.
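As a starting point for the "nearest segment" part (ignoring heading, speed and history for the moment), a minimal TypeScript sketch might be:

```typescript
// Minimal sketch of the nearest-segment idea for map matching. Real map
// matching also weighs heading, speed, and previous matches, as noted above.
type Pt = { x: number; y: number };                 // projected coordinates, metres
type Segment = { id: string; a: Pt; b: Pt };

// Distance from point p to the finite segment (a, b).
function pointSegmentDistance(p: Pt, a: Pt, b: Pt): number {
  const dx = b.x - a.x, dy = b.y - a.y;
  const lenSq = dx * dx + dy * dy;
  const t = lenSq === 0 ? 0 :
    Math.max(0, Math.min(1, ((p.x - a.x) * dx + (p.y - a.y) * dy) / lenSq));
  const proj = { x: a.x + t * dx, y: a.y + t * dy };
  return Math.hypot(p.x - proj.x, p.y - proj.y);
}

function nearestSegment(p: Pt, network: Segment[]): { segment: Segment; distance: number } | null {
  let best: { segment: Segment; distance: number } | null = null;
  for (const seg of network) {
    const d = pointSegmentDistance(p, seg.a, seg.b);
    if (!best || d < best.distance) best = { segment: seg, distance: d };
  }
  return best;   // caller can reject matches whose distance exceeds a threshold
}
```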