A controller reads two different position signals over CAN, each arriving every 20 ms. To confirm a position it needs to take 5 samples, giving an overall debounce time of 100 ms. How should the position be calculated when all 5 values vary slightly? Is there any standard logic to handle this kind of scenario?
Example: if the values are 5, 5.1, 5.2, 5.3, 5.4, then the average is 5.2. That is not a good method, because by then the actual position has already crossed 5.4.
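To illustrate the kind of logic I'm asking about, here is a sketch of one approach I've been considering (the class name and tolerance value are placeholders, not production values): only confirm the position once all 5 samples sit within a tolerance band, then report the newest sample instead of the mean, so the output does not lag behind a moving signal.

    // Sketch only: confirm the position when 5 consecutive samples agree within
    // a tolerance band, then report the newest sample (no averaging lag).
    // The tolerance value is an illustrative placeholder.
    public final class PositionDebouncer {
        private static final int SAMPLE_COUNT = 5;    // 5 samples * 20 ms = 100 ms debounce
        private static final double TOLERANCE = 0.5;  // max allowed spread (placeholder)

        private final double[] samples = new double[SAMPLE_COUNT];
        private int received = 0;

        /** Feed one sample every 20 ms; returns the confirmed position, or NaN while unstable. */
        public double addSample(double value) {
            samples[received % SAMPLE_COUNT] = value;
            received++;
            if (received < SAMPLE_COUNT) {
                return Double.NaN;                    // not enough samples yet
            }
            double min = Double.POSITIVE_INFINITY;
            double max = Double.NEGATIVE_INFINITY;
            for (double s : samples) {
                min = Math.min(min, s);
                max = Math.max(max, s);
            }
            if (max - min > TOLERANCE) {
                return Double.NaN;                    // still moving or too noisy to confirm
            }
            return samples[(received - 1) % SAMPLE_COUNT]; // newest confirmed sample
        }
    }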
Please suggest an approach for this.
Is there currently a way to incorporate traffic patterns into OptaPlanner for the pickup and delivery VRP problem?
E.g. let's say I need to optimize 500 pickups and deliveries today and tomorrow amongst 30 vehicles, where each pickup has a 1-4 hour time window. I want to avoid busy areas of the city during rush hours when possible.
New pickups can also be added (or cancelled) in the meantime.
I'm sure this is a common problem. Does a decent solution exist for this in OptaPlanner?
Thanks!
Users often do this, but there is no out-of-the-box example of it.
There are several ways to do it, but one way is to add a third dimension to the distanceMatrix, indicating the departureTime. Typically that uses a granularity of 15 minutes, 30 minutes or 1 hour.
There are 2 scaling concerns here:
Memory: a 15-minute granularity means 24 * 4 = 96 time buckets per day. Given that a 2-dimensional distanceMatrix for 10k locations already uses almost 2 GB of RAM, multiplying that by 96 buckets (well over 100 GB) clearly makes memory a concern.
Pre-calculation time: calculating the distance matrix can be time-consuming. "Bulk algorithms" can help here. For example, the GraphHopper community edition doesn't support bulk distance calculations, but their enterprise version does, as does OSRM (which is free). Getting a 3-dimensional matrix from the remote Google Maps API, or the remote enterprise GraphHopper API, can also raise bandwidth concerns (see above: the distance matrix can become several GB in size, especially in non-binary formats such as JSON or CSV).
In any case, once that 3-dimensional matrix is there, it's just a matter of adjusting the OptaPlanner example's ArrivalTimeUpdateListener to use getDistance(from, to, departureTime).
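To make that concrete, here is a minimal sketch of such a time-bucketed lookup. This is not OptaPlanner API: the class, its field names and the 15-minute bucket size are assumptions. A variable listener like the example's ArrivalTimeUpdateListener would simply call getDistance(...) with the departure time's bucket.

    // Sketch of a 3-dimensional, time-bucketed distance matrix (names assumed).
    public final class TimedDistanceMatrix {
        private static final int BUCKET_MINUTES = 15;
        private static final int BUCKETS_PER_DAY = 24 * 60 / BUCKET_MINUTES; // 96

        // distances[from][to][bucket], e.g. in seconds of driving time
        private final long[][][] distances;

        public TimedDistanceMatrix(int locationCount) {
            this.distances = new long[locationCount][locationCount][BUCKETS_PER_DAY];
        }

        public long getDistance(int fromIndex, int toIndex, int departureMinuteOfDay) {
            int bucket = (departureMinuteOfDay / BUCKET_MINUTES) % BUCKETS_PER_DAY;
            return distances[fromIndex][toIndex][bucket];
        }
    }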
I am using OptaPlanner 8.4.1 and the constraint streams API.
Does OptaPlanner have a way to output the best solution based on simple constraints, and then feed that solution into the complex constraints to find the overall best solution? Could this reduce the time it takes to reach the best solution?
My constraints are too complex and slow, especially where they use the groupBy and toList methods. My current logic involves checking for consecutive runs (substrings).
For example: there are 10 positions and 10 balls with serial numbers. A ball's color can be white, black or red. The 10 balls need to be lined up in the 10 positions, and the white balls must be grouped together; the number of white balls ranges from 4 to 6. The position here is the planning entity and the ball is the planning variable, and we need to count how many consecutive white balls there are. Since constraint streams currently only support up to 4 facts per stream, this can only be expressed in the form of groupBy and toList.
You can't do that with a single solver currently, but you should be able to do it by running 2 Solvers (with different SolverConfigs) one after another. For your example, you could have a BasicConstraintProvider with the hard constraints and a FullConstraintProvider that extends the basic one and adds the soft constraints.
That being said, I'd first invest time in improving the score calculation speed (see the info log or a benchmark report) to avoid such workarounds altogether. That number (score calculations per second) should be above 10 000.
Also, FIRST_FEASIBLE_FIT might be interesting to look at.
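A minimal sketch of that two-phase setup (MySolution, the config file names and loadProblem() are placeholders; each solver config would point its scoreDirectorFactory at a different ConstraintProvider):

    import org.optaplanner.core.api.solver.SolverFactory;

    public class TwoPhaseSolving {
        public static MySolution solveInTwoPhases(MySolution problem) {
            // Phase 1: hard constraints only (config uses BasicConstraintProvider).
            SolverFactory<MySolution> basicFactory =
                    SolverFactory.createFromXmlResource("basicSolverConfig.xml");
            MySolution feasible = basicFactory.buildSolver().solve(problem);

            // Phase 2: hard + soft constraints (config uses FullConstraintProvider).
            // The feasible solution of phase 1 becomes the problem of phase 2.
            SolverFactory<MySolution> fullFactory =
                    SolverFactory.createFromXmlResource("fullSolverConfig.xml");
            return fullFactory.buildSolver().solve(feasible);
        }
    }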
I don't know the details of your problem, but what I am doing is:
Run the solver for some time (maybe even using a reduced set of constraints), then serialize the solution it provides.
Run the solver a second time (possibly with different settings), providing the previous solution as the problem; it will keep looking for a better one.
It works for me in version 8.30.
Maybe it will help somebody else.
How to stabilize the RSSI (Received Signal Strength Indicator) of Bluetooth Low Energy (BLE) beacons for more accurate distance calculation?
We are trying to develop an indoor navigation system and came across this problem: the RSSI is fluctuating so much that the distance estimation is nowhere near the correct value. We tried using an advanced averaging calculator, but to no avail.
The device is constantly receiving RSSI values. How do we filter them and get a meaningful mean value? I am completely lost; please help.
Can anyone suggest an npm library or point me in the right direction? I have been searching for many days but have not gotten anywhere.
Front end: React Native. Back end: Node.js.
In addition to the answer of @davidgyoung, we would like to point out that any filtering method is a compromise between the quality of noise reduction and the time lag introduced by that filtering (depending on the characteristic filtering time your method uses). As @davidgyoung pointed out, if you take a characteristic filtering period T, you will get an average time lag of about T/2.
Thus, I think the best approach to solving your problem is not to try to find the best filtering method, but to make changes on the transmitter's end itself.
First, you can increase the number of signals the transmitter sends per second (most modern beacons allow this through the manufacturer's applications and API).
Secondly, you can increase the beacon's transmit power (also usually one of the beacon's settings), which usually improves the signal-to-noise ratio.
Finally, you can compare beacons from different vendors. At Navigine we have experimented with and tested lots of different beacons from multiple manufacturers, and it appears that the signal-to-noise ratio can vary significantly among manufacturers. For our part, we recommend taking a look at kontakt.io beacons (https://kontakt.io/) as one of the recognized leaders with 5+ years of experience in the area.
It is unlikely that you will find a pre-built package that will do what you want, as your needs are pretty specific. You will most likely have to write your own filtering code.
A key challenge is deciding on the parameters of your filtering, as an indoor nav use case is often impacted by time lag. If you average RSSI over 30 seconds, for example, the output of your filter will effectively give you the RSSI of where a moving object was, on average, 15 seconds ago. This may be inappropriate for your use case if you are dealing with moving objects. Reducing the averaging interval to 5 seconds might help, but still introduces time lag while reducing the smoothing of noise. A filter called an Auto-Regressive Moving Average (ARMA) filter might be a good choice, but I only have an implementation in Java, so you would need to translate it to JavaScript.
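For reference, the core of such a filter is only a few lines. Here is a sketch of the idea in Java (the coefficient is a tuning placeholder, not a recommended value): nudge a running estimate a fixed fraction of the way toward each new reading, which smooths noise at the cost of some lag.

    // Sketch of an ARMA-style RSSI smoother; the coefficient trades smoothing
    // against time lag (lower = smoother but laggier) and must be tuned.
    public final class ArmaRssiFilter {
        private final double coefficient; // e.g. 0.1 (placeholder)
        private double estimate;
        private boolean initialized = false;

        public ArmaRssiFilter(double coefficient) {
            this.coefficient = coefficient;
        }

        /** Feed each raw RSSI reading; returns the smoothed estimate. */
        public double addMeasurement(double rssi) {
            if (!initialized) {
                estimate = rssi;          // seed with the first reading
                initialized = true;
            } else {
                estimate += coefficient * (rssi - estimate);
            }
            return estimate;
        }
    }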
Finally, do not expect a filter to solve all your problems. Even if you smooth out the noise in the RSSI, you may find that the distance estimates are still not accurate enough for your use case. Make sure you understand the limits of what is possible with this technology. I wrote a deep dive on this topic here.
I've been using OxyPlot for a month now and I'm pretty happy with what it delivers. I'm getting data from an oscilloscope and, after some fast processing, I'm plotting it in real time to a graph.
However, if I compare my application's CPU usage to that of the application provided by the oscilloscope manufacturer, mine loads the CPU a lot more. Maybe they're using some GPU-based plotter, but I think I can reduce my CPU usage with some modifications.
I'm capturing 10,000 samples per second and adding them to a LineSeries. I'm not plotting all of that data; I'm decimating it to a constant number of points, let's say 80 points for a 20-second measurement, so I have 4 points/sec while fully zoomed out and a bit more detail if I zoom in to a specific range.
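Conceptually, my decimation step works like the simplified sketch below (not my exact code, and shown in Java for brevity even though the app is C#; the min/max-per-bucket variant is one common way to keep spikes visible after downsampling):

    // Bucket-based decimation: each bucket of raw samples contributes its min
    // and max, so short spikes survive. Assumes samples.length >= targetBuckets.
    static double[] decimateMinMax(double[] samples, int targetBuckets) {
        double[] out = new double[targetBuckets * 2];
        int bucketSize = samples.length / targetBuckets;
        for (int b = 0; b < targetBuckets; b++) {
            int start = b * bucketSize;
            int end = (b == targetBuckets - 1) ? samples.length : start + bucketSize;
            double min = Double.POSITIVE_INFINITY;
            double max = Double.NEGATIVE_INFINITY;
            for (int i = start; i < end; i++) {
                min = Math.min(min, samples[i]);
                max = Math.max(max, samples[i]);
            }
            out[2 * b] = min;
            out[2 * b + 1] = max;
        }
        return out;
    }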
With the aid of ReSharper, I've noticed that the application is calling the IsValidPoint method a huge number of times (something like 400,000,000 times across my 6 different plots), which is taking a lot of time.
I think the problem is that, when I add new points to the series, it validates every point in the series instead of only the newly added values.
Also, it spends a lot of time in the MeasureText/DrawText methods.
My question is: is there a way to override those methods and adapt them to my needs? I'm adding 10,000 new values each second, but the earlier ones remain the same, so there's no need to re-validate them. Also, the text shown doesn't change.
Thank you in advance for any advice you can give me. Have a good day!
Let's imagine there is a LED. It blinks on and off with a period of 0.5 seconds.
So from second 0 => 0.5 it is on, from second 0.5 => 1 it is off, and from second 1 => 1.5 it is on again.
Let's imagine I want to read that input with an outside camera (say, an iPhone camera).
What I do is:
1. Take the input stream, make an image out of it, and scan the image for the presence of a certain number of white pixels. If they are present, the LED is on and I write "1" to my file; if not, I write "0".
I read the input stream twice a second, so generally speaking, if everything goes well and my processing does not lag somewhere, I get good results. But imagine this:
0 => 0.5 LED is ON
0.49 => my camera reads info as "1"
0.5 => 1.0 LED is OFF
0.99 => my camera reads info as "0"
1.0 => 1.5 LED is ON
1.51 => my camera lagged and reads it as "0"
So we have data corruption here.
The question is: how do I synchronise the reading so that it preferably falls in the middle of this window, for a larger margin of error?
Also imagine I'm trying to do that 10 times per second; the window becomes even smaller.
What can I read on the topic? What can I do to make it more reliable?
One possible solution seems to be reading the input 4 times a second and deriving each bit from a group of 2 readings, as in the sketch below.
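A minimal sketch of that idea (a hypothetical helper; it assumes the strictly alternating on/off pattern described above): pair up adjacent samples, and when a pair disagrees it must straddle an on/off edge, so shifting the phase by one sample re-centres later pairs in the middle of each window.

    import java.util.ArrayList;
    import java.util.List;

    public class LedDecoder {
        /** samples: 0/1 readings taken 4 times a second (2 per 0.5 s LED window). */
        static List<Integer> decodeBits(int[] samples) {
            List<Integer> bits = new ArrayList<>();
            int i = 0;
            while (i + 1 < samples.length) {
                if (samples[i] == samples[i + 1]) {
                    bits.add(samples[i]); // both samples fall inside one window
                    i += 2;
                } else {
                    i += 1;               // pair straddled an edge: re-synchronise
                }
            }
            return bits;
        }
    }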
Sounds like you might want to read about ways of encoding timecode.
http://en.wikipedia.org/wiki/Timecode
LTC (audio signal): http://en.wikipedia.org/wiki/Linear_timecode
VITC (analogue video signal): http://en.wikipedia.org/wiki/Vertical_interval_timecode
Each of these transmits 80 bits of data at the chosen frame rate (say, 30fps). I’m not sure how you’d do it in your case.
Clearly, having a smaller window is more accurate. With LTC, as audio is often sampled at around 44kHz, it’s possible to get it almost exactly spot on.
If the iPhone camera can only take 2 photos per second, I wonder if you could try taking photos at a different interval (say, every 0.7 seconds), and somehow do the maths to work out if it should be on or off (as the LED is still alternating every 0.5 s). Over a period of a few seconds, it might be the same as sampling it every 0.1 seconds? (I'm just pulling numbers out of the sky, but I imagine you could work out something like that.)
Another thought: can you use the video from the camera instead of a sequence of photos? You might be able to get 30fps that way? (I’m not sure – haven’t looked into it) There might be improvements around this in iOS 6.0 too (something worth checking, if you’re a developer).