Correcting SLAM drift error using GPS measurements

I'm trying to figure out how to correct drift errors introduced by a SLAM method using GPS measurements. I have two point sets in Euclidean 3D space, taken at fixed moments in time:
The red dataset comes from GPS and contains no drift errors, while the blue dataset is based on a SLAM algorithm and drifts over time.
The idea is that SLAM is accurate over short distances but eventually drifts, while GPS is accurate over long distances and inaccurate over short ones. So I would like to figure out how to fuse the SLAM data with GPS in such a way that I get the best accuracy of both measurements. At the very least, how should I approach this problem?

Since your GPS looks like it is very locally biased, I'm assuming it is low-cost and doesn't use any correction techniques, e.g. that it is not differential. As you are probably aware, GPS errors are not Gaussian. The authors of this paper show that a good way to model GPS noise is as v + eps, where v is a locally constant "bias" vector (it is usually constant for a few meters, and then changes more or less smoothly or abruptly) and eps is Gaussian noise.
Given this information, one option would be to use Kalman-based fusion: e.g. you add the GPS noise and bias to the state vector, define your transition equations appropriately, and proceed as you would with an ordinary EKF. Note that if we ignore the prediction step of the Kalman filter, this is roughly equivalent to minimizing an error function of the form
measurement_constraints + some_weight * GPS_constraints
and that gives you a more straightforward second option. For example, if your SLAM is visual, you can just use the sum of squared reprojection errors (i.e. the bundle adjustment error) as the measurement constraints, and define your GPS constraints as ||x - x_{gps}||, where the x are 2D or 3D GPS positions (you might want to ignore the altitude with low-cost GPS).
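A minimal sketch of that second option, assuming a generic least-squares solver; `reprojection_residuals`, `slam_poses`, and `gps_positions` are hypothetical placeholders for your own SLAM data:

```python
import numpy as np
from scipy.optimize import least_squares

def fused_residuals(x_flat, gps_positions, gps_weight, reprojection_residuals):
    """Stack SLAM (bundle adjustment) residuals with weighted GPS residuals."""
    poses = x_flat.reshape(-1, 3)                 # camera/robot positions, one row per timestamp
    r_slam = reprojection_residuals(poses)        # hypothetical callback returning your BA residuals
    r_gps = np.sqrt(gps_weight) * (poses - gps_positions).ravel()  # the ||x - x_gps|| terms
    return np.concatenate([r_slam, r_gps])

# Usage sketch: start from the SLAM trajectory and let the GPS term pull it straight.
# result = least_squares(fused_residuals, slam_poses.ravel(),
#                        args=(gps_positions, 0.1, reprojection_residuals))
```

The weight plays the role of `some_weight` above: higher values trust the GPS track more at the expense of the reprojection error.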
If your SLAM is visual and feature-point based (you didn't really say what type of SLAM you were using, so I assume the most widespread type), then fusion with any of the methods above can lead to "inlier loss": you make a sudden, violent correction, the reprojection errors grow, and you lose inliers in SLAM's tracking. So you have to re-triangulate points, and so on. Plus, note that even though the paper I linked to above presents a model of the GPS errors, it is not a very accurate model, and assuming that the distribution of GPS errors is unimodal (necessary for the EKF) seems a bit adventurous to me.
So, I think a good option is to use barrier-term optimization. Basically, the idea is this: since you don't really know how to model GPS errors, assume that you have more confidence in SLAM locally, and minimize a function S(x) that captures the quality of your SLAM reconstruction. Let x_opt denote the minimizer of S. Then, fuse with GPS data as long as it does not deteriorate S(x_opt) more than a given threshold. Mathematically, you'd want to minimize
some_coef/(thresh - S(x)) + ||x - x_{gps}||
and you'd initialize the minimization with x_opt. A good choice for S is the bundle adjustment error, since by not degrading it, you prevent inlier loss. There are other choices of S in the literature, but they are usually meant to reduce computational time and add little in terms of accuracy.
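A rough sketch of that barrier objective, assuming S is your bundle adjustment error wrapped as a scalar function (`slam_error` and `x_opt` here are hypothetical placeholders):

```python
import numpy as np
from scipy.optimize import minimize

def barrier_objective(x, slam_error, thresh, gps_positions, some_coef=1.0):
    """Barrier term keeps S(x) below thresh; the GPS term pulls toward the GPS track."""
    s = slam_error(x)                  # hypothetical: scalar bundle adjustment error
    if s >= thresh:
        return np.inf                  # outside the barrier: reject this point outright
    gps_term = np.linalg.norm(x.reshape(-1, 3) - gps_positions)
    return some_coef / (thresh - s) + gps_term

# Usage sketch: initialize at the SLAM optimum x_opt, with thresh slightly above S(x_opt),
# so the GPS term can only move the solution as far as the barrier allows.
# result = minimize(barrier_objective, x_opt.ravel(),
#                   args=(slam_error, 1.05 * slam_error(x_opt.ravel()), gps_positions),
#                   method="Nelder-Mead")
```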
This, unlike the EKF, does not have a nice probabilistic interpretation, but it produces very nice results in practice (I have used it for fusion with sources other than GPS too, and it works well). You can for example see this excellent paper, which explains thoroughly how to implement this, how to set the threshold, etc.
Hope this helps. Please don't hesitate to tell me if you find inaccuracies/errors in my answer.

Related

Smoothed Particle Hydrodynamics - Particle Density Estimation Issue

I'm currently writing an SPH solver using CUDA: https://github.com/Mathiasb17/sph_opengl.
I get pretty good results and performance, but the simulations still seem somewhat off to me for some reason:
https://www.youtube.com/watch?v=_DdHN8qApns
https://www.youtube.com/watch?v=Afgn0iWeDoc
In some implementations, I saw that a particle does not contribute to its own internal forces (which would be 0 anyway due to the formulas), but it does contribute to its own density.
My simulations work "pretty fine" (I don't like "pretty fine", I want it perfect), and in my implementation a particle does not contribute to its own density.
Besides, when I change the code so that it does contribute to its own density, the resulting simulation becomes way too unstable (particles explode).
I asked a lecturer in physics-based animation about this; he told me a particle should not contribute to its own density, but did not give me specific details about this assertion.
Any idea how it should be?
As long as you calculate the density with the summation formula instead of the continuity equation, yes, you need to include the self-contribution.
Here is why:
SPH is an interpolation scheme, which allows you to interpolate a specific value at any position in space over a particle cloud. "Any position" means you are not restricted to evaluating it at a particle; you can evaluate it anywhere in space. If you do so, you obviously need to consider all particles within the influence radius. From this point of view, it is easy to see that a particle's own position is not a special case: the particle itself lies within its own influence radius, so its contribution must be included.
For other quantities like forces, where the derivative of some quantity is approximated, you don't need to apply self-contribution (that would lead to the evaluation of 0/0).
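For illustration, a minimal density summation with self-contribution, using the standard poly6 kernel (the function names and parameters are just assumptions for this sketch, not your CUDA code):

```python
import numpy as np

def poly6(r2, h):
    """Poly6 smoothing kernel, normalised in 3D: W(r, h) = 315/(64*pi*h^9) * (h^2 - r^2)^3."""
    coef = 315.0 / (64.0 * np.pi * h**9)
    return np.where(r2 < h * h, coef * (h * h - r2) ** 3, 0.0)

def densities(positions, mass, h):
    """Summation density: rho_i = sum_j m_j * W(|x_i - x_j|, h), including j == i."""
    diff = positions[:, None, :] - positions[None, :, :]   # pairwise difference vectors
    r2 = np.sum(diff * diff, axis=-1)
    # The diagonal (r2 == 0) is deliberately kept: that is the self-contribution m * W(0, h).
    return mass * np.sum(poly6(r2, h), axis=1)
```

Note that if enabling the self-contribution makes an existing simulation explode, the rest density and stiffness were likely tuned to compensate for the missing term, so those constants probably need re-tuning rather than the self-contribution being wrong.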
To discover the source of the instability:
check if the kernel is normalised
are the stiffness of the liquid and the time step size compatible (for the weakly compressible case)?

How to overcome tracking jitter

I'm working on an object tracking project.
Steps:
1. Preprocess the image and obtain some candidate regions of interest.
2. For each region, test whether it is the target using ORB/BF matching.
3. After the target region is determined, acquire the coordinates of some points on the target and their corresponding coordinates in the world coordinate system.
4. Use solvePnP (in OpenCV) to get the rotation vector and translation vector, as sketched below.
5. The translation vector is used in VR for localization and view control.
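For reference, a minimal sketch of steps 3-4 with OpenCV's Python bindings (the point arrays and camera intrinsics are hypothetical placeholders):

```python
import numpy as np
import cv2

# Hypothetical data: 3D points on the target (world frame, coplanar) and their 2D image projections.
object_points = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                          [0.1, 0.1, 0.0], [0.0, 0.1, 0.0]], dtype=np.float64)
image_points = np.array([[320.0, 240.0], [400.0, 242.0],
                         [398.0, 170.0], [322.0, 168.0]], dtype=np.float64)

camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])     # assumed intrinsics from calibration
dist_coeffs = np.zeros(5)                       # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
# tvec is the translation vector used for localization; rvec is the rotation in Rodrigues form.
```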
Tracking jitter means that although the object is stationary, tracking errors such as noise make the measured position of the target change slightly. Then, looking at steps 4 and 5: because of that change, the translation vector varies slightly, and with the head-mounted device I feel the jitter all the time.
It seems to me that tracking jitter is unavoidable because of changes in the environment or noise. But a one-pixel change can lead to a change of a few centimeters in the z value of the translation vector. So is there a proper way to deal with it?
I have googled, but there didn't seem to be much information. Effects of Tracking Technology, Latency, and Spatial Jitter on Object Movement mentions the phenomenon but does not provide a solution. Another interesting paper is Motion Tracking Requirements and Technologies. So can anyone offer some useful information?
It occurs to me that a filter is needed to do some post-processing on the tracking data, but the idea is not fully formed yet. A Kalman filter can be used for tracking and to attenuate noise. I don't know whether it can compensate for this kind of jitter (I mean, very small fluctuations in values) very well, and investigating how to incorporate a Kalman filter into this project is another topic and needs extra time.
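Since the question itself suggests a Kalman filter, here is a minimal constant-position Kalman filter over the tvec from solvePnP, purely as a sketch; the noise covariances q and r are assumptions you would tune (larger r smooths more but lags more):

```python
import numpy as np

class TvecKalman:
    """Constant-position Kalman filter for a 3D translation vector."""
    def __init__(self, q=1e-5, r=1e-2):
        self.x = None               # state estimate (3,)
        self.p = np.eye(3)          # estimate covariance
        self.q = q * np.eye(3)      # process noise: how much real motion we expect per frame
        self.r = r * np.eye(3)      # measurement noise: how jittery solvePnP's tvec is

    def update(self, tvec):
        z = np.asarray(tvec).reshape(3)
        if self.x is None:
            self.x = z                                  # initialize from the first measurement
            return self.x
        self.p = self.p + self.q                        # predict (identity dynamics)
        k = self.p @ np.linalg.inv(self.p + self.r)     # Kalman gain
        self.x = self.x + k @ (z - self.x)              # correct with the new measurement
        self.p = (np.eye(3) - k) @ self.p
        return self.x

# Usage: feed every raw tvec through the filter and drive the HMD view from the smoothed value.
# kf = TvecKalman(); smoothed = kf.update(tvec)
```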

GPS reported accuracy, error function

Most GPS systems report "accuracy" in units of meters, with the figure varying over orders of magnitude. What does this figure mean? How can it be translated to an error function for estimation, i.e. the probability of an actual position given the GPS reading and its reported accuracy?
According to the Wikipedia article on GPS accuracy, a reading down to 3 meters can be achieved by precisely timing the radio signals arriving at the receiver. This seems to correspond with the tightest error margin reported by e.g. an iPhone. But that wouldn't account for external signal distortion.
It sounds like an error function should have two domains, with a gentle linear slope out to the reported accuracy and then a polynomial or exponential increase further out.
Is there a better approach than to tinker with it? Do different GPS chipset vendors conform to any kind of standard meaning, or do they all provide only some kind of number for the sake of feature parity?
The number reported is usually called HEPE, Horizontal Estimated Position Error. In theory, 67% of the time the measurement should be within HEPE of the true position, and 33% of the time the measurement should be in horizontal error by more than the HEPE.
In practice, no one checks HEPEs very carefully, and in my experience, HEPEs reported for 3- or 4-satellite fixes are much larger than they need to be. That is, in my experience 3-satellite fixes are accurate to within a HEPE distance much more than 67% of the time.
The "assumed" error distribution is a circular Gaussian. So in principle you could find the right ratios for a circular Gaussian and derive the 95% probability radius, and so on. But for these numbers to be meaningful, you would need to do extensive statistical testing to verify that you are indeed getting around 95%.
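For a circular Gaussian the radial error follows a Rayleigh distribution, so the conversion is a closed-form scaling. A small sketch, treating the 67%-within-HEPE definition above as exact:

```python
import math

def hepe_to_radius(hepe, p, p_hepe=0.67):
    """Scale a HEPE (the p_hepe-probability radius) to the radius containing probability p.

    For a circular Gaussian, P(r <= R) = 1 - exp(-R^2 / (2 sigma^2)) (Rayleigh CDF),
    so R_p = sigma * sqrt(-2 ln(1 - p)).
    """
    sigma = hepe / math.sqrt(-2.0 * math.log(1.0 - p_hepe))
    return sigma * math.sqrt(-2.0 * math.log(1.0 - p))

# Example: a 10 m HEPE corresponds to roughly a 16.4 m radius at 95% probability.
print(hepe_to_radius(10.0, 0.95))   # ~16.4
```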
The above are my impressions from working in the less accuracy-sensitive parts of GPS over the years. Conceivably, people who work on using GPS for aircraft landing may have a better sense of how to predict errors and error rates, but the techniques and methods they use are likely not available in consumer GPS devices.

FFT Pitch Detection for iOS using Accelerate Framework?

I have been reading up on FFT and Pitch Detection for a while now, but I'm having trouble piecing it all together.
I have worked out that the Accelerate framework is probably the best way to go for this, and I have read the example code from Apple to see how to use it for FFTs. What is the input data for the FFT if I want to run the pitch detection in real time? Do I just pass in the audio stream from the microphone? How would I do this?
Also, after I get the FFT output, how can I get the frequency from that? I have been reading everywhere and can't find any examples or explanations of this.
Thanks for any help.
Frequency and pitch are not the same thing - frequency is a physical quantity, pitch is a psychological percept - they are similar, but there are important differences, which may or may not matter to you, depending on the type of instrument for which you are trying to measure pitch.
You need to read up a little on the various pitch detection algorithms (and on the meaning of pitch itself), decide what algorithm you want to use and only then set about implementing it. See this Wikipedia page for a good overview of pitch and pitch detection (note that you can use FFT for the autocorrelation-based and frequency domain methods).
As for using the FFT to identify peaks in a spectrum and their associated frequencies, there are many questions and answers related to this on SO already, see for example: How do I obtain the frequencies of each value in an FFT?
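As a quick illustration of the bin-to-frequency mapping (frequency = bin_index * sample_rate / fft_size), here is a minimal NumPy sketch rather than Accelerate code; the same arithmetic applies to vDSP's output:

```python
import numpy as np

def peak_frequency(samples, sample_rate):
    """Return the frequency (Hz) of the strongest FFT bin in a mono audio buffer."""
    windowed = samples * np.hanning(len(samples))    # window to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    peak_bin = int(np.argmax(spectrum))
    return peak_bin * sample_rate / len(samples)     # bin index -> Hz

# Example: a 440 Hz sine sampled at 44.1 kHz in a 4096-sample buffer.
t = np.arange(4096) / 44100.0
print(peak_frequency(np.sin(2 * np.pi * 440.0 * t), 44100.0))  # ~441 Hz (bin width ~10.8 Hz)
```

Note that naive peak-picking returns the strongest partial, which is not always the perceived pitch, which is exactly why the pitch detection algorithms mentioned above matter.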
I have an example implementation of an autocorrelation function available online for iOS 5.1. Look at this post for a link to the implementation AND functions for finding the nearest note and creating a string representing the pitch (A, A#, B, B#, etc...).
While the FFT is very useful in many applications, it might not be the most accurate if you're trying to do simple pitch detection. (It can be as accurate, but you have to deal with complex numbers to do a lot of phase calculations)

Modeling human running on a soccer field

In a soccer game, I am computing a steering force using steering behaviors. This part is ok.
However, I am looking for the best way to implement simple 2d human locomotion.
For instance, a player should not "steer" (or simply add the acceleration computed from the steering force) to its current velocity when the cos(angle) between the steering force and the current velocity or heading vector is lower than 0.5, because then it looks as if the player is a vehicle. A human, when there is an important change of direction, slows down, and only when it has slowed enough does it start accelerating in the new direction.
Does anyone have any advice, ideas on how to achieve this behavior? Thanks in advance.
Make it change direction very quickly but without perfect friction, e.g. Super Mario.
Edit: but the feet should not slide - use procedural animation for the feet.
This has already been researched and developed in an initiative called "RoboCup". They have a 2D simulation league that should be really similar to what you are trying to accomplish.
Here's a link that should point you to the right direction:
http://wiki.robocup.org/wiki/Main_Page
Maybe you could compute the curvature. If the curvature value is too big, slow the speed down.
http://en.wikipedia.org/wiki/Curvature
At low speed a human can turn on a dime. At high speed only very slight turns require no slowing. The speed and radius of the turn are thus strongly correlated.
How much a human slows down when aiming toward a target is actually a judgment call, not an automatic computation. One human might come to almost a complete stop, turn sharply, and run directly toward the target. Another human might slow only a little and make a wide curving arc—even if this increases the total length to the target. The only caveat is that if the desired target is inside the radius of the curve at the current speed, the only reasonable path is to slow since it would take a wide loop far from the target in order to reach it (rather than circling it endlessly).
Here's how I would go about doing it. I apologize for the Imperial units if you prefer metric.
The fastest human ever recorded traveled just under 28 mph. Each of your human units should be given a personal top speed between 1 and 28 mph.
Create a 29-element table of the maximum acceleration and deceleration rates of a human traveling at each whole mph in a straight line. It doesn't have to be exact - just approximate accel and decel values for each speed. Create fast, medium, and slow versions of the 29-element table and assign each human to one of them. The table chosen may be mapped to the unit's top speed, so a unit with a max of 10 mph would be a slow accelerator.
Create a 29-element table of the sharpest radius a human can turn at that mph (0-28).
Now, when animating each human unit, if you have target information and must choose an acceleration from that, the task is harder. If instead you just have a force vector, it is easier. Let's start with the force vector.
If the force vector's net acceleration and resultant angle would exceed the limit of the unit's ability, restrict the unit's new vector to the maximum angle allowed, and also decelerate the unit at its maximum rate for its current linear speed.
During the next clock tick, being slower, it will be able to turn more sharply.
If the force vector can be entirely accommodated, but the unit is traveling slower than its maximum speed for that curvature, apply the maximum acceleration the unit has at that speed.
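A hedged sketch of that force-vector branch, with made-up table values (MAX_TURN_DEG and MAX_ACCEL indexed by whole mph, as described above; the actual numbers are placeholders to tune):

```python
import math

# Hypothetical per-mph tables (index = current speed in whole mph, 0..28).
MAX_TURN_DEG = [180 - 6 * s for s in range(29)]    # sharp turns at low speed, slight at high
MAX_ACCEL = [3.0 - 0.08 * s for s in range(29)]    # mph per tick; decel assumed symmetric

def steer(speed, heading_deg, desired_heading_deg, desired_accel):
    """Clamp a steering request to the unit's ability at its current speed."""
    idx = min(int(speed), 28)
    turn = (desired_heading_deg - heading_deg + 180.0) % 360.0 - 180.0  # signed turn angle
    if abs(turn) > MAX_TURN_DEG[idx]:
        # Turn too sharp for this speed: clamp the angle and brake at the max rate,
        # so the unit can turn more sharply on the next clock tick.
        heading_deg += math.copysign(MAX_TURN_DEG[idx], turn)
        speed = max(0.0, speed - MAX_ACCEL[idx])
    else:
        # Turn is within ability: take it fully and accelerate up to the table limit.
        heading_deg += turn
        speed = min(speed + min(desired_accel, MAX_ACCEL[idx]), 28.0)
    return speed, heading_deg
```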
I know the details are going to be quite difficult, but I think this is a good start.
For the pathing version, where you have a target and need to choose a force to apply, the problem is a bit different, and even harder. I'm out of ideas for now, but suffice it to say that, given the example condition of the human already running away from the target at top speed, there will be a best-time path somewhere between, on the one hand, slowing just enough while turning to complete a perfect arc to the target, and on the other hand, stopping completely, turning in place, and running straight to the target.