I wrote a simple NMEA parser and I'm reading latitude and longitude from a GPS. The values read by my parser are very scattered compared to a commercial GPS library I'm using; the values read from that library are very smooth.
What algorithms should I use to correct the position read from the GPS?
One option would be a Kalman filter.
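For context, a minimal constant-velocity Kalman filter over 2-D positions might look like the sketch below (Python/NumPy; the noise values q and r are made-up tuning parameters, and positions are assumed to be in a local metric frame rather than raw lat/lon):

```python
import numpy as np

def make_cv_kalman(dt, q=0.5, r=5.0):
    """Constant-velocity Kalman filter; state = [x, y, vx, vy], measurement = [x, y]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    Q = q * np.eye(4)              # process noise (tuning parameter, assumed)
    R = r * np.eye(2)              # measurement noise (tuning parameter, assumed)
    x = None
    P = np.eye(4) * 100.0
    def step(z):
        nonlocal x, P
        z = np.asarray(z, dtype=float)
        if x is None:              # initialise the state from the first fix
            x = np.array([z[0], z[1], 0.0, 0.0])
            return x[:2]
        x = F @ x                  # predict
        P = F @ P @ F.T + Q
        y = z - H @ x              # update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
        return x[:2]               # smoothed position estimate
    return step

# step = make_cv_kalman(dt=1.0)
# smoothed = [step(p) for p in positions]   # positions: list of (x, y) in metres
```

In practice you would convert latitude/longitude to metres in a local tangent plane before filtering, and tune q and r against your receiver's actual noise.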
GPS positions read from a receiver today are only really scattered if the receiver is standing still or moving very slowly.
A simple and very effective approach is to ignore all positions whose speed attribute is below 5 km/h.
This works well for vehicles.
For pedestrians it gets more complex.
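A sketch of that speed cut-off (assuming each fix already carries a speed value, e.g. taken from the NMEA RMC sentence and converted from knots to km/h; the field names are made up for illustration):

```python
def filter_standing_still(fixes, min_speed_kmh=5.0):
    """Repeat the last accepted position while the reported speed is below the threshold."""
    last = None
    for fix in fixes:          # fix: dict with 'lat', 'lon', 'speed_kmh' (assumed fields)
        if last is None or fix['speed_kmh'] >= min_speed_kmh:
            last = (fix['lat'], fix['lon'])
        yield last             # below the threshold: keep reporting the last good position
```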
Related
I have a set of 12 IMUs mounted on the end effector of my robot arm, which I read using a microcontroller to determine its movement. With my controller I can read two sensors simultaneously using direct memory access. After acquiring the measurements I would like to fuse them to make up for the sensor error and generate a more reliable reading than having only one sensor.
After some research my understanding is that I can use a Kalman filter to reach my desired outcome, but I still have the problem that all the sensor values have different time stamps, since I can read only two at a time; even if both time stamps were synchronized perfectly, the next pair will have a different time stamp, if only in the µs range.
Now I know controls engineering principles but am completely new to the topic of sensor fusion, and Google presents me with too many results to find a solution in a reasonable amount of time.
Therefore my question: can anybody point me in the right direction by naming a keyword I should look for, or literature I should work through, to better understand the topic, please?
Thank you!
The topic you are dealing with is not an easy one. Have a look at multi-rate Kalman filters.
The idea is that you design a different Kalman filter for each combination of sensors that can be available at the same time, and use the corresponding filter when you have data from those sensors, while the system state is passed between the various filters.
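A rough sketch of how the shared state can be passed between the per-sensor updates (Python/NumPy; measurements are assumed to arrive time-sorted as (timestamp, sensor_id, z) tuples, and transition_for_dt is a hypothetical helper that builds F and Q for a variable time step):

```python
import numpy as np

def predict(x, P, F, Q):
    # Propagate the shared state to the timestamp of the incoming measurement.
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    # Standard Kalman update, but H and R belong to the sensor (pair) that just reported.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Sketch of the fusion loop (names are illustrative, not from a specific library):
# for t, sensor_id, z in measurements:
#     F, Q = transition_for_dt(t - t_prev)          # hypothetical helper for variable dt
#     x, P = predict(x, P, F, Q)
#     x, P = update(x, P, z, H[sensor_id], R[sensor_id])
#     t_prev = t
```

The key point is that the prediction step absorbs the varying time gaps between readings, so the µs-level differences in time stamps are handled naturally.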
Recently I have been working on plotting the latitude and longitude of GPS data on Google Maps. The latitude and longitude are in NMEA format, and I have converted them into a compatible format to display on Google Maps. I am able to plot the data on Google Maps successfully. My data is supposed to lie on a straight line, but it goes zigzag like mountains. Is this a problem of the GPS data not having accurate latitude and longitude, or a problem in converting the NMEA-format data? How can I fix these kinds of errors if the GPS data is not 100% accurate?
Thank you.
Depending on the quality of your GPS receiver (recreational, professional, survey, military, etc.), the accuracy of your GPS solution can differ. Some receivers can track more GPS signals at different frequencies, can have access to DGPS corrections, and so on.
For recreational GPS receivers the error level can be on the order of 10 meters. So my suggestion is: if you know that your solution is a straight line, you can do a least-squares parameter estimation for a linear function. That approach tends to smooth out the error, and you can get a better solution.
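A minimal sketch of that least-squares fit with NumPy (assuming the points are already projected into a local x/y frame in metres; fitting latitude directly against longitude only works well when the track is not near-vertical):

```python
import numpy as np

def fit_line(xs, ys):
    # Least-squares fit of y = a*x + b to the noisy track points.
    a, b = np.polyfit(xs, ys, 1)
    return a, b

def project_onto_line(xs, ys, a, b):
    # Snap each noisy point to its nearest point on the fitted line.
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    x_proj = (xs + a * (ys - b)) / (1.0 + a * a)
    y_proj = a * x_proj + b
    return x_proj, y_proj
```

Plotting the projected points instead of the raw fixes is what removes the zigzag, at the cost of assuming the path really is straight.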
How can I track fast hand movement using kinect?
I've tried both OpenNI and the Microsoft SDK to track the hand. With both of them there is a lot of jitter and inaccurate joint movement.
Here is an example video of kinect fruit ninja: Example Video
In that video there is no jitter or inaccuracy, and it also tracks fast hand movements.
What am I missing? Are there any kinds of Kinect hardware versions or types I should look into?
My best guess is that Fruit Ninja applies some sort of smoothing at some point. What you're seeing in that video is almost certainly not the raw data they're getting from the Kinect. The data from the Kinect will always have some jitter; real-world sensor data almost always does. You'll need to smooth it. Exactly how depends on the application: it could be something simple like modelling a kind of damping and/or inertia on the point that's being moved by the hand (which is what I suspect Fruit Ninja is doing), or you could look at something like a Kalman filter for a more robust (but more computationally intensive) way to reduce the noise in your sensor readings.
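For the damping/inertia idea, even a simple exponential moving average over the raw joint positions helps a lot (a sketch; the alpha value is an arbitrary tuning constant, with smaller values giving more smoothing but more lag):

```python
class JointSmoother:
    """Exponential smoothing of a tracked joint position (x, y, z)."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha      # 0 < alpha <= 1; smaller = smoother but laggier
        self.state = None

    def update(self, raw):      # raw: (x, y, z) from the skeleton frame
        if self.state is None:
            self.state = raw
        else:
            self.state = tuple(self.alpha * r + (1.0 - self.alpha) * s
                               for r, s in zip(raw, self.state))
        return self.state
```

You would feed each new hand-joint position through update() every frame and render the returned value instead of the raw reading.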
I have been reading up on FFT and Pitch Detection for a while now, but I'm having trouble piecing it all together.
I have worked out that the Accelerate framework is probably the best way to go with this, and I have read the example code from apple to see how to use it for FFTs. What is the input data for the FFT if I wanted to be running the pitch detection in real time? Do I just pass in the audio stream from the microphone? How would I do this?
Also, after I get the FFT output, how can I get the frequency from that? I have been reading everywhere and can't find any examples or explanations of this.
Thanks for any help.
Frequency and pitch are not the same thing - frequency is a physical quantity, pitch is a psychological percept - they are similar, but there are important differences, which may or may not matter to you, depending on the type of instrument for which you are trying to measure pitch.
You need to read up a little on the various pitch detection algorithms (and on the meaning of pitch itself), decide what algorithm you want to use and only then set about implementing it. See this Wikipedia page for a good overview of pitch and pitch detection (note that you can use FFT for the autocorrelation-based and frequency domain methods).
As for using the FFT to identify peaks in a spectrum and their associated frequencies, there are many questions and answers related to this on SO already, see for example: How do I obtain the frequencies of each value in an FFT?
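For a concrete picture of the bin-to-frequency step: bin k of an N-sample FFT at sample rate fs corresponds to k * fs / N Hz, so the crudest estimate is the frequency of the strongest bin (a NumPy sketch; real pitch detectors do considerably more than this):

```python
import numpy as np

def dominant_frequency(samples, sample_rate):
    """Return the frequency (Hz) of the strongest FFT bin in one block of audio."""
    windowed = samples * np.hanning(len(samples))   # window to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    peak_bin = np.argmax(spectrum[1:]) + 1          # skip the DC bin
    return peak_bin * sample_rate / len(samples)
```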
I have an example implementation of an autocorrelation function available online for iOS 5.1. Look at this post for a link to the implementation and functions on how to find the nearest note and how to create a string representing the pitch (A, A#, B, B#, etc...).
While the FFT is very useful in many applications, it might not be the most accurate if you're trying to do simple pitch detection. (It can be as accurate, but you have to deal with complex numbers to do a lot of phase calculations)
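For comparison, the core of an autocorrelation-based estimator can be sketched like this (assumptions: mono float samples and a pitch range of roughly 50-1000 Hz; the linked iOS implementation will differ in detail):

```python
import numpy as np

def autocorrelation_pitch(samples, sample_rate, fmin=50.0, fmax=1000.0):
    """Estimate pitch by finding the lag with the strongest autocorrelation peak."""
    samples = samples - np.mean(samples)
    corr = np.correlate(samples, samples, mode='full')[len(samples) - 1:]
    lag_min = int(sample_rate / fmax)          # shortest lag worth considering
    lag_max = int(sample_rate / fmin)          # longest lag worth considering
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / lag
```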
I want to get the pitch of a song at any point. I plan on storing the pitches later. How can I read say... an mp3 file or wav file to get the pitch played at a certain point?
Here is a visual example:
Say I wanted to get the pitch that is here at ^this point of the song.
Thanks if you can!
The matter is a tad more complicated than you may be anticipating.
While time-domain approaches exist (that is, approaches which work with the PCM data directly), frequency-domain pitch detection is going to be more accurate. You can read a very simplified overview here.
What you probably want is a Fourier Transform, which can be used to transform blocks of your signal from time-domain to frequency-domain (that is, a distribution of frequency content over a given span of the signal). From there, you would need to analyze the frequency spectrum within that block. The problem becomes even harder still, because there is no best way to deduce pitch from a sampled frequency spectrum in the general case. The aforementioned Wikipedia article should give you a foundation for looking into those algorithms.
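As a hedged illustration of the block-wise idea (it only takes the strongest FFT bin in one block and maps it to the nearest equal-tempered note, which breaks down for polyphonic music and strong overtones; WAV input is assumed because MP3 needs decoding first):

```python
import numpy as np
from scipy.io import wavfile   # WAV only; decode MP3 to WAV/PCM first

NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def freq_to_note(freq):
    # Map a frequency in Hz to the nearest equal-tempered note name (A4 = 440 Hz).
    midi = int(round(69 + 12 * np.log2(freq / 440.0)))
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

def pitch_at(path, seconds, block_size=4096):
    """Estimate the dominant pitch in one block starting at the given time offset."""
    rate, data = wavfile.read(path)
    if data.ndim > 1:
        data = data.mean(axis=1)                    # mix down to mono
    start = int(seconds * rate)
    block = data[start:start + block_size].astype(float)
    block = block * np.hanning(len(block))          # window to reduce leakage
    spectrum = np.abs(np.fft.rfft(block))
    peak = np.argmax(spectrum[1:]) + 1              # skip the DC bin
    return freq_to_note(peak * rate / len(block))
```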
Finally, it's worth noting that this is really a language-agnostic question, unless your primary interest is in reading a WAV file specifically using VB.NET.