How to implement a lowpass filter to reduce noise in gyroscope values? - LabVIEW

I am new to LabVIEW and I need help.
I am using a myRIO with a gyroscope, and when I display the gyroscope values I get noise.
My question is: how can I implement a lowpass filter to reduce the noise in the X, Y, and Z rates of the gyroscope?
I searched a lot, but I could not work out how to determine the sampling frequency or the low and high cutoff frequencies.
Thank you so much.

If your data is noisy, you should try to fix the problem before you digitize it. If a physical low-pass filter will do the trick, install one. The better the signal before the DAQ, the better the data will be once it's digitized.
Some other signal conditioning considerations: keep the wire from the gyroscope to the DAQ as short as possible; eliminate any sources of noise from the environment if you can (like any large rotating magnets -- seriously, I once helped someone who was complaining about noise while using an unshielded wire next to an MRI machine); and if you're going to add any signal conditioning, amplify close to your sensor.
If you would still like to filter in software, there's an example included with LabVIEW that demonstrates both the point-by-point VIs and the array-based VIs. It's called PtByPt and Array Based Filter.vi and can be found in the Example Finder under Analysis, Signal Processing and Mathematics >> Filtering and Conditioning.

Please install this free toolkit from ni.com: http://sine.ni.com/nips/cds/view/p/lang/en/nid/212733
It includes examples and good ready-to-use applications showing how to use the myRIO gyroscope and how to do proper DSP.

Sampling frequency is how fast you sample; look for this value in the ADC settings. For the low and high cutoffs, experiment with the values. Doing an FFT on your signal can help you determine its spectral density and decide where to cut.
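To make that concrete, here is a minimal software sketch in Python/SciPy (not LabVIEW, but the equivalent blocks live on the Filters and Spectral Analysis palettes). The sampling rate and cutoff below are placeholder values you'd replace with your own; the FFT helper shows where to look before choosing the cutoff.

    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 100.0  # sampling frequency in Hz -- placeholder; read the real value
                # from your myRIO/ADC configuration
    fc = 5.0    # cutoff in Hz -- placeholder; pick it after inspecting the FFT

    # 2nd-order Butterworth lowpass, cutoff normalized to the Nyquist frequency.
    b, a = butter(2, fc / (fs / 2), btype='low')

    def smooth(axis_samples):
        # Zero-phase filtering of one gyro axis (X, Y or Z).
        return filtfilt(b, a, axis_samples)

    def spectrum(axis_samples):
        # Real motion shows up as low-frequency peaks, broadband noise as a
        # flat floor -- put the cutoff between the two.
        freqs = np.fft.rfftfreq(len(axis_samples), d=1.0 / fs)
        return freqs, np.abs(np.fft.rfft(axis_samples))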

Related

Correcting SLAM drift error using GPS measurements

I'm trying to figure out how to correct drift errors introduced by a SLAM method using GPS measurements. I have two point sets in Euclidean 3D space taken at fixed moments in time:
The red dataset comes from GPS and contains no drift errors, while the blue dataset is based on the SLAM algorithm and drifts over time.
The idea is that SLAM is accurate over short distances but eventually drifts, while GPS is accurate over long distances and inaccurate over short ones. So I would like to figure out how to fuse the SLAM data with GPS in a way that takes the best accuracy from both measurements. At least, how should I approach this problem?
Since your GPS looks like it is very locally biased, I'm assuming it is low-cost and doesn't use any correction techniques, e.g. that it is not differential. As you are probably aware, GPS errors are not Gaussian. The authors of this paper show that a good way to model GPS noise is as v + eps, where v is a locally constant "bias" vector (it is usually constant for a few meters, and then changes more or less smoothly or abruptly) and eps is Gaussian noise.
Given this information, one option would be to use Kalman-based fusion: you add the GPS noise and bias to the state vector, define your transition equations appropriately, and proceed as you would with an ordinary EKF. Note that if we ignore the prediction step of the Kalman filter, this is roughly equivalent to minimizing an error function of the form
measurement_constraints + some_weight * GPS_constraints
and that gives you a more straightforward second option. For example, if your SLAM is visual, you can just use the sum of squared reprojection errors (i.e. the bundle adjustment error) as the measurement constraints, and define your GPS constraints as ||x - x_{gps}||, where the x are 2D or 3D GPS positions (you might want to ignore the altitude with low-cost GPS).
If your SLAM is visual and feature-point based (you didn't really say what type of SLAM you are using, so I assume the most widespread type), then fusion with any of the methods above can lead to "inlier loss": you make a sudden, violent correction that increases the reprojection errors, which means you lose inliers in SLAM's tracking. So you have to re-triangulate points, and so on. Also note that even though the paper I linked to above presents a model of the GPS errors, it is not a very accurate model, and assuming that the distribution of GPS errors is unimodal (necessary for the EKF) seems a bit adventurous to me.
So I think a good option is to use barrier-term optimization. Basically, the idea is this: since you don't really know how to model GPS errors, assume that you have more confidence in SLAM locally, and minimize a function S(x) that captures the quality of your SLAM reconstruction. Denote by x_opt the minimizer of S. Then fuse with the GPS data only as long as doing so does not deteriorate S(x_opt) beyond a given threshold. Mathematically, you'd want to minimize
some_coef / (thresh - S(x)) + ||x - x_{gps}||
and you'd initialize the minimization with x_opt. A good choice for S is the bundle adjustment error, since by not degrading it, you prevent inlier loss. There are other choices of S in the literature, but they are usually meant to reduce computational time and add little in terms of accuracy.
This, unlike the EKF, does not have a nice probabilistic interpretation, but it produces very nice results in practice (I have used it for fusion with things other than GPS too, and it works well). You can, for example, see this excellent paper, which explains how to implement this thoroughly, how to set the threshold, etc.
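To sketch the barrier idea in code (Python for illustration; S, thresh, and the flat parameter vector x are placeholders standing in for your actual bundle adjustment machinery, not anything from the paper):

    import numpy as np
    from scipy.optimize import minimize

    def fuse_with_gps(S, x_opt, x_gps, thresh, coef=1.0):
        # S      : callable returning the SLAM quality term, e.g. the bundle
        #          adjustment error, for a parameter vector x (stub here)
        # x_opt  : minimizer of S, used as the starting point
        # x_gps  : the GPS constraint expressed in the same parameterization
        # thresh : tolerated degradation of S; requires S(x_opt) < thresh
        def objective(x):
            s = S(x)
            if s >= thresh:
                return np.inf  # outside the barrier: never accepted
            return coef / (thresh - s) + np.linalg.norm(x - x_gps)

        return minimize(objective, x_opt, method='Nelder-Mead').x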
Hope this helps. Please don't hesitate to tell me if you find inaccuracies/errors in my answer.

Signal chain for tone detection?

I'm trying to make an embedded thingy that detects the presence of a 19kHz tone from an electret microphone. I have a multistage bandpass filter/preamp hooked into the ADC of a microcontroller, and am trying to figure out the best way to digitally condition the signal in order to detect the presence of the tone.
I have implemented a Goertzel filter to look for the frequency of interest. My ADC takes 400 samples at a frequency of 4000 kHz; the micro then processes the block and adds the result to a 100-point moving average. Looking at the terminal output after each block, I can definitely see an overall jump in the numbers when the transmitter is turned on. However, there's a lot of noise in the power readings when the thing is on, and the noise floor in the room I'm in keeps changing, too. I am not sure how to tune the thresholding level or how to filter out all of this noise.
I've tried a few things, but they all seem to be pretty noisy, as the baseline of my signal drifts all over the place:
Preprocessing the block with Hamming/Blackman windows
Ratio of total received block power to band power in the filter output
Ratio of power in the band of interest (19kHz) to a band outside of, but near, the band of interest (18.5kHz)
EDIT: I've done some more reading since posting this. Is calculating (2*Ew)/(N*Et), where Ew is the output from my filter and Et is the sum of the squares in my block, the best way to do this test?
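For reference, here is what that test computes, as a Python sketch (the real firmware is presumably C; names are illustrative):

    import math

    def goertzel_power(samples, fs, f_target):
        # Squared magnitude of the DFT bin nearest f_target (textbook Goertzel).
        n = len(samples)
        k = round(n * f_target / fs)
        coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
        s_prev, s_prev2 = 0.0, 0.0
        for x in samples:
            s_prev2, s_prev = s_prev, x + coeff * s_prev - s_prev2
        return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

    def tone_ratio(samples, fs, f_target=19000.0):
        # The (2*Ew)/(N*Et) test: by Parseval's theorem this is the fraction
        # of the block's total energy that sits in the target bin.
        ew = goertzel_power(samples, fs, f_target)
        et = sum(x * x for x in samples)
        return (2.0 * ew) / (len(samples) * et) if et > 0 else 0.0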
Any advice on how to deal with this and/or do a better method of signal extraction?
Thanks!

Relation between FFT and IQ data

I have been studying how the USRP works and, in particular, how to sense the energy of a signal. So far, I have understood that the USRP senses IQ data and then processes it by applying an FFT. (I have been looking at usrp_spectrum_sense.py.)
What are the units of the IQ samples? What are the units after the FFT is done? And are the IQ samples the only data needed to compute the FFT?
Thanks in advance :D
Unfortunately, there are no units for the IQ samples; each is simply a magnitude (for the UHD driver, between -1 and 1). The specifications of many of the USRP daughterboards do not specify receiver sensitivity. You cannot use the IQ samples alone to figure out incoming power; you will need to do your own calibration, such as feeding a signal of known power from a signal generator into the USRP input and recording what you see in GNU Radio. You would then need to scale the data coming from your FFT with a correction derived from this calibration data.
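As a concept sketch in Python/NumPy (the calibration offset is something you measure for your own setup, not a property of the hardware):

    import numpy as np

    def calibrated_power_db(iq_block, cal_offset_db):
        # FFT of one block of complex IQ samples; the magnitudes are unitless.
        spectrum = np.fft.fftshift(np.fft.fft(iq_block))
        power_db = 20.0 * np.log10(np.abs(spectrum) + 1e-12)  # relative dB only
        # cal_offset_db comes from your own measurement: feed a tone of known
        # power into the USRP, note the raw dB reading, take the difference.
        # It changes with gain, frequency and daughterboard, so re-measure.
        return power_db + cal_offset_db  # now in absolute units, e.g. dBm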

Transform discrete data to continuous

I'm writing a program that has, as one facet, a wave filtration/resolution routine. The more data I collect, the bigger the files stored on the device get. I'm collecting data at discrete time steps, and in the interest of accuracy I'm doing this pretty frequently. However, I noticed that the overall waveform tends to be wide enough that I could collect data at about half the current rate and still draw an accurate-enough-for-my-purposes waveform over the data.
So the question: is there a way to create a continuous mathematical description of the curve from this data? I haven't been able to find anything. My data is floats inside NSNumbers contained by an NSArray.
The two things I would like to be able to do are find intersection points with a threshold and find local maxima. The ability to do either one of these would be sufficient.
-EDIT-
If anyone knows a good Objective-C FFT method for 1-dimensional real arrays, I would love to hear it.
Apple includes an FFT in the Accelerate framework.
Using Fourier Transforms
Example: FFT Sample
Also: Using the Apple FFT and Accelerate Framework
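Beyond the FFT pointers, the "continuous mathematical description" part of the question maps naturally onto spline interpolation. A concept sketch in Python/SciPy (the question's code is Objective-C, so this is purely illustrative):

    import numpy as np
    from scipy.interpolate import CubicSpline

    t = np.linspace(0.0, 1.0, 50)    # sample times
    y = np.sin(2 * np.pi * 3 * t)    # stand-in for the collected data

    spline = CubicSpline(t, y)       # continuous piecewise-cubic description

    # Intersections with a threshold: roots of spline(t) - threshold.
    threshold = 0.5
    crossings = CubicSpline(t, y - threshold).roots(extrapolate=False)

    # Local maxima: critical points of the spline with negative curvature.
    critical = spline.derivative().roots(extrapolate=False)
    maxima = [float(c) for c in critical if spline(c, 2) < 0]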

FFT Pitch Detection for iOS using Accelerate Framework?

I have been reading up on FFT and Pitch Detection for a while now, but I'm having trouble piecing it all together.
I have worked out that the Accelerate framework is probably the best way to go for this, and I have read the example code from Apple to see how to use it for FFTs. But what is the input data for the FFT if I want to run pitch detection in real time? Do I just pass in the audio stream from the microphone? How would I do this?
Also, after I get the FFT output, how can I get the frequency from that? I have been reading everywhere and can't find any examples or explanations of this.
Thanks for any help.
Frequency and pitch are not the same thing: frequency is a physical quantity, while pitch is a psychological percept. They are similar, but there are important differences, which may or may not matter to you, depending on the type of instrument whose pitch you are trying to measure.
You need to read up a little on the various pitch detection algorithms (and on the meaning of pitch itself), decide what algorithm you want to use and only then set about implementing it. See this Wikipedia page for a good overview of pitch and pitch detection (note that you can use FFT for the autocorrelation-based and frequency domain methods).
As for using the FFT to identify peaks in a spectrum and their associated frequencies, there are many questions and answers related to this on SO already, see for example: How do I obtain the frequencies of each value in an FFT?
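As a quick reference, that bin-to-frequency mapping fits in a few lines. A Python/NumPy sketch (on iOS the same math would be done with vDSP), including the common parabolic-interpolation refinement of the peak:

    import numpy as np

    def dominant_frequency(samples, fs):
        # Window, FFT, find the loudest bin; bin k corresponds to k*fs/n Hz.
        n = len(samples)
        spectrum = np.abs(np.fft.rfft(samples * np.hanning(n)))
        k = int(np.argmax(spectrum[1:]) + 1)  # skip the DC bin
        if 1 <= k < len(spectrum) - 1:
            # Parabolic interpolation around the peak for sub-bin accuracy.
            a, b, c = spectrum[k - 1], spectrum[k], spectrum[k + 1]
            k = k + 0.5 * (a - c) / (a - 2 * b + c)
        return k * fs / n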
I have an example implementation of an autocorrelation function available online for iOS 5.1. Look at this post for a link to the implementation AND functions for finding the nearest note and creating a string representing the pitch (A, A#, B, C, etc.).
While the FFT is very useful in many applications, it might not be the most accurate choice for simple pitch detection. (It can be as accurate, but you have to deal with complex numbers and a lot of phase calculations.)
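For completeness, here is the time-domain alternative reduced to a naive Python sketch (a real iOS implementation would window the input and use vDSP; this assumes the block is longer than fs/fmin samples):

    import numpy as np

    def autocorr_pitch(samples, fs, fmin=50.0, fmax=1000.0):
        # The strongest autocorrelation peak at a plausible lag gives the
        # period; fs / lag is then the frequency estimate in Hz.
        x = np.asarray(samples, dtype=float)
        x -= x.mean()
        ac = np.correlate(x, x, mode='full')[len(x) - 1:]  # lags 0..N-1
        lo, hi = int(fs / fmax), int(fs / fmin)            # lag search window
        lag = lo + int(np.argmax(ac[lo:hi]))
        return fs / lag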