I am using an IMU at 50 Hz and a GPS at 4 Hz, so for every 12-13 IMU samples I get the same measurement from the GPS. Is there some way to predict the GPS position for the intermediate points (2-11)? I do not have any other data, just the position data from the GPS.
Thank you
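With position-only data, the two simplest approaches are linear interpolation (when processing logged data, so the next fix is already known) and constant-velocity extrapolation (for real-time use). A minimal sketch, assuming the fixes have already been converted to local metric coordinates; all names and numbers are illustrative:

```python
# Sketch: filling in positions between 4 Hz GPS fixes at the 50 Hz IMU rate.
# Assumes the fixes have been converted to local metric coordinates (x, y);
# all names and numbers are illustrative.

def interpolate_offline(p0, p1, n_steps):
    """Linear interpolation between two consecutive GPS fixes.
    Usable when processing logged data (the next fix is already known)."""
    return [(p0[0] + (p1[0] - p0[0]) * k / n_steps,
             p0[1] + (p1[1] - p0[1]) * k / n_steps)
            for k in range(1, n_steps)]

def predict_online(p_prev, p_curr, dt_gps, dt_imu, k):
    """Constant-velocity extrapolation for real-time use: estimate velocity
    from the last two fixes, then propagate k IMU steps past the newest fix."""
    vx = (p_curr[0] - p_prev[0]) / dt_gps
    vy = (p_curr[1] - p_prev[1]) / dt_gps
    t = k * dt_imu
    return (p_curr[0] + vx * t, p_curr[1] + vy * t)

# Example: two fixes 0.25 s apart, ~12 IMU samples in between
between = interpolate_offline((0.0, 0.0), (1.2, 0.6), 12)
ahead = predict_online((0.0, 0.0), (1.2, 0.6), 0.25, 0.02, 3)
```

For better real-time prediction you would fuse the IMU accelerations in a Kalman filter, but with position data alone, constant velocity is about the best model available.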
I have a vehicle with a tracker installed. The device has a GPS system, a 3-axis accelerometer, a 3-axis magnetometer, and a gyroscope. Is it possible to determine by how much the vehicle rolled back on a slope or hill? Using the GPS angle wasn't an option, as the angle given for a short backward movement isn't always reliable. Can the accelerometer be used in such a scenario?
You're right that the GPS angle (heading) will not help you in a single-antenna setup. On its own, a GPS receiver needs a minimum distance of movement to determine heading.
A simple GPS receiver, when used without GPS corrections (which is the case for off-the-shelf GPS devices and mobile phones/tablets), has an accuracy of roughly 5 meters at best. That's why a short backward movement will not yield the desired results.
In construction/mining applications, there is often a fixed GPS base station nearby that broadcasts GPS corrections, which allows a vehicle-mounted GPS receiver to apply them, reduce error, and ultimately reach centimeter-level accuracy.
So in conclusion, your 3-axis accelerometer will likely be the only sensor that you can rely on until your vehicle has rolled back at least 5 meters.
If your accelerometer is sensitive enough, you'll get measurable sensor values. However, if your rollback is very slow, such that the G-forces are almost imperceptible to the accelerometer, then you're out of luck.
This is assuming that you want near real-time detection of vehicle rollback.
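As a rough illustration of the accelerometer-only approach, one can low-pass the forward-axis reading to track the gravity component (the slope), subtract it, and integrate the remainder into a velocity estimate. This is only a sketch under assumed sample rates and thresholds; real integration drifts quickly and would need regular zero-velocity resets:

```python
# Sketch of rollback detection from the vehicle's forward accelerometer axis.
# Assumptions (not from the original post): sample rate, axis orientation,
# and thresholds are illustrative; the gravity component along the forward
# axis is removed with a slow low-pass estimate taken while parked.

DT = 0.02              # 50 Hz accelerometer sampling (assumed)
GRAVITY_ALPHA = 0.999  # slow low-pass to track the static (gravity) component

def detect_rollback(ax_samples, v_threshold=-0.05):
    """Integrate forward-axis acceleration; report True if the estimated
    velocity drops below v_threshold (i.e. the vehicle rolls backward)."""
    gravity = ax_samples[0]   # initial static reading while stationary
    velocity = 0.0
    for a in ax_samples:
        gravity = GRAVITY_ALPHA * gravity + (1 - GRAVITY_ALPHA) * a
        velocity += (a - gravity) * DT   # dead-reckoned velocity (drifts!)
        if velocity < v_threshold:
            return True
    return False
```

This also shows the limitation from the answer above: a very slow rollback produces `a - gravity` values near the sensor's noise floor, and the integrated velocity never crosses the threshold.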
If I scatter more infrared points around the room from separate infrared laser speckle projectors, and in doing so increase the point cloud resolution spread over objects in the room, will this result in a higher resolution 3d scan captured by the infrared camera on the Kinect 360? Or are there limitations on how much point cloud data the IR camera and/or Kinect software/hardware can process?
Found this paper which answered my question:
http://www.cs.cornell.edu/~hema/rgbd-workshop-2014/papers/Alhwarin.pdf
I'm using a high-accuracy GPS RTK setup to precisely locate a mobile robotic platform in the field (down to 10 cm accuracy). I also have a 9DOF IMU mounted on the platform (SparkFun 9DOF Razor IMU).
The question is: do I really need to perform sensor fusion between the IMU and GPS, as this ROS node does (http://wiki.ros.org/robot_localization), to estimate the robot pose? Or is it enough to read the pitch, yaw, and roll data from the IMU for the heading, along with longitude, latitude, and altitude from the GPS?
What cases make it essential to perform this type of fusion?
Thanks in advance
It is essential to perform fusion because:
1) Pitch, yaw, and roll data from the IMU are not perfect; they will drift over time due to gyro errors. The magnetometer in the IMU module limits this drift, but only crudely. Fusion allows the GPS RTK measurements to be used to continuously estimate the dominant error sources in the IMU and maintain better attitude information.
2) The IMU supports position estimation when GPS RTK is lost through signal blockage or any other outage, so that the robotic platform is not lost if GPS signals are interrupted.
Currently I'm attempting to read a 600 PPR optical encoder with a simple attachInterrupt() function through the built-in Cloud9 IDE (Node.js). The issue is that if the rotary encoder is rotated too quickly, position data is lost; it appears that the frequency of the signal from the encoder exceeds the interrupt's sampling rate.
My question is: is there a way to increase the sampling rate to somewhere in the range of 100 kHz? Currently it seems to sample at roughly 2 kHz.
Thank you for your help!
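For context on why counts disappear: a software quadrature decoder has to observe every A/B transition. The sketch below (illustrative Python, not BeagleBone code) shows the usual Gray-code transition table; any double transition between two samples lands on an "invalid" entry and is silently dropped, which is exactly the position loss described above:

```python
# Sketch: software quadrature decoding, to illustrate why missed samples lose
# position.  The table maps (previous AB state, new AB state) to -1/0/+1; if
# the pins toggle more than once between two interrupts, the transition is
# ambiguous (an "invalid" entry) and the count is silently dropped.
# Pin handling is hypothetical; state is encoded as (A << 1) | B.

QUAD_TABLE = [0, +1, -1, 0,     # from 00 to 00/01/10/11
              -1, 0, 0, +1,     # from 01
              +1, 0, 0, -1,     # from 10
              0, -1, +1, 0]     # from 11

class QuadratureDecoder:
    def __init__(self):
        self.prev = 0
        self.count = 0

    def sample(self, a, b):
        """Call on every interrupt/poll with the current A and B pin levels."""
        curr = (a << 1) | b
        self.count += QUAD_TABLE[(self.prev << 2) | curr]
        self.prev = curr
```

The practical consequence: software decoding only works if you sample faster than twice the maximum edge rate. Dedicated hardware quadrature counters avoid the problem by sampling far faster than any mechanical encoder can toggle.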
I just bought Digital heart beat rate sensor:
http://www.dealextreme.com/p/digital-heart-beat-rate-sensor-3-5mm-data-port-16009
I'm looking into how to make an iOS application that works with it.
The sensor has a 3.5 mm jack, and I can detect a signal with the audio framework on iOS.
Can you give me some guidelines on how to start turning these signals into heart beat rates?
That sensor looks rather like one I have here in my junk box. If so, it generates a voltage signal which depends on the pressure exerted on it by the skin against which it is pressed. If there is a strong pulse at the point of pressure, I see a signal on an oscilloscope which has a component at the pulse rate: so it is at a frequency of around 1-2 Hz.
This is WAY below the audio range, and in most audio interfaces it would be filtered out before it ever got to the audio-in ADC. I don't have a handy iPhone to check this on, but it would be bad design if the audio input did let such frequencies through. And Mr Jobs (R.I.P.) did not approve of bad design!
There is also a lot of interference at other frequencies: mains hum (50Hz here), and at lower frequencies spurious signals from muscle twitches.
To make this work, you would need some sort of signal conditioning. If it were up to me, I would use a high-input-impedance amplifier with about a 0.1 Hz - 10 Hz passband, followed by a voltage-to-frequency converter. That would give me a tone, which I could place in the audio band, whose frequency varied up and down as the pressure on the sensor changed. That would let me use fairly simple frequency-detection software to recover the pressure waveform, which could then be processed using autocorrelation or similar techniques to recover the heartbeat frequency. A DTMF decoder is not the right tool, though.
I did find when I played about with the sensor that it was very touchy, responding to almost everything going on, and it wouldn't be easy to pick out the heartbeat. Your sensor may be different, though.
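For the final step mentioned above, here is a sketch of recovering the heartbeat frequency from an already-conditioned pressure waveform by finding the autocorrelation peak within a plausible range of beat intervals. The sample rate and synthetic test signal are illustrative assumptions:

```python
import numpy as np

# Sketch of the last processing stage described above: given the recovered
# pressure waveform (already demodulated and band-limited to ~0.1-10 Hz),
# estimate the heartbeat rate from the autocorrelation peak.  Sample rate
# and the synthetic test signal are illustrative assumptions.

def heart_rate_bpm(signal, fs, min_bpm=40, max_bpm=200):
    """Return the dominant periodicity of `signal` (sampled at `fs` Hz),
    expressed in beats per minute, via the autocorrelation peak."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    lo = int(fs * 60.0 / max_bpm)     # shortest plausible beat interval
    hi = int(fs * 60.0 / min_bpm)     # longest plausible beat interval
    lag = lo + int(np.argmax(ac[lo:hi]))
    return 60.0 * fs / lag

# Example with a synthetic 1.2 Hz (72 bpm) pulse plus noise:
fs = 100.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
wave = np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.standard_normal(t.size)
bpm = heart_rate_bpm(wave, fs)
```

Restricting the lag search to physiologically plausible beat intervals is what keeps the estimate from locking onto mains hum or muscle-twitch interference at other frequencies.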