Why is a gyroscope needed if quaternions are present? - gyroscope

If the quaternions provide accurate orientation, why do the sensors also provide gyroscope data?

Related

Calibration of a magnetometer attached to a vehicle, as figure-8 calibration isn't possible in such a scenario

I was trying to find a way to calibrate a magnetometer attached to a vehicle, since the figure-8 method of calibration is not really possible on a vehicle.
Removing the magnetometer, calibrating it and fixing it back won't give exact results either, because re-mounting it on the vehicle introduces additional hard-iron distortion that was not present when it was calibrated outside the vehicle environment.
My device also has an accelerometer and GPS. Can I use the accelerometer or GPS data (these are calibrated) to automatically calibrate the magnetometer?
Given that you are not happy with the results of off-vehicle calibration, I doubt that accelerometer and GPS data will help you much unless you measure many times to average out the noise (although technically it depends on the precision of the sensors, so if you have a 0.001% accelerometer you might get very good data out of it and compensate for the inaccuracy of the GPS data).
From the question, I assume you want just 2D data and that you'll be using the Earth's magnetic field as the source (otherwise GPS wouldn't help). You might be better off renting a vehicle rotation stand for a day: it has a steady, well-known angular velocity, and you can record the magnetometer data for a long period of time (say an hour, over 500 rotations or so) and then process it by averaging out any noise.
Your vehicle will produce a different magnetic field while the engine is off, idle and running, so you might want to do three different experiments (or more, to deduce the effect of engine RPM on the magnetic field it produces). Also, if the magnetometer is located close to the passengers, you will have additional influences from them and their devices.
If a rotation stand is not available (or not affordable), you can run a calibration experiment with the GPS (whether to use the accelerometers as well depends on their precision) as follows:
find a large, flat, empty paved surface with no underground magnetic sources (walk around with your magnetometer to check)
put the vehicle into a turn on this surface and fix the steering wheel
use the cruise control to fix the speed
wait for a couple of circles to ensure they are equal
make a recording of 100 circles (or 500 to get better precision) and then average the GPS noise out
You can repeat this at different speeds to capture the influence of the engine's magnetic field as a function of its RPM.
I performed a similar procedure to calibrate the optical sensor on the steering wheel in order to build a model of the vehicle's angular rotation from the steering wheel angle and current speed. It does not produce very accurate results because the tires slip differently on different surfaces, but it should work well enough for your problem.
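As an illustration of processing such a recording, here is a minimal Python sketch (assuming you have logged 2D magnetometer X/Y samples while driving the circles; the file name and data layout are hypothetical) that estimates the hard-iron offset by least-squares fitting a circle to the samples:

```python
import numpy as np

def hard_iron_offset(mag_xy):
    """Estimate the 2D hard-iron offset from magnetometer samples recorded
    while the vehicle turns through full circles.

    mag_xy: (N, 2) array of raw magnetometer X/Y readings.
    Returns the (x, y) offset, i.e. the centre of the fitted circle.
    """
    x, y = mag_xy[:, 0], mag_xy[:, 1]
    # Algebraic circle fit: x^2 + y^2 + D*x + E*y + F = 0, solved by least squares.
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([-D / 2.0, -E / 2.0])

# Hypothetical usage: repeat per engine state (off / idle / running) and
# subtract the offset from every later reading before computing heading.
# samples = np.loadtxt("circles_engine_idle.csv", delimiter=",")
# offset = hard_iron_offset(samples)
# calibrated = samples - offset
```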

Forward and backward movement detection with IMU

We have an embedded device mounted in a vehicle. It has accelerometer, gyroscope and GPS sensors on board. The goal is to distinguish when the vehicle is moving forward and when it is moving backward (in reverse gear). The sensors' axes are aligned with the vehicle's axes.
Here are our observations:
It's not enough to check the direction of acceleration, because going backwards and braking while moving forward produce acceleration in the same direction.
We could say that if the GPS speed decreased from 70 to 60 km/h it was a braking event, but this becomes tricky when the speed is below 20 km/h: a decrease from 20 to 10 km/h is possible when going in either direction.
We can't rely on the GPS course angle at low speeds.
How could we approach this problem? Any ideas, articles or researches would be helpful.
You are looking for an Attitude and Heading Reference System (AHRS) implementation. Here's an open-source code library. It works by fusing the two data sources (IMU and GPS) to determine the location and the heading.
AHRS provides you with roll, pitch and yaw, which are the angles around the X, Y and Z axes of the IMU device.
There are different algorithms to do this; examples are the Madgwick and Mahony algorithms. They provide you with a quaternion and Euler angles, which can easily help you identify the orientation of the vehicle at any time.
This is a cool video of an AHRS algorithm running in real time.
A similar question is here.
EDIT
Without magnetometer data, you won't get high accuracy and your drift will increase over time.
Still, you can perform AHRS with 6DoF data (Acc XYZ and Gyr XYZ) using the Madgwick algorithm. You can find an implementation here. If you want to dive into the theory, have a look at Madgwick's internal report.
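For reference, here is a minimal Python sketch of one 6DoF Madgwick update step (gyroscope integration plus a gradient-descent correction towards the accelerometer's gravity direction), following Madgwick's report; the beta gain and variable names are illustrative, and the linked implementations should be preferred in practice:

```python
import numpy as np

def madgwick_imu_update(q, gyro, accel, dt, beta=0.1):
    """One 6DoF Madgwick step.
    q: quaternion [w, x, y, z]; gyro in rad/s; accel in any units (normalised below)."""
    q0, q1, q2, q3 = q
    gx, gy, gz = gyro
    ax, ay, az = np.asarray(accel, dtype=float) / np.linalg.norm(accel)

    # Rate of change of the quaternion from the gyroscope alone
    q_dot = 0.5 * np.array([
        -q1 * gx - q2 * gy - q3 * gz,
         q0 * gx + q2 * gz - q3 * gy,
         q0 * gy - q1 * gz + q3 * gx,
         q0 * gz + q1 * gy - q2 * gx,
    ])

    # Gradient-descent correction: align the estimated gravity direction with the accelerometer
    f = np.array([
        2.0 * (q1 * q3 - q0 * q2) - ax,
        2.0 * (q0 * q1 + q2 * q3) - ay,
        2.0 * (0.5 - q1 * q1 - q2 * q2) - az,
    ])
    J = np.array([
        [-2.0 * q2,  2.0 * q3, -2.0 * q0, 2.0 * q1],
        [ 2.0 * q1,  2.0 * q0,  2.0 * q3, 2.0 * q2],
        [ 0.0,      -4.0 * q1, -4.0 * q2, 0.0     ],
    ])
    step = J.T @ f
    step /= np.linalg.norm(step)

    q = np.asarray(q, dtype=float) + (q_dot - beta * step) * dt
    return q / np.linalg.norm(q)
```

The orientation from successive updates can then be compared against the GPS track, in the spirit of the IMU+GPS fusion described above, to help resolve the direction of travel.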
A Kalman filter could be an option for merging your 6DoF IMU with the GPS data, which could dramatically reduce the drift over time, but that requires a good understanding of Kalman filters and probably a custom implementation.

I want to calculate a stair step count using a pressure sensor and pedometer

Hi guys,
I want to make a wearable device that counts stair steps. I have an LPS25HB (pressure sensor) and an LSM6DS3 (accelerometer); the pressure sensor gives the altitude and the accelerometer gives the step count.
But my pressure sensor is a type of barometer and its value changes with atmospheric pressure; the readings vary by 3-5 feet.
All you are interested in is relative change in altitude. The absolute altitude is irrelevant - though you could get that by augmenting the data with GPS.
You cannot, of course, distinguish between stairs and a steep hill, but from a fitness monitoring point of view that too is perhaps irrelevant; "work done" (i.e. calories burned) is simply a function of the steps taken and the altitude gained.
Because local atmospheric pressure changes with weather conditions and temperature as well as with altitude, an air pressure sensor cannot be an accurate source of absolute altitude without calibration to some known reference under the prevailing conditions.
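As a rough Python sketch of working with relative altitude only, the standard hypsometric formula converts pressure to altitude, and only the differences between consecutive samples are used; the pressure units (hPa), the sample source and the nominal per-step rise are assumptions, not from the question:

```python
# Sketch: accumulate altitude gained from a stream of barometric readings.
SEA_LEVEL_HPA = 1013.25  # ISA reference; the absolute value cancels out of differences

def pressure_to_altitude_m(p_hpa, ref_hpa=SEA_LEVEL_HPA):
    # Standard barometric (hypsometric) formula; only relative changes matter here.
    return 44330.0 * (1.0 - (p_hpa / ref_hpa) ** (1.0 / 5.255))

def metres_climbed(pressure_samples_hpa):
    """Sum only the upward altitude changes over a recording."""
    alts = [pressure_to_altitude_m(p) for p in pressure_samples_hpa]
    return sum(max(b - a, 0.0) for a, b in zip(alts, alts[1:]))

# Combined with the accelerometer's step count, metres_climbed() divided by a
# nominal per-step rise (e.g. ~0.17 m) gives an estimate of stair steps, while
# steps with no altitude gain can be treated as ordinary walking.
```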

How can I compute optical flow from a depth image stream from a depth camera?

I have a depth camera feed already set up, and to make it more interesting I want to compute some data from it, such as normals, motion/optical flow and other data sets, to use for visual effects. I am particularly interested in optical flow and whether it can be computed from a depth-only stream.
Has this been implemented? If so, I'd like to know what the methods are and which one would be the easiest to use.
I worked on the Kinect depth camera and implemented a patient tracking algorithm. The algorithm itself is commercial and I cannot disclose the details, but I can give my two cents here.
The depth feed from the Kinect should not be used directly for optical flow (motion tracking) because of the no-depth (missing) pixels. You can use inpainting to fill in the gaps in the depth image. If you are using OpenCV, you can refer to the implementation here.
http://www.morethantechnical.com/2011/03/05/neat-opencv-smoothing-trick-when-kineacking-kinect-hacking-w-code/
I suggest applying a smoothing filter after inpainting to get smooth depth data near object edges; you can use the simple filters present in OpenCV on the depth stream. It also helps to scale the 16-bit depth down to an 8-bit image in order to visualize the disparity image.
I believe you can then use the resulting stream with optical flow algorithm from OpenCV. Here is an example.
http://docs.opencv.org/3.1.0/d7/d8b/tutorial_py_lucas_kanade.html#gsc.tab=0
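Roughly, that pipeline might look like the following in Python with OpenCV (inpaint the no-depth pixels, scale the 16-bit depth to 8 bit, smooth, then run dense Farneback flow); the frame source and parameter values are placeholders that would need tuning for a real depth camera:

```python
import cv2
import numpy as np

def depth_flow(prev_depth16, next_depth16):
    """Dense optical flow between two 16-bit single-channel depth frames."""
    frames8 = []
    for d in (prev_depth16, next_depth16):
        mask = (d == 0).astype(np.uint8)                      # no-depth pixels
        scale = 255.0 / max(int(d.max()), 1)
        d8 = cv2.convertScaleAbs(d, alpha=scale)              # 16-bit -> 8-bit
        d8 = cv2.inpaint(d8, mask, 3, cv2.INPAINT_TELEA)      # fill the gaps
        d8 = cv2.medianBlur(d8, 5)                            # smooth near edges
        frames8.append(d8)
    # Dense Farneback flow; cv2.calcOpticalFlowPyrLK works on the same 8-bit
    # frames if sparse feature tracks are enough for the effect.
    return cv2.calcOpticalFlowFarneback(frames8[0], frames8[1], None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
```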
You can also use the Dense Trajectories implementation, but I believe it is processor intensive and the final frame rate might be quite slow.
https://lear.inrialpes.fr/people/wang/dense_trajectories
Hope this helps.

How can the auto-focus of camera be explained using pinhole camera model?

Shifting the auto-focus on a real-world camera doesn't change the focal length, rotation, or any other camera parameter of the pinhole camera model. However, it does shift the image plane and affect the depth of field. How is this possible?
I understand that the complex mechanism of a real-world camera cannot be easily explained with the pinhole camera model. However, I believe there should be some link between them, as we use this simplified model in various real-world computer vision applications.
Short answer: it cannot. The pinhole camera model has no notion of 'focus'.
A more interesting question is, I think, the effect of changing the focusing distance on a pinhole approximation of the lens+camera combination, the approximation itself being estimated, for example, through a camera calibration procedure.
With "ordinary" consumer-type lenses having moderate non-linear distortion, usually one observes significant changes in:
The location of the principal point (which is in any case hard to estimate precisely, and is easily conflated with the center of the distortion)
The amount of nonlinear distortion (especially with cheaper lenses and wide FOV).
The "effective" field of view - due to the fact that a change in nonlinear distortion will "pull-in" a wider or thinner view at the edges.
The last item implies a change of the calibrated focal length, and this is sometimes "surprising" for novices, who are taught that a lens's focus and focal length do not mix. To convince yourself that the FOV change is in fact happening, visualize the bounding box of the undistorted image, which is "butterfly"-shaped in the common case of barrel distortion. The pinhole model FOV angle is twice the arctangent of the ratio between the image half-width and the calibrated approximation to the physical focal length (which is the distance between the sensor and the lens's last optical surface). Changing the distortion stretches or squeezes that half-width value.
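Written out, the relation described in the last paragraph (with w the image width and f the calibrated focal length, both in pixels) is simply

$$\mathrm{FOV} = 2\,\arctan\!\left(\frac{w/2}{f}\right),$$

so when refocusing changes the distortion and thereby stretches or squeezes the effective half-width w/2, the field of view and the calibrated focal length recovered by calibration shift with it.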