Remove gravity from IMU accelerometer - embedded

I've found this beautiful, quick way to remove gravity from accelerometer readings. However, I have a 6-DOF IMU (XYZ gyro, XYZ accel, no magnetometer), so I'm not sure I can use this code (I tried, and it doesn't work correctly).
How would someone remove the gravity component? It's a big obstacle because I can't proceed with my project.
EDIT:
What I have:
a quaternion describing the orientation of the aircraft (obtained with an Extended Kalman Filter)
accelerometer readings (unfiltered; axes aligned with the aircraft body frame; gravity is also included in these readings)
What I want:
remove the gravity
correct (rotate) the accelerometer readings so that their axes are aligned with the axes of the Earth frame of reference
read the acceleration towards the Earth (now the Z component of the accelerometer)
Basically, I want to read the acceleration towards the Earth no matter how the plane is oriented! But the first step is to remove gravity, I guess.

UPDATE: OK, so what you need is to rotate a vector with a quaternion. See here or here.
You rotate the measured acceleration vector with the quaternion (corresponding to the orientation), then you subtract gravity [0, 0, 9.81] (you may have -9.81, depending on your sign conventions) from the result. That's all.
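A minimal sketch of that step in Python/NumPy (not taken from the linked manuscript; it assumes a unit quaternion in (w, x, y, z) order that rotates body-frame vectors into the Earth frame, so flip the convention if yours is the opposite):

    import numpy as np

    GRAVITY = np.array([0.0, 0.0, 9.81])  # use -9.81 if that matches your sign convention

    def quat_rotate(q, v):
        """Rotate vector v by the unit quaternion q = (w, x, y, z)."""
        w, x, y, z = q
        # Standard quaternion-to-rotation-matrix conversion.
        R = np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])
        return R @ v

    def linear_acceleration(q, acc_body):
        """Measured body-frame acceleration -> Earth-frame acceleration without gravity."""
        return quat_rotate(q, acc_body) - GRAVITY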
I have implemented sensor fusion for Shimmer 2 devices based on this manuscript, I highly recommend it. It only uses accelerometers and gyroscopes but no magnetometer, and does exactly what you are looking for.
The resource you link to in your question is misleading: it relies on the quaternion that comes from sensor fusion. In other words, somebody already did the heavy lifting for you and already prepared the gravity compensation.

Related

Detecting Angle using Kinect

I have a flat pan and am using a Kinect v1.
I want to measure the angle of the pan using the Kinect camera.
For example, if I place the pan at a 45-degree angle, the Kinect should read the closest (or exact) angle at which it is placed.
Is this possible, or are there any solutions?
Thanks.
I don't know exactly how the data comes back in Kinect V1 but I believe this methodology should work for you.
First: You have to assume that the Kinect is your level of reference; if you need the pan's angle relative to the ground, make sure the Kinect is level with the ground.
Second: Separate the pan data from all other data. This should be straightforward: the pan should be the closest object, so convert the closest measurements into 3D coordinate points (an array of x, y, z).
Third: Assuming you want the horizontal angle, find the highest and lowest points of data and average their depth from the camera. Then save both those depths and the vertical distance between them.
Fourth: Now you can essentially do the math for a triangle. Given that you know the width of the pan (knowing the object's size saves steps; otherwise you have to estimate that too), you can solve a triangle with side a: distance to point 1, side b: distance to point 2, side c: size of the pan. Finding the angle where sides a and c (or b and c) meet gives you the horizontal angle of the pan relative to the Kinect (a rough sketch of that calculation follows after these steps).
Fifth: To verify that your measurements came back correct, you can use the angle you found to calculate the width of the pan from that angle and the distances to the top-most and bottom-most points.
Needless to say, you need to make sure that your understanding of trig is solid for this task.
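For the fourth step, a rough law-of-cosines sketch (the function name and the choice of which corner angle to report are illustrative, not from the steps above):

    import math

    def pan_angle_deg(dist_top, dist_bottom, pan_width):
        """Interior angle of the triangle at the corner where the 'bottom'
        distance meets the pan, given two measured depths and the pan width."""
        a, b, c = dist_top, dist_bottom, pan_width
        # Law of cosines: a^2 = b^2 + c^2 - 2*b*c*cos(A)
        cos_a = (b * b + c * c - a * a) / (2.0 * b * c)
        cos_a = max(-1.0, min(1.0, cos_a))  # guard against rounding noise
        return math.degrees(math.acos(cos_a))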

Optical Flow egomotion estimation

Below you can see the result of the optical flow if a camera makes a translational movement. If the camera makes a roll rotation, the result looks like the second picture. Is it possible to retrieve the yaw angle of a camera if it only rotates around the yaw axis?
I think from the optical flow you can recognize whether the camera is rotating around the yaw axis (z-axis), but I don't know how to retrieve how much the camera has rotated.
I would be grateful for any hints. Thanks.
(Figures: optical flow for translation, optical flow for roll rotation, orientation of the camera.)
If you have a pure rotation of your camera, then you can use findHomography. You need four point correspondences between your pictures. For a pure rotation (and normalized, calibrated image coordinates), the homography matrix is itself a rotation matrix; otherwise you need to decompose the homography matrix. For a camera movement with 6 DOF you can use findEssentialMat and decompose the essential matrix into a translation and a rotation.
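A sketch of the pure-rotation case with OpenCV's Python bindings (pts1/pts2 are assumed to be matched pixel coordinates of a static scene and K the calibrated intrinsics matrix; the Euler extraction treats the z-axis as the yaw axis, as in the question):

    import numpy as np
    import cv2

    def yaw_from_pure_rotation(pts1, pts2, K):
        """Estimate rotation about the z-axis between two views, assuming the
        camera only rotated (no translation)."""
        H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC)
        R = np.linalg.inv(K) @ H @ K   # for pure rotation, H = K * R * K^-1 (up to scale)
        U, _, Vt = np.linalg.svd(R)    # project back onto a proper rotation matrix
        R = U @ Vt
        return np.degrees(np.arctan2(R[1, 0], R[0, 0]))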

How to calibrate a camera and a robot

I have a robot and a camera. The robot is just a 3D printer where I replaced the extruder with a tool, so it doesn't print, but it moves every axis independently. The bed is transparent, and below the bed there is a camera that never moves. It is just a normal webcam (PlayStation Eye).
I want to calibrate the robot and the camera so that when I click on a pixel in an image provided by the camera, the robot will go there. I know I can measure the translation and rotation between the two frames, but that will probably introduce a lot of error.
So that's my question: how can I relate the camera and the robot? The camera is already calibrated using chessboards.
To make everything easier, the Z-axis can be ignored, so the calibration will be over X and Y only.
It depends on what error is acceptable for you.
We have a similar setup, where a camera looks at a plane with an object on it that can be moved.
We assume that the image and the plane are parallel.
First, let's calculate the rotation. Put the tool in a position where you can see it at the center of the image, move it along one axis, and select the point in the image corresponding to the new tool position.
Those two points give you a vector in the image coordinate system.
The angle between this vector and the original image axis gives the rotation.
The scale may be calculated in a similar way: knowing the vector length (in pixels) and the distance between the tool positions (in mm or cm) gives you the scale factor between the image and real-world axes.
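A minimal sketch of that two-point calibration (all names, and the assumption that the first tool position serves as the image-frame origin, are illustrative):

    import math

    def calibrate(p_start_px, p_end_px, robot_travel_mm):
        """Rotation and scale from the two tool positions seen in the image."""
        dx = p_end_px[0] - p_start_px[0]
        dy = p_end_px[1] - p_start_px[1]
        angle = math.atan2(dy, dx)                    # image axis vs. robot axis
        scale = robot_travel_mm / math.hypot(dx, dy)  # mm per pixel
        return angle, scale

    def pixel_to_robot(pixel, origin_px, angle, scale):
        """Map a clicked pixel to robot XY (mm), relative to the calibration origin."""
        u = pixel[0] - origin_px[0]
        v = pixel[1] - origin_px[1]
        x = scale * (u * math.cos(-angle) - v * math.sin(-angle))
        y = scale * (u * math.sin(-angle) + v * math.cos(-angle))
        return x, y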
If this method doesn't provide enough accuracy, you can calibrate the camera for distortion and for its relative position to the plane using computer vision techniques, which is more complicated.
See the following links
http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html
http://dasl.mem.drexel.edu/~noahKuntz/openCVTut10.html

Creating seamless worldmaps with Fractal Brownian Motion

I'm creating heightmaps using Fractal Brownian Motion. I'm then coloring it based on the heights and mapping it to a sphere. My problem is that the heightmap doesn't wrap seamlessly. I've used the Diamond Square algorithm and it's pretty easy to make things seamless using it, but I can't seem to figure out how to do it with fBm and I seem to be having trouble finding an explanation for it on the web.
To clarify, by "seamless", I mean that when I map it to a sphere, it creates a seamless map on the sphere.
Instead of calculating the heightmap per pixel in 2D, calculate the height in 3D space based on each point on the sphere and then map that to an image pixel. You're going to have trouble wrapping a 2D, rectangular heightmap like that onto a sphere without getting ugly results at the poles unless you start your calculations from the sphere.
fBm generalizes to 3 dimensions, so given a point on the sphere you can get the height at that point, and then map that value to where it should be stored in the heightmap image (see the sketch below).
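A sketch of that idea (noise3 is a stand-in for any 3D noise function such as simplex or Perlin noise; it is not implemented here):

    import math

    def fbm3(noise3, x, y, z, octaves=6, lacunarity=2.0, gain=0.5):
        """Fractal Brownian Motion: sum of scaled octaves of a 3D noise function."""
        amplitude, frequency, total = 1.0, 1.0, 0.0
        for _ in range(octaves):
            total += amplitude * noise3(x * frequency, y * frequency, z * frequency)
            amplitude *= gain
            frequency *= lacunarity
        return total

    def build_heightmap(noise3, width, height):
        """Equirectangular heightmap sampled on the unit sphere, so it wraps seamlessly."""
        heightmap = [[0.0] * width for _ in range(height)]
        for row in range(height):
            lat = math.pi * (row + 0.5) / height - math.pi / 2  # latitude in [-pi/2, pi/2]
            for col in range(width):
                lon = 2.0 * math.pi * (col + 0.5) / width       # longitude in [0, 2*pi)
                # Point on the unit sphere for this heightmap pixel.
                x = math.cos(lat) * math.cos(lon)
                y = math.cos(lat) * math.sin(lon)
                z = math.sin(lat)
                heightmap[row][col] = fbm3(noise3, x, y, z)
        return heightmap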
Or you could use one of the traditional map projections. A cylindrical projection (x, y)->(x, sin y) would give you a seam of just one meridian, which you could rotate to the back. Or you could "antialias" the edge by one or another means.
With a stereographic projection (x, y, z) -> (x/(z+1), y/(z+1)), there's only one singular point (the projection point itself).

Working with Accelerometer

I am working on gestures using acceleration values (x, y, z) from a device.
If I hold the device in my hand in a resting position, (x, y, z) = (0, 0, 0). But if I change the orientation of the device (still at rest), the values change to something like (766, 766, 821), since all the x, y, z axes have changed relative to their original orientation.
Is there any way (a trigonometric function or otherwise) to resolve this issue?
The acceleration due to gravity will always be present. It appears you are subtracting that value from one of the axes when the device is in a particular orientation.
What you will need to do to detect gestures is to detect the tiny difference from the acceleration due to gravity that momentarily appears as the device begins moving. You won't be able to detect whether the device is stationary or moving at a constant velocity, but you will be able to determine if it is turning or being accelerated.
The (x, y, z) values give you a vector, which gives the direction of the acceleration. You can compute the (square of the) length of this vector as x^2 + y^2 + z^2. If this is the same as when the device is at rest, then you know the device is unaccelerated, but in a certain orientation (either at rest, or moving at a constant velocity).
To detect movement, you need to notice the momentary change in the length of this vector as the device begins to move, and again when it is brought to a stop. This change will likely be tiny compared to gravity.
You will need to compare the orientation of the acceleration vector during the movement to determine the direction of the motion. Note that you won't be able to distinguish every gesture. For example, moving the device forward (and stopping there) has the same effect as tilting the device slightly, and then bringing it back to the same orientation.
The easier gestures to detect are those which change the orientation of the device. Other gestures, such as a punching motion, will be harder to detect. They will show up as a change in the length of the acceleration vector, but the amount of change will likely be tiny.
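As a rough sketch of that length check (the threshold value is illustrative and depends on your sensor's noise):

    import math

    GRAVITY = 9.81     # length of the acceleration vector at rest, in m/s^2
    THRESHOLD = 0.3    # tune for your sensor's noise level

    def is_accelerating(x, y, z):
        """True when the vector length deviates from gravity, i.e. the device is
        being accelerated or decelerated (not merely tilted)."""
        magnitude = math.sqrt(x * x + y * y + z * z)
        return abs(magnitude - GRAVITY) > THRESHOLD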
EDIT:
The above discussion is for normalized values of x, y, and z. You will need to determine the values to subtract from the readings to get the vector. From a comment above, it looks like 766 is the "zero" value to subtract, but it might be different for the different axes on your device. Measure the readings with the device oriented in all six directions; that is, get the maximum and minimum values for x, y, and z. The central value for each axis should be halfway between the extremes (and hopefully 766).
Certain gestures will have telltale signatures.
Dropping the device will reduce the acceleration vector momentarily, then increase it momentarily as the device is brought to a stop.
Raising the device will increase the vector momentarily, before decreasing it momentarily.
A forward motion will increase the vector momentarily, but tilt it slightly forward, then increase it again momentarily, but tilted backward, as the device is brought to a stop.
Most of the time the length of the vector will equal the acceleration due to gravity.
If the device is not compensating automatically for the gravitational acceleration, you need to subtract the (0, 0, ~9.8 m/s^2) vector from the output of the device.
However, you will also need the orientation of the device (Euler angles or a rotation matrix). If your device isn't providing that, it's basically impossible to tell whether the reported acceleration is caused by actually moving the device (linear acceleration) or by simply rotating it (gravity changing direction).
Your compensated acceleration will become:
OutputAcc = InputAcc x RotMat - (0,0,9.8)
This way your OutputAcc vector will always be in a level coordinate frame (i.e. Z is always up).
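In code, that compensation might look like this (a sketch; the rotation matrix is assumed to map the device frame into the level frame):

    import numpy as np

    GRAVITY = np.array([0.0, 0.0, 9.8])

    def compensate(acc_raw, rot_mat):
        """acc_raw: (3,) raw reading; rot_mat: (3,3) device-to-level rotation matrix."""
        return rot_mat @ acc_raw - GRAVITY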
I find your question unclear. What exactly do you measure and what do you expect?
In general, an accelerometer will, if held in a fixed position, measure the gravity of the Earth. This is displayed as acceleration upwards, which might sound strange at first but is completely correct: as gravity accelerates "down" and the device is in a fixed position, some force in the opposite direction, i.e. "up", needs to be applied. The force you need to hold the device in a fixed position is this force, which has a corresponding acceleration in the "up" direction.
Depending on your device, this gravity acceleration might be subtracted before you get the values on the PC. But if you turn the accelerometer, the gravity acceleration is still around and still points in the same "up" direction. If, before turning the accelerometer, "up" corresponded to x, it will correspond to a different axis when turned 90°, say y. Thus, the measured acceleration on both the x and y axes will change.
So to answer your question it's necessary to know how your accelerometer presents the values. I doubt that in a resting position the acceleration values measured are (0, 0, 0).
Your comment makes your question clearer. What you need to do is calibrate your accelerometer every time the orientation changes. There is no getting around this. You could make it a UI element in your application, or, if it fits your use case, recalibrate to 0 if the acceleration is relatively constant for some amount of time (this won't work if you measure long accelerations).
Calibration is either built into the device's API (check the documentation) or something you have to do manually. To do it manually, read the current acceleration and store those 3 values. Then, whenever you take a reading from the device, subtract those 3 values from each read value.
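A sketch of that manual calibration (class and method names are illustrative):

    class AccelCalibrator:
        def __init__(self):
            self.offset = (0, 0, 0)

        def calibrate(self, x, y, z):
            """Call while the device is held still in its current orientation."""
            self.offset = (x, y, z)

        def read(self, x, y, z):
            """Return a reading with the stored rest offset removed."""
            ox, oy, oz = self.offset
            return (x - ox, y - oy, z - oz)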