Optical Flow egomotion estimation - camera

Below you can see the result of the optical flow when the camera makes a translational movement. If the camera makes a roll rotation, the result looks like the second picture. Is it possible to retrieve the yaw angle from a camera if it only rotates around the yaw axis?
I think you can recognize from the optical flow whether the camera is rotating around the yaw axis (z-axis), but I don't know how to retrieve how much the camera has rotated.
I would be grateful for any hints. Thanks.
Translation: [image]
Roll rotation: [image]
Orientation of camera: [image]

If you have a pure rotation of your camera, you can use findHomography; you need four point correspondences between your pictures. For a pure rotation, the homography matrix (expressed in normalized camera coordinates) is already a rotation matrix; otherwise you need to decompose the homography matrix. For a camera movement with all 6 DOF, you can use findEssentialMat and decompose the essential matrix into a translation and a rotation.
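A minimal sketch of that recipe in Python/OpenCV, assuming a calibrated camera with known intrinsic matrix K and already-matched points pts1/pts2 (e.g. tracked optical-flow features); those inputs are assumptions, not given in the question:

import cv2
import numpy as np

def yaw_from_pure_rotation(pts1, pts2, K):
    # Homography mapping points in image 1 to points in image 2
    H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC)
    # For a pure rotation H = K R K^-1, so strip the intrinsics
    R = np.linalg.inv(K) @ H @ K
    # H is only defined up to scale, so normalize det(R) to 1
    R /= np.cbrt(np.linalg.det(R))
    # Rotation angle about the z-axis (the question's yaw axis), in degrees
    return np.degrees(np.arctan2(R[1, 0], R[0, 0]))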

Related

How to calculate Distance D to object from a tilted camera with known H?

If the camera is not tilted, it is easy to get the distance. However, can you refer me to a source where the camera is tilted at some angle about the y-axis? By tilt I mean pitch. Also, the camera is not looking directly at the object.
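For the flat-ground case the usual pinhole geometry still works with a pitched camera. A hedged sketch in Python, under assumptions the question doesn't state: camera at height H above a flat ground plane, downward pitch pitch_rad, focal length fy and principal point cy in pixels, and the object's base imaged at pixel row v:

import math

def ground_distance(H, pitch_rad, v, cy, fy):
    # Depression angle of the ray through row v relative to the optical
    # axis (positive when the pixel lies below the principal point)
    ray = math.atan((v - cy) / fy)
    # Intersect the ray with the ground plane: D = H / tan(total angle)
    return H / math.tan(pitch_rad + ray)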

Compute absolute rotation matrix from relative rotation matrix

Using a homography matrix, I am able to find a mapping from one image to another. From this matrix I can also compute a relative rotation matrix between the two images. How can I then compute an absolute rotation matrix? And what are the differences between these two matrices?
General points:
A general homography between images does not imply a camera motion that is a pure rotation.
However, camera motion that is a pure rotation, or whose translation is very small compared to the distance between the camera and the scene, is well modeled by a homography.
Specifically to your question:
A "relative" rotation is just that, a motion from the orientation of the first camera to the one of the second camera.
An "absolute" rotation, or orientation, describes a motion with respect to a specified "reference" coordinate frame that is constant and independent of the camera motion.
As a special case, if you have only two camera poses, and you use the first one as the reference, then the relative pose of the second one is also its absolute pose.
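A minimal numpy sketch of that relationship (the 10-degree relative rotation is just an illustrative value):

import numpy as np

def rot_z(deg):
    # Rotation matrix for a rotation of deg degrees about the z-axis
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

R_abs1 = np.eye(3)        # choose camera 1 as the reference frame
R_rel = rot_z(10)         # relative rotation from camera 1 to camera 2
R_abs2 = R_rel @ R_abs1   # absolute orientation of camera 2
# Chaining further images works the same way: R_abs3 = R_rel_23 @ R_abs2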

Three.js camera understanding

Here's the task:
We have a Mesh, drawn at position POS with rotation ROT.
We also have a camera whose position and rotation are relative to the Mesh; for example, the camera position is CPOS and the camera rotation is CROT.
How do I calculate the resulting angles for the camera? I was assuming it is something like:
camera.rotation.x = mesh.rotation.x + viewport.rotation.x
camera.rotation.y = mesh.rotation.y + viewport.rotation.y
camera.rotation.z = mesh.rotation.z + viewport.rotation.z
That worked strangely and gave wrong results.
Then I decided to read about it in the docs and was completely disappointed.
There are several kinds of rotation structures (Euler, Quaternion), but what I want is something different.
Imagine you are on a spaceship moving through space. You are sitting at the starboard turret and looking at objects. They seem to pass by...
Then you want to turn your head. The angle of your head is known to you (in raw OpenGL, I'd just multiply the head rotation matrix by the ship's rotation matrix and get my projection matrix).
In other words, I want only the x and y axes for the camera rotation, combined into a matrix. Then I want to multiply it by the position-rotation matrix of an object, and this final matrix would be my projection matrix.
How could I do the same in THREE.js?
-----EDIT-----
Thank you for the answer.
Which coordinates should I give the camera? Should they be local, mesh-relative coordinates, or something absolute?
I understand that these questions are obvious, but there isn't any description of relative objects in the THREE.js docs (besides the API description), and the answer might be ambiguous.
Add the camera as a child of the mesh like so:
mesh.add( camera );
When the camera is a child of an object, the camera's position and orientation are specified relative to the parent object.
You can set the camera's orientation by setting either the camera's quaternion or Euler rotation -- your choice.
Please note that the renderer updates the object's matrix and matrixWorld for you. You do not need to do that manually.
three.js r.63

Remove gravity from IMU accelerometer

I've found this beautiful quick way to remove gravity from accelerometer readings. However, I have a 6-DOF IMU (x/y/z gyro, x/y/z accel, no magnetometer), so I am not sure if I can use this code (I tried, and it doesn't work correctly).
How would someone remove the gravity component? It's a big obstacle because I can't proceed with my project.
EDIT:
What I have:
a quaternion describing the orientation of the aircraft (obtained using an Extended Kalman Filter)
accelerometer readings (unfiltered; axes aligned with the plane's body frame; gravity is also incorporated in these readings)
What I want:
remove the gravity
correct (rotate) the accelerometer readings so their axes are aligned with the earth frame's axes
read the acceleration towards the earth (now the Z component of the accelerometer)
Basically I want to read the acceleration towards the earth no matter how the plane is oriented! But the first step is to remove gravity, I guess.
UPDATE: OK, so what you need is to rotate a vector with a quaternion. See here or here.
You rotate the measured acceleration vector with the quaternion (corresponding to the orientation), then you subtract gravity [0, 0, 9.81] (you may have -9.81 depending on your sign conventions) from the result. That's all.
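A minimal sketch of that recipe in plain numpy, under two assumptions not stated in the question: the quaternion is in [w, x, y, z] order, and it rotates body-frame vectors into the earth frame (if yours goes the other way, use its conjugate):

import numpy as np

def quat_rotate(q, v):
    # Rotate vector v by the unit quaternion q = [w, x, y, z]
    w, x, y, z = q
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    return R @ np.asarray(v, dtype=float)

def linear_acceleration(q, accel_body):
    # Rotate the measurement into the earth frame, then remove gravity
    # (the sign of 9.81 depends on your conventions)
    return quat_rotate(q, accel_body) - np.array([0.0, 0.0, 9.81])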
I have implemented sensor fusion for Shimmer 2 devices based on this manuscript, and I highly recommend it. It uses only accelerometers and gyroscopes, no magnetometer, and does exactly what you are looking for.
The resource you link to in your question is misleading. It relies on the quaternion that comes from sensor fusion. In other words, somebody already did the heavy lifting and prepared the gravity compensation for you.

How to calibrate a camera and a robot

I have a robot and a camera. The robot is just a 3D printer where I replaced the extruder with a tool, so it doesn't print, but it moves every axis independently. The bed is transparent, and below the bed there is a camera that never moves. It is just a normal webcam (a PlayStation Eye).
I want to calibrate the robot and the camera so that when I click on a pixel in an image provided by the camera, the robot will go there. I know I can measure the translation and the rotation between the two frames, but that will probably introduce a lot of error.
So that's my question: how can I relate the camera and the robot? The camera is already calibrated using chessboards.
In order to make everything easier, the Z-axis can be ignored. So the calibration will be over X and Y.
It depends on what error is acceptable to you.
We have a similar setup, where a camera looks at a plane with an object on it that can be moved.
We assume that the image and the plane are parallel.
First let's calculate the rotation. Put the tool in a position where you see it at the center of the image, move it along one axis, and select the point in the image that corresponds to the new tool position.
Those two points give you a vector in the image coordinate system.
The angle between this vector and the original image axis gives the rotation.
The scale may be calculated in a similar way: the vector length (in pixels) and the distance between the tool positions (in mm or cm) give you the scale factor between the image and real-world axes.
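A hedged sketch of that two-point procedure in Python (the names are made up for illustration; p0 and p1 are the clicked pixel positions of the tool before and after moving it a known distance dist_mm along one robot axis):

import numpy as np

def calibrate(p0, p1, dist_mm):
    p0 = np.asarray(p0, dtype=float)
    v = np.asarray(p1, dtype=float) - p0
    angle = np.arctan2(v[1], v[0])       # rotation between image and robot axes
    scale = dist_mm / np.linalg.norm(v)  # mm per pixel
    c, s = np.cos(-angle), np.sin(-angle)
    R = np.array([[c, -s], [s, c]])      # undoes the rotation
    def pixel_to_robot(p):
        # Map a clicked pixel to robot X/Y, with p0 as the robot origin
        return scale * (R @ (np.asarray(p, dtype=float) - p0))
    return pixel_to_robot

# Example: the tool was moved 50 mm along the robot's X axis
to_robot = calibrate((320, 240), (420, 260), 50.0)
print(to_robot((400, 300)))  # robot X/Y in mm for a clicked pixel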
If this method doesn't provide enough accuracy, you may calibrate the camera for distortion and its relative position to the plane using computer vision techniques, which is more complicated.
See the following links:
http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html
http://dasl.mem.drexel.edu/~noahKuntz/openCVTut10.html