Compute absolute rotation matrix from relative rotation matrix - numpy

Using a homography matrix, I am able to find a mapping from one image to another. From this matrix I can also compute a relative rotation matrix between the two images. How can I then compute an absolute rotation matrix? And what are the differences between these two matrices?

General points:
A general homography between images does not imply a camera motion that is a pure rotation.
However, camera motion that is a pure rotation, or whose translation is very small compared to the distance from the camera to the scene, is well modeled by a homography.
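As a concrete illustration of the pure-rotation case: the homography then factors (up to scale) as H = K * R * inv(K), where K is the intrinsic matrix, so the relative rotation can be recovered from H and K. A minimal numpy sketch, with the SVD step absorbing the unknown scale and noise:

```python
import numpy as np

def rotation_from_homography(H, K):
    """Recover the relative rotation from a homography H, assuming the
    camera motion is a pure rotation, so that H ~ K @ R @ inv(K)."""
    R = np.linalg.inv(K) @ H @ K
    # H is only defined up to scale and is noisy, so project the result
    # onto the nearest rotation matrix with an SVD.
    U, _, Vt = np.linalg.svd(R)
    R = U @ Vt
    if np.linalg.det(R) < 0:  # guard against an accidental reflection
        R = -R
    return R
```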
Specifically to your question:
A "relative" rotation is just that, a motion from the orientation of the first camera to the one of the second camera.
An "absolute" rotation, or orientation, describes a motion with respect to a specified "reference" coordinate frame that is constant and independent of the camera motion.
As a special case, if you have only two camera poses, and you use the first one as the reference, then the relative pose of the second one is also its absolute pose.
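In numpy terms, taking the first camera as the reference means its absolute orientation is the identity, and each later absolute orientation is obtained by chaining the relative rotations. A small sketch, assuming each R_rel maps the previous camera frame to the next one (if your relative rotations are defined the other way around, swap the multiplication order):

```python
import numpy as np

def accumulate_orientations(relative_rotations):
    """Chain relative rotations into absolute orientations, using the
    first camera frame as the reference (identity)."""
    R_abs = np.eye(3)
    absolute = [R_abs]
    for R_rel in relative_rotations:
        R_abs = R_rel @ R_abs  # orientation of the next camera w.r.t. the reference
        absolute.append(R_abs)
    return absolute
```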

Related

Extracting a plane from an image taken by a camera

I have a camera at a known fixed location and orientation.
I also have a plane at a known location whose z position changes.
I want to turn the image from the camera into a top down view of the plane.
I can do this without knowing any positions by using the 4 points of the plane to compute a homography matrix and warping the image, but each time the plane moves in Z I have to repeat this process.
After searching around online, most methods seem to center on finding features of the image (using SIFT or something like it) and then computing a homography matrix.
With the problem so constrained, I thought there might be a simple linear-algebra-based approach.
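For reference, the 4-point warp described above looks like this in OpenCV (the corner pixel coordinates, file name, and the 400x400 output size are placeholders):

```python
import cv2
import numpy as np

image = cv2.imread("frame.png")

# Image coordinates of the plane's four corners (placeholder values),
# and the desired top-down output rectangle in pixels.
src = np.float32([[100, 200], [500, 180], [520, 600], [80, 620]])
dst = np.float32([[0, 0], [400, 0], [400, 400], [0, 400]])

H = cv2.getPerspectiveTransform(src, dst)             # 3x3 homography
top_down = cv2.warpPerspective(image, H, (400, 400))  # top-down view
```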

Camera's extrinsic matrix

I am trying to use MATLAB's camera calibrator to calibrate an infrared camera. I was able to get the intrinsic matrix by just feeding around 100 images to the calibrator. But I'm struggling with how to get the extrinsic matrix [R|t].
Because the extrinsic matrix maps the world frame to the camera frame, in theory, when the camera (or object) is moving, there will be many extrinsic matrices.
In the picture below, if the intrinsic matrix is determined using 50 images, then there are 50 extrinsic matrices, one corresponding to each image. Am I correct?
You are right. Usually, a by-product of an intrinsic calibration is the extrinsic matrix for each pattern observed; this is mostly used to draw the patterns with respect to the camera as in the picture you posted.
What you usually do afterwards is define some external reference frame that makes sense for your application, also known as the 'world' reference frame, and compute the pose of the camera with respect to it. That's the extrinsic matrix you always hear about.
For this, you:
Define the reference frame and take some points with known 3D coordinates on it; this can be a grid drawn on the floor, for example.
Take a picture of the 3D points with the calibrated camera and get a list of the corresponding 2D (image) coordinates of the points.
Use a pose estimation function that takes the camera intrinsic parameters, the 3D points, and the corresponding 2D image points (see the sketch after this list). I am more familiar with OpenCV, but the MATLAB function that seems to do the job is: https://www.mathworks.com/help/vision/ref/estimateworldcamerapose.html
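For illustration, here is roughly what that last step looks like in OpenCV's Python API; all numeric values below are placeholders, and cv2.solvePnP plays the role of the pose estimation function:

```python
import cv2
import numpy as np

# 3D points with known coordinates in the world frame, e.g. a grid
# drawn on the floor (units: meters). Values are illustrative.
object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                          [0.5, 0.5, 0], [1.5, 1.0, 0]], dtype=np.float64)
# Their corresponding 2D detections in the image (pixels), illustrative.
image_points = np.array([[320, 400], [480, 410], [470, 300], [310, 290],
                         [395, 350], [545, 300]], dtype=np.float64)
# Intrinsics from the earlier calibration step (illustrative values).
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)
dist = np.zeros(5)  # assume lens distortion was already corrected

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)         # rotation: world frame -> camera frame
extrinsic = np.hstack([R, tvec])   # the 3x4 extrinsic matrix [R|t]
```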

Pinhole camera model - Finding rotation from optical axis

In the pinhole camera model, is it possible to determine the rotation required from the optical/principal axis (the axis which pierces the image plane) to intersect a given pixel coordinate (u,v)?
I have an image in which I am detecting a marker in space, and I have the intrinsic and extrinsic camera parameters available. I am using the extrinsic parameters to cast a 2D ray into a separately constructed map (which is overhead and 2D); however, I would like the ray angle to change depending on whether the detected marker is to the left or right within the image.
My first thought was to use arctan with the focal length and the u coordinate (x axis on the image plane, measured from the center of the image) to determine an angle; however, I don't think the units of measurement cooperate: one should be in real-world meters and the other is arbitrary pixels.
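For what it's worth, the units do cooperate if the focal length is taken from the intrinsic matrix, where it is already expressed in pixels; the angle between the optical axis and the ray through image column u is then a plain arctangent. A sketch, assuming fx and cx come from the calibrated intrinsics:

```python
import numpy as np

def ray_angle_from_pixel(u, fx, cx):
    """Horizontal angle (radians) between the optical axis and the ray
    through image column u. fx is the focal length in pixels and cx the
    principal point's x coordinate, both from the intrinsic matrix."""
    return np.arctan2(u - cx, fx)
```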

Circular Motion of calibrated camera with given degree

According to this project (carving a dinosaur), I'd like to create a dataset with 36 images taken of an object and estimate the appropriate camera projection matrix for each.
Therefore I calibrated my camera once (extrinsic/intrinsic) for the first image with three chessboard patterns, and now I want to add circular motion (roughly 10 degrees between consecutive views) across the 36 images I've taken, to get something like what is shown here:
My camera is static while the photographed object was rotated 10 degrees for every image.
How do I achieve this? Is it correct to create rotation matrices by hand and just apply them to my camera projection matrix?
Thanks for any advice.
Modifying rotation matrices is not enough; you also need to change the position of the camera. In the structure-from-motion problem it is assumed that the scene is static while the camera is moving. You can treat your case this way because only the relative motion matters.
Let the extrinsic camera matrix be A = R[I | -C], where C is the position of the camera center in the global frame and R is the rotation from the global frame to the camera frame. Let Ra represent a rotation by angle alpha about the vertical axis of the global frame; it can be written as (cos(alpha), -sin(alpha), 0; sin(alpha), cos(alpha), 0; 0, 0, 1). Then the required camera matrix can be computed as A2 = R2[I | -C2], where R2 = R * transpose(Ra) and C2 = Ra * C.
However, you should ensure two things when using this approach. Firstly, the vertical axis of the global frame must correspond to the real-world vertical direction. Secondly, the origin of the global frame must lie on the axis about which the camera center rotates. The latter can be achieved by putting the object at the origin of the global frame.
If the angles are measured inaccurately or the global frame is not well centered, the computed extrinsic matrix will also be inaccurate; in that case it can still serve as an initial estimate for a structure-from-motion algorithm. The other alternative is to calibrate the camera for each frame, not only the first one.
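A minimal numpy sketch of the update above, assuming R and C come from the chessboard calibration of the first image and the global z axis is the vertical rotation axis:

```python
import numpy as np

def rotate_extrinsics(R, C, alpha_deg):
    """Given the first view's extrinsics (R, C) with A = R [I | -C],
    return the extrinsics after the object is rotated by alpha degrees
    about the vertical (z) axis of the global frame."""
    a = np.deg2rad(alpha_deg)
    Ra = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0,          0,         1]])
    R2 = R @ Ra.T                  # R2 = R * transpose(Ra)
    C2 = Ra @ C                    # C2 = Ra * C
    A2 = R2 @ np.hstack([np.eye(3), -C2.reshape(3, 1)])  # 3x4 matrix
    return R2, C2, A2

# Example: all 36 views, 10 degrees apart, from the first calibration.
# R0, C0 = ...  # from the chessboard calibration of the first image
# views = [rotate_extrinsics(R0, C0, 10 * k) for k in range(36)]
```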

How to calibrate a camera and a robot

I have a robot and a camera. The robot is just a 3D printer where I replaced the extruder with a tool, so it doesn't print, but it moves every axis independently. The bed is transparent, and below the bed there is a camera that never moves. It is just a normal webcam (a PlayStation Eye).
I want to calibrate the robot and the camera so that when I click on a pixel in an image provided by the camera, the robot will go there. I know I can measure the translation and the rotation between the two frames, but that will probably introduce a lot of error.
So that's my question: how can I relate the camera and the robot? The camera is already calibrated using chessboards.
To make everything easier, the Z axis can be ignored, so the calibration will be over X and Y.
It depends on what error is acceptable for you.
We have a similar setup, where a camera looks at a plane with an object on it that can be moved.
We assume that the image plane and the object plane are parallel.
First, let's calculate the rotation. Put the tool in a position where you see it at the center of the image, move it along one axis, and select the point on the image that corresponds to the new tool position.
Those two points give you a vector in the image coordinate system.
The angle between this vector and the original image axis gives the rotation.
The scale may be calculated in a similar way: knowing the vector length (in pixels) and the distance between the tool positions (in mm or cm) gives you the scale factor between the image axes and the real-world axes.
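In numpy, the procedure might look like the following sketch (the pixel coordinates and the 40 mm move are illustrative; a mirrored camera view may also require a sign flip on one axis):

```python
import numpy as np

# Tool seen at the image center, then after a known move along robot X.
# Pixel coordinates and the 40 mm move are illustrative values.
p0 = np.array([310.0, 245.0])   # tool at the image center
p1 = np.array([458.0, 260.0])   # tool after moving +40 mm along robot X
move_mm = 40.0

v = p1 - p0
theta = np.arctan2(v[1], v[0])        # rotation between image and robot X axes
scale = move_mm / np.linalg.norm(v)   # mm per pixel

# Rotate clicked pixels into the robot frame (by -theta), then scale.
c, s = np.cos(-theta), np.sin(-theta)
R = np.array([[c, -s], [s, c]])

def pixel_to_robot(p):
    """Map an image pixel to a robot X/Y displacement (mm) from the p0 pose."""
    return scale * (R @ (np.asarray(p, dtype=float) - p0))
```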
If this method doesn't provide enough accuracy, you may calibrate the camera for distortion and for its position relative to the plane using computer vision techniques, which is more complicated.
See the following links
http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html
http://dasl.mem.drexel.edu/~noahKuntz/openCVTut10.html