Circular motion of a calibrated camera by a given angle

Following this project (carving a dinosaur), I'd like to create a dataset of 36 images of an object and estimate the appropriate camera projection matrix for each image.
I therefore calibrated my camera once (intrinsic and extrinsic) for the first image using three chessboard patterns, and now I want to apply a circular motion (roughly 10 degrees per step) across the 36 images I've taken, to get something like the arrangement shown here:
My camera was static while the photographed object was rotated 10 degrees for each image.
How do I achieve this? Is it correct to create the rotation matrices by hand and simply apply them to my camera projection matrix?
Thanks for any advice.

Modifying the rotation matrices is not enough; you also need to change the position of the camera. In the structure-from-motion problem the scene is assumed to be static while the camera moves. You can treat your setup this way because only the relative motion matters.
Let the extrinsic camera matrix be A = R[I | -C], where C is the position of the camera center in the global frame and R is the rotation from the global frame to the camera frame. Let Ra be the rotation by angle alpha about the vertical axis of the global frame (here taken to be the z-axis):

Ra = [ cos(alpha)  -sin(alpha)  0
       sin(alpha)   cos(alpha)  0
       0            0           1 ]

Then the required camera matrix can be computed as A2 = R2[I | -C2], where R2 = R * transpose(Ra) and C2 = Ra * C.
However, you should ensure two things when using this approach. First, the vertical axis of the global frame must correspond to the real-world vertical direction. Second, the origin of the global frame must lie on the axis about which the camera center rotates. The latter can be achieved by putting the object at the origin of the global frame.
If the angles are measured inaccurately, or the global frame is not centered well, the computed extrinsic matrices will also be inaccurate. In that case they can still be used as initial estimates for a structure-from-motion algorithm. The other alternative is to calibrate the camera for each frame, not only the first one.
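
A minimal numpy sketch of the update above, assuming the global z-axis is the vertical axis and alpha is in radians (the function name is illustrative):

import numpy as np

def rotated_extrinsics(R, C, alpha):
    # Extrinsics of the equivalent moving camera after the object has
    # turned by alpha about the global vertical (z) axis.
    Ra = np.array([[np.cos(alpha), -np.sin(alpha), 0.0],
                   [np.sin(alpha),  np.cos(alpha), 0.0],
                   [0.0,            0.0,           1.0]])
    R2 = R @ Ra.T                                         # R2 = R * transpose(Ra)
    C2 = Ra @ C                                           # C2 = Ra * C
    A2 = R2 @ np.hstack([np.eye(3), -C2.reshape(3, 1)])   # A2 = R2 [I | -C2]
    return A2

For frame k of the 36 images (10-degree steps), this would be A_k = rotated_extrinsics(R, C, np.deg2rad(10 * k)).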

Related

Extracting a plane from an image taken by a camera

I have a camera at a known fixed location and orientation.
I also have a plane at a known location whose z position changes.
I want to turn the image from the camera into a top-down view of the plane.
I can do this without knowing any positions by using the four corners of the plane to compute a homography matrix and warping the image, but each time the plane moves in Z I have to repeat the process.
After searching around online, most methods seem to center on finding image features (using SIFT or something like it) and then computing a homography matrix.
With the problem so constrained, I thought there might be a simple linear-algebra-based approach.
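
Since the camera pose, intrinsics, and the plane's corner positions at any Z are all known, one such approach is to project the corners yourself and build the homography from those, with no feature matching. A hedged sketch with OpenCV (the corner array and output size are illustrative assumptions):

import cv2
import numpy as np

def topdown_homography(K, dist, rvec, tvec, corners_3d, out_size):
    # corners_3d: 4x3 float array with the plane's corners in world
    # coordinates; just update their z each time the plane moves.
    img_pts, _ = cv2.projectPoints(corners_3d, rvec, tvec, K, dist)
    img_pts = img_pts.reshape(4, 2).astype(np.float32)
    w, h = out_size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])  # must match corner order
    return cv2.getPerspectiveTransform(img_pts, dst)

# H = topdown_homography(K, dist, rvec, tvec, corners_3d, (640, 480))
# top_down = cv2.warpPerspective(image, H, (640, 480))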

Compute absolute rotation matrix from relative rotation matrix

Using a homography matrix, I am able to find a mapping from one image to another. From this matrix I can also compute a relative rotation matrix between the two images. How can I then compute an absolute rotation matrix? And what are the differences between these two matrices?
General points:
A general homography between images does not imply a camera motion that is a pure rotation.
However, a camera motion that is a pure rotation, or one whose translation is very small compared to the distance from the camera to the scene, is well modeled by a homography.
Specifically to your question:
A "relative" rotation is just that, a motion from the orientation of the first camera to the one of the second camera.
An "absolute" rotation, or orientation, describes a motion with respect to a specified "reference" coordinate frame that is constant and independent of the camera motion.
As a special case, if you have only two camera poses, and you use the first one as the reference, then the relative pose of the second one is also its absolute pose.
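
In matrix terms, with the column-vector convention, the absolute orientation of the second camera is the composition of the first camera's absolute orientation with the relative rotation. A small numpy illustration (the rot_z helper and the angles are made up for the example):

import numpy as np

def rot_z(a):
    # rotation by angle a (radians) about the z-axis
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

R_abs_1 = rot_z(np.deg2rad(30))   # orientation of camera 1 w.r.t. the reference frame
R_rel_21 = rot_z(np.deg2rad(10))  # relative rotation from camera 1 to camera 2
R_abs_2 = R_rel_21 @ R_abs_1      # absolute orientation of camera 2

If camera 1 is itself the reference, R_abs_1 is the identity and R_abs_2 equals R_rel_21, which is the special case described above.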

Pinhole camera model - Finding rotation from optical axis

In the pinhole camera model, is it possible to determine the rotation required to turn the optical/principal axis (the axis that pierces the image plane) so that it passes through a given pixel coordinate (u,v)?
I have an image in which I am detecting a marker in space, and I have the intrinsic and extrinsic camera parameters available. I am using the extrinsic parameters to cast a 2d ray into a separately constructed map (which is overhead and 2d); however, I would like the ray angle to change depending on whether the detected marker is to the left or right in the image.
My first thought was to use arctan with the focal length and the u coordinate (x-axis on the image plane, measured from the center of the image) to determine an angle; however, I don't think the units of measurement cooperate: one should be in real-world meters and the other is in arbitrary pixels.
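
As an aside, the arctan idea does work when the focal length is taken from the intrinsic matrix, because there it is already expressed in pixels, so the ratio (u - cx) / fx is dimensionless. A minimal sketch under that assumption (the example K is made up):

import numpy as np

def ray_angle(u, K):
    # Horizontal angle between the optical axis and the ray through pixel column u.
    # fx = K[0, 0] and cx = K[0, 2] are both in pixel units, so the ratio is unitless.
    fx, cx = K[0, 0], K[0, 2]
    return np.arctan2(u - cx, fx)  # positive when the marker is right of center

# K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
# ray_angle(400, K)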

Three.js camera understanding

Here's the task:
We have a Mesh, drawn at position POS with rotation ROT.
We also have a camera whose position and rotation are relative to the Mesh; say the camera position is CPOS and the camera rotation is CROT.
How do I calculate the resulting orientation of the camera? I assumed it was something like:
camera.rotation.x = mesh.rotation.x + viewport.rotation.x
camera.rotation.y = mesh.rotation.y + viewport.rotation.y
camera.rotation.z = mesh.rotation.z + viewport.rotation.z
That behaved strangely and gave wrong results.
Then I decided to read the docs and was completely disappointed.
There are several kinds of rotation structures (Euler, Quaternion), but what I want is something different.
Imagine you are on a spaceship moving through space. You are sitting at the starboard turret and looking at objects; they seem to pass by...
Then you want to turn your head; the angle of your head is known to you (in raw OpenGL, I'd just multiply the head rotation matrix by the ship's rotation matrix and get my view matrix).
In other words, I want only x- and y-axis rotations for the camera, combined in a matrix. Then I want to multiply it by the position/rotation matrix of an object, and this final matrix would be my view matrix.
How could I do the same in THREE.js?
-----EDIT-----
Thank you for the answer.
Which coordinates should I give to the camera? Should they be local, mesh-relative coordinates, or something absolute?
I understand that these questions are obvious, but there's no description of relative objects in the THREE.js docs (besides the API reference), and the answer might be ambiguous.
Add the camera as a child of the mesh like so:
mesh.add( camera );
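camera.position.set( 0, 0, 10 ); // for example: a hypothetical offset, given in the mesh's local frame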
When the camera is a child of an object, the camera's position and orientation are specified relative to the parent object.
You can set the camera's orientation by setting either the camera's quaternion or Euler rotation -- your choice.
Please note that the renderer updates the object's matrix and matrixWorld for you. You do not need to do that manually.
three.js r.63

World space to screen space (perspective projection)

I'm using a 3d engine and need to translate between 3d world space and 2d screen space using perspective projection, so I can place 2d text labels on items in 3d space.
I've seen a few posts with various answers to this problem, but they seem to use components I don't have.
I have a Camera object, and can only set its current position and look-at position; it cannot roll. The camera is moving along a path, and a certain target object may appear in its view and then disappear.
I have only the following values:
lookat position
position
vertical FOV
Z far
Z near
and obviously the position of the target object.
Can anyone please give me an algorithm that will do this using just these components?
Many thanks.
All graphics engines use matrices to transform between different coordinate systems. Indeed, OpenGL and DirectX use them, because they are the standard way to do this.
Cameras usually construct the matrices using the parameters you have:
view matrix (transforms the world so that you look at it from the camera position); it uses the look-at position and the camera position (plus the up vector, which is usually (0, 1, 0))
projection matrix (transforms from 3D coordinates to 2D coordinates); it uses the FOV, near, far, and aspect ratio.
You can find information on how to construct these matrices by searching for the OpenGL functions that create them:
gluLookAt: creates the view matrix
gluPerspective: creates the projection matrix
But I can't imagine an engine that doesn't let you get these matrices; I can assure you they are somewhere, because the engine is using them.
Once you have those matrices, you multiply them to get the view-projection matrix. This matrix transforms from world coordinates to screen coordinates. So just multiply this matrix by the position you want to project (as a 4-component vector, with the 4th component set to 1.0).
But wait: the result will be in homogeneous coordinates. You need to divide the X, Y, Z of the resulting vector by W, and then you have the position in normalized screen coordinates (0 means the center, 1 means right, -1 means left, etc.).
From here it is easy to get pixel coordinates by multiplying by the width and height, as in the sketch below.
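
A minimal numpy sketch of the whole pipeline, using only the values listed in the question; the matrix layouts follow gluLookAt and gluPerspective (column-vector convention), and all names are illustrative:

import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    # view matrix, gluLookAt-style
    f = target - eye; f = f / np.linalg.norm(f)      # forward
    s = np.cross(f, up); s = s / np.linalg.norm(s)   # right
    u = np.cross(s, f)                               # true up
    V = np.eye(4)
    V[0, :3], V[1, :3], V[2, :3] = s, u, -f
    V[:3, 3] = -V[:3, :3] @ eye
    return V

def perspective(fovy_deg, aspect, znear, zfar):
    # projection matrix, gluPerspective-style
    f = 1.0 / np.tan(np.deg2rad(fovy_deg) / 2.0)
    P = np.zeros((4, 4))
    P[0, 0], P[1, 1] = f / aspect, f
    P[2, 2] = (zfar + znear) / (znear - zfar)
    P[2, 3] = 2.0 * zfar * znear / (znear - zfar)
    P[3, 2] = -1.0
    return P

def world_to_screen(p, eye, target, fovy_deg, width, height, znear, zfar):
    VP = perspective(fovy_deg, width / height, znear, zfar) @ look_at(eye, target)
    clip = VP @ np.append(p, 1.0)       # homogeneous (clip) coordinates
    if clip[3] <= 0.0:
        return None                     # point is behind the camera
    ndc = clip[:3] / clip[3]            # perspective divide -> [-1, 1] range
    x = (ndc[0] + 1.0) * 0.5 * width    # normalized -> pixel coordinates
    y = (1.0 - ndc[1]) * 0.5 * height   # flip Y: screen origin is top-left
    return x, y

# label = world_to_screen(np.array([0.0, 1.0, -5.0]), np.array([0.0, 2.0, 5.0]),
#                         np.array([0.0, 0.0, 0.0]), 60.0, 1280, 720, 0.1, 100.0)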
I have some slides explaining all this here: https://docs.google.com/presentation/d/13crrSCPonJcxAjGaS5HJOat3MpE0lmEtqxeVr4tVLDs/present?slide=id.i0
Good luck :)
P.S.: when you work with 3D, it is really important to understand the three matrices (model, view, and projection); otherwise you will stumble every time.
so I can place 2d text labels on items in 3d space
Have you looked up "billboard" techniques? Sometimes just knowing the right term to search under is all you need. This refers to polygons (typically rectangles) that always face the camera, regardless of camera position or orientation.