I am trying to use MATLAB's camera calibrator to calibrate an infrared camera. I was able to get the intrinsic matrix by just feeding around 100 images to the calibrator. But I'm struggling with how to get the extrinsic matrix [R|t].
The extrinsic matrix maps the world frame to the camera frame, so in theory, when the camera (or the object) moves, there will be many extrinsic matrices.
In the picture below, if the intrinsic matrix is determined using 50 images, then there are 50 extrinsic matrices, one corresponding to each image. Am I correct?
You are right. Usually, a by-product of an intrinsic calibration is the extrinsic matrix for each pattern observed; this is mostly used to draw the patterns with respect to the camera as in the picture you posted.
What you usually do afterwards is define some external reference frame that makes sense for your application, also known as the 'world' reference frame, and compute the pose of the camera with respect to it. That's the extrinsic matrix you always hear about.
For this, you:
Define the reference frame and take some points with known 3D coordinates on it; this can be a grid drawn on the floor, for example.
Take a picture of the 3D points with the calibrated camera and get a list of the corresponding 2D (image) coordinates of the points.
Use a pose estimation function that takes the camera intrinsic parameters, the 3D points and the corresponding 2D image points. I am more familiar with OpenCV, but the MATLAB function that seems to do the job is: https://www.mathworks.com/help/vision/ref/estimateworldcamerapose.html
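As a rough illustration of that last step in OpenCV, here is a minimal sketch; the point lists and intrinsic values are placeholders standing in for your own measurements and calibration output:

```python
import cv2
import numpy as np

# 3D coordinates of known points in the chosen world frame (e.g. a grid drawn on
# the floor). Placeholder values; replace with your measured points.
world_points = np.array([[0.0, 0.0, 0.0],
                         [1.0, 0.0, 0.0],
                         [1.0, 1.0, 0.0],
                         [0.0, 1.0, 0.0]], dtype=np.float64)

# Corresponding 2D pixel coordinates of those points in the calibrated image.
image_points = np.array([[320.0, 240.0],
                         [400.0, 238.0],
                         [405.0, 310.0],
                         [322.0, 312.0]], dtype=np.float64)

# Intrinsics from the calibration step (placeholder values).
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # or the distortion coefficients from calibration

# Pose estimation: rvec/tvec describe the world frame as seen from the camera.
ok, rvec, tvec = cv2.solvePnP(world_points, image_points, camera_matrix, dist_coeffs)

R, _ = cv2.Rodrigues(rvec)          # 3x3 rotation matrix
extrinsic = np.hstack([R, tvec])    # the [R|t] matrix for this camera/world pair
```

The MATLAB function linked above plays the same role, except that it returns the camera's orientation and location in the world frame, i.e. the inverse transform of [R|t].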
I have a camera at a known fixed location and orientation.
I also have a plane at a known location whose z position changes.
I want to turn the image from the camera into a top down view of the plane.
I can do this without knowing any positions by using the 4 points of the plane to compute a homography matrix and warping the image, but each time the plane moves in Z I have to repeat this process.
After searching around online most methods seem to center on finding features of the image (using SIFT or something like it) then computing a homography matrix.
With the problem so constrained I thought there may be a simple linear algebra based approach.
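For reference, a minimal sketch of the 4-point approach described above, assuming OpenCV and NumPy; the corner coordinates and the frame are placeholders:

```python
import cv2
import numpy as np

# Stand-in for the captured camera frame.
image = np.zeros((720, 1280, 3), dtype=np.uint8)

# Pixel coordinates of the plane's four corners in the camera image (placeholders).
src = np.array([[420, 180], [900, 200], [880, 620], [400, 600]], dtype=np.float32)

# Where those corners should land in the top-down view (a 500x500 output here).
dst = np.array([[0, 0], [500, 0], [500, 500], [0, 500]], dtype=np.float32)

H = cv2.getPerspectiveTransform(src, dst)             # 3x3 homography
top_down = cv2.warpPerspective(image, H, (500, 500))  # top-down view of the plane
```

Each time the plane moves in Z, `src` changes and H has to be recomputed from scratch; that is the step I am hoping to replace with a direct computation from the known camera pose and plane height.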
Using a homography matrix, I am able to find a mapping from one image to another. From this matrix I can also compute a relative rotation matrix between the two images. How can I then compute an absolute rotation matrix? And what are the differences between these two matrices?
General points:
A general homography between images does not imply a camera motion that is a pure rotation.
However, camera motion that is a pure rotation, or whose translation is very small compared to the distance between the camera and the scene, is well modeled by a homography.
Specifically to your question:
A "relative" rotation is just that, a motion from the orientation of the first camera to the one of the second camera.
An "absolute" rotation, or orientation, describes a motion with respect to a specified "reference" coordinate frame that is constant and independent of the camera motion.
As a special case, if you have only two camera poses, and you use the first one as the reference, then the relative pose of the second one is also its absolute pose.
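As a small NumPy illustration of the distinction, assuming the motion really is a pure rotation and the intrinsic matrix K is known (the values below are synthetic placeholders):

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])      # placeholder intrinsics

def relative_rotation_from_homography(H, K):
    """For a pure-rotation camera motion, H ~ K @ R_rel @ inv(K) up to scale,
    so R_rel can be recovered (and re-orthonormalized) from H."""
    M = np.linalg.inv(K) @ H @ K
    U, _, Vt = np.linalg.svd(M)        # nearest rotation matrix to M
    R = U @ Vt
    return -R if np.linalg.det(R) < 0 else R

# Synthetic example: a 5-degree pan produces H = K @ R @ inv(K).
angle = np.radians(5.0)
R_true = np.array([[ np.cos(angle), 0.0, np.sin(angle)],
                   [ 0.0,           1.0, 0.0          ],
                   [-np.sin(angle), 0.0, np.cos(angle)]])
H = K @ R_true @ np.linalg.inv(K)

R_rel = relative_rotation_from_homography(H, K)   # relative: camera 1 -> camera 2

# Absolute orientation: chain the relative motion onto the reference pose.
# Here camera 1 is taken as the reference frame, so its rotation is the identity.
R_abs_1 = np.eye(3)
R_abs_2 = R_rel @ R_abs_1
```

Here R_abs_2 equals R_rel only because camera 1 was chosen as the reference; with any other reference frame the relative and absolute matrices differ.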
In the pinhole camera model, is it possible to determine the rotation required from the optical/principal axis (the axis which pierces the image plane) to intersect a given pixel coordinate (u,v)?
I have an image where I am detecting a marker in space, and I have the intrinsic and extrinsic camera parameters available. I am using the extrinsic parameters to cast a 2d ray into a separately constructed map (which is overhead and 2d); however, I would like the ray's angle to change depending on whether the detected marker is toward the left or right side of the image.
My first thought was to use the arctangent of the u coordinate (measured along the image's x-axis from the center of the image) over the focal length to determine an angle; however, I don't think the units of measurement cooperate: one should be in real-world meters and the other is in arbitrary pixels.
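For what it's worth, the units do line up if the focal length is taken from the intrinsic matrix, where it is already expressed in pixels; a minimal sketch of that arctangent idea, with placeholder intrinsic values:

```python
import math

# Intrinsics, with fx and cx in pixel units as they appear in the intrinsic matrix
# (placeholder values).
fx = 800.0   # focal length along x, in pixels
cx = 320.0   # principal point x-coordinate, in pixels

def horizontal_angle(u):
    """Angle (radians) between the optical axis and the ray through pixel column u,
    measured in the horizontal plane. Positive to the right of the image center."""
    return math.atan2(u - cx, fx)
```

The ray cast into the overhead map could then be rotated by `horizontal_angle(u)` relative to the direction given by the extrinsic parameters.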
I've managed to implement the Marching Cubes algorithm in C#. So far I've used the algorithm to render a sphere. That's an easy one because the density function is not very complex to code.
But now I want to get the algorithm to go further and render some interesting terrains for games. So I would need proper density functions for this task.
The first thing that comes to my head is volumetric Perlin noise. That's OK, but I am looking for a terrain without convex shapes, meaning no caves or similar geometries for the moment.
OK, I know that a simple heightmap can do the job for that, but I want a voxel-generated terrain. What type of density function or pseudocode would I need to implement this?
You can easily convert a heightmap into voxel terrain. Each pixel in your heightmap corresponds to a column of voxels in your voxel world. For a given pixel in the heightmap, read the height. Then iterate over each voxel in the corresponding column and set it to 'solid' if it lies below that height or 'empty' if it lies above it.
The PolyVox library provides volume classes that make this straightforward to implement.
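As a library-agnostic sketch of that column-filling loop (not PolyVox itself; plain Python with NumPy, using a random heightmap as a stand-in for a real one):

```python
import numpy as np

def heightmap_to_voxels(heightmap, height):
    """Turn a 2D heightmap into a 3D array of booleans (True = solid voxel)."""
    width, length = heightmap.shape
    voxels = np.zeros((width, height, length), dtype=bool)
    for x in range(width):
        for z in range(length):
            column_height = heightmap[x, z]
            for y in range(height):
                # Solid below the sampled height, empty above it.
                voxels[x, y, z] = y < column_height
    return voxels

# Example: a small 16x16 terrain with column heights between 0 and 7 voxels.
heightmap = np.random.randint(0, 8, size=(16, 16))
terrain = heightmap_to_voxels(heightmap, height=8)
```

If you want a smooth surface out of Marching Cubes rather than a blocky one, a signed value such as `column_height - y` can serve as the density instead of the boolean flag; as long as the density comes from a heightmap, the resulting surface has no caves or overhangs.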
I'm using a 3d engine and need to translate between 3d world space and 2d screen space using perspective projection, so I can place 2d text labels on items in 3d space.
I've seen a few posts of various answers to this problem but they seem to use components I don't have.
I have a Camera object and can only set its current position and lookat position; it cannot roll. The camera is moving along a path, and a certain target object may appear in its view and then disappear.
I have only the following values:
lookat position
position
vertical FOV
Z far
Z near
and obviously the position of the target object.
Can anyone please give me an algorithm that will do this using just these components?
Many thanks.
All graphics engines use matrices to transform between different coordinate systems. Indeed, OpenGL and DirectX use them, because they are the standard way of doing it.
Cameras usually construct the matrices using the parameters you have:
view matrix (transforms the world so that you are looking at it from the camera position); it uses the lookat position and the camera position (plus the up vector, which is usually (0, 1, 0))
projection matrix (transforms from 3D coordinates to 2D coordinates); it uses the FOV, near, far and aspect ratio.
You can find information on how to construct these matrices by searching the internet for the OpenGL functions that create them:
gluLookAt: creates the view matrix
gluPerspective: creates the projection matrix
But I can't imagine an engine that doesn't let you access these matrices; I can assure you they are in there somewhere, because the engine is using them.
Once you have those matrices, you multiply them to get the view-projection matrix. This matrix transforms from world coordinates to screen coordinates. So just multiply this matrix by the position you want to project (as a 4-component vector, with the 4th component set to 1.0).
But wait: the result will be in homogeneous coordinates, so you need to divide the X, Y, Z of the resulting vector by W; then you have the position in normalized screen coordinates (0 is the center, 1 is the right edge, -1 is the left edge, and so on).
From there it is easy to get pixel coordinates by multiplying by the viewport width and height.
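Putting those steps together, here is a minimal NumPy sketch of the whole pipeline, following the gluLookAt/gluPerspective conventions; the camera values, viewport size and aspect ratio below are placeholders:

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """View matrix, built like gluLookAt, from camera position and lookat position."""
    eye, target, up = map(np.asarray, (eye, target, up))
    f = target - eye
    f = f / np.linalg.norm(f)                        # forward
    s = np.cross(f, up); s = s / np.linalg.norm(s)   # right
    u = np.cross(s, f)                               # corrected up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye
    return view

def perspective(fov_y_deg, aspect, z_near, z_far):
    """Projection matrix, built like gluPerspective."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    proj = np.zeros((4, 4))
    proj[0, 0] = f / aspect
    proj[1, 1] = f
    proj[2, 2] = (z_far + z_near) / (z_near - z_far)
    proj[2, 3] = 2.0 * z_far * z_near / (z_near - z_far)
    proj[3, 2] = -1.0
    return proj

def world_to_screen(point, view, proj, width, height):
    """Project a 3D world point to pixel coordinates; returns None if behind the camera."""
    clip = proj @ view @ np.append(np.asarray(point, dtype=float), 1.0)
    if clip[3] <= 0.0:
        return None                            # behind the camera
    ndc = clip[:3] / clip[3]                   # homogeneous divide
    x = (ndc[0] * 0.5 + 0.5) * width
    y = (1.0 - (ndc[1] * 0.5 + 0.5)) * height  # flip: screen y grows downward
    return x, y

# Placeholder usage with the kind of values listed in the question.
view = look_at(eye=(0, 2, 10), target=(0, 0, 0))
proj = perspective(fov_y_deg=60.0, aspect=800 / 600, z_near=0.1, z_far=1000.0)
print(world_to_screen((1.0, 1.0, 0.0), view, proj, width=800, height=600))
```

If `world_to_screen` returns coordinates inside the viewport and the point is in front of the camera, that is where the text label goes.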
I have some slides explaining all this here: https://docs.google.com/presentation/d/13crrSCPonJcxAjGaS5HJOat3MpE0lmEtqxeVr4tVLDs/present?slide=id.i0
Good luck :)
P.S.: when you work with 3D it is really important to understand the three matrices (model, view and projection); otherwise you will stumble every time.
"so I can place 2d text labels on items in 3d space"
Have you looked up "billboard" techniques? Sometimes just knowing the right term to search under is all you need. This refers to polygons (typically rectangles) that always face the camera, regardless of camera position or orientation.
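As a rough sketch of that idea (NumPy assumed; the names are just for illustration), the quad's orientation can be rebuilt each frame so its forward axis points back toward the camera:

```python
import numpy as np

def billboard_rotation(label_pos, camera_pos, world_up=(0.0, 1.0, 0.0)):
    """3x3 rotation whose columns are the quad's right, up and forward axes,
    oriented so the quad faces the camera."""
    label_pos, camera_pos, world_up = map(np.asarray, (label_pos, camera_pos, world_up))
    forward = camera_pos - label_pos
    forward = forward / np.linalg.norm(forward)   # points from the label to the camera
    right = np.cross(world_up, forward)
    right = right / np.linalg.norm(right)
    up = np.cross(forward, right)
    return np.column_stack([right, up, forward])
```

The label quad's vertices are then its local corners transformed by this rotation and translated to the label position.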