I would like to know if it is possible to determine the camera calibration matrix just from the camera's specifications, without performing a camera calibration?
You can take a guess, but this will not replace a proper calibration, since every single camera is different--even if it is of the exact same type.
In your camera matrix you usually have fx, fy, cx, cy (for square pixels). Take cx = w/2 and cy = h/2, where w and h are the width and height of your image, respectively.
For fx and fy, it is a bit trickier. Theoretically, we have fx = w*f_mm/w_mm, where f_mm is the focal length of your lens in mm and w_mm is the width of your CCD sensor in mm.
However, since lenses are round and sensors usually are not, you cannot just take the values from the specifications. There are tables that give a good estimate of the sensor width and height for a given sensor-size designation, e.g. on Wikipedia. However, if the lens is mounted slightly differently, these values no longer hold.
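As a rough illustration, a minimal Python sketch of such a guess might look like this (the focal length, sensor dimensions and image size below are hypothetical example values, not taken from any particular camera):

import numpy as np

def guess_camera_matrix(img_w, img_h, f_mm, sensor_w_mm, sensor_h_mm):
    """Rough intrinsic-matrix guess from datasheet values (no distortion model)."""
    fx = img_w * f_mm / sensor_w_mm      # focal length in pixels, horizontal
    fy = img_h * f_mm / sensor_h_mm      # focal length in pixels, vertical
    cx, cy = img_w / 2.0, img_h / 2.0    # assume the principal point is at the image centre
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

# Hypothetical example: 1600x1200 image, 4 mm lens, sensor of roughly 5.76 mm x 4.32 mm
K = guess_camera_matrix(1600, 1200, 4.0, 5.76, 4.32)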
With this, you will also not calibrate for distortions. It is highly recommended to do a proper calibration, e.g. with a checkerboard.
I have the task of simulating a camera with a full well capacity of 10,000 photons per sensor element in numpy. My first idea was to do it like this:
camera = np.random.normal(0.0, 1/10000, np.shape(img))
img_with_noise = img + camera
but it hardly shows an effect.
Does anyone have an idea how to do it?
From what I interpret of your question, if each physical pixel of the sensor has a 10,000 photon limit, that limit corresponds to the brightest a digital pixel can be in your image. Similarly, 0 incident photons make the darkest pixels of the image.
You have to create a map from the physical sensor to the digital image. For the sake of simplicity, let's say we work with a grayscale image.
Your first task is to fix the colour bit depth of the image. That is to say, is your image an 8-bit colour image (which is usually the case)? If so, the brightest pixel has a brightness value of 255 (= 2^8 - 1 for 8 bits). The darkest pixel is always chosen to have a value of 0.
So you'd have to map from the range 0 --> 10,000 (sensor) to 0 --> 255 (image). The most natural idea would be a linear map (i.e. every pixel of the image is obtained by the same multiplicative factor from every pixel of the sensor), but to render the brightness produced by n incident photons in a way that matches human perception, different transfer functions are often used.
A transfer function in a simplified version is just a mathematical function doing this map - logarithmic TFs are quite common.
Also, since it seems like you're generating noise, it is unwise and conceptually wrong to add camera itself to the image img. What you should do is fix a noise threshold first - this corresponds to the maximum number of noise photons that can affect a pixel reading. Then you generate random numbers (according to some distribution, if required) in the range 0 --> noise_threshold. Finally, you use the map created earlier to add this noise to the image array.
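As a minimal numpy sketch of that idea (the full well capacity, noise threshold, uniform noise distribution and image size below are assumptions for illustration, not a prescription):

import numpy as np

FULL_WELL = 10000        # photons at which a sensor element saturates
NOISE_THRESHOLD = 200    # assumed maximum number of noise photons per pixel
BIT_DEPTH_MAX = 255      # 8-bit image

# Photon counts per sensor element (here synthetic; in practice this is your scene signal)
photons = np.random.randint(0, FULL_WELL, size=(480, 640))

# Noise in photon units, in the range 0 --> NOISE_THRESHOLD (swap in another distribution if required)
noise_photons = np.random.uniform(0, NOISE_THRESHOLD, size=photons.shape)

# Clip at the full well capacity, then map 0 --> 10,000 photons to 0 --> 255 with a linear transfer function
noisy = np.clip(photons + noise_photons, 0, FULL_WELL)
img = np.round(noisy / FULL_WELL * BIT_DEPTH_MAX).astype(np.uint8)

A logarithmic transfer function could replace the linear scaling in the last line if a more perceptual mapping is wanted.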
Hope this helps and is in tune with what you wish to do. Cheers!
I would like to calculate the Horizontal and Vertical field of view from the camera intrinsic matrix for the cameras used in the KITTI dataset. The reason I need the Field of view is to convert a depth map into 3D point clouds.
Though this question was asked quite a long time ago, I felt it needed an answer, as I ran into the same issue and was unable to find any info on it.
I have, however, solved it using the information available in this document and some more general camera calibration documents.
Firstly, we need to convert the supplied disparity into distance. This can be done by first converting the disparity map into floats using the method in the dev_kit, where they state:
disp(u,v) = ((float)I(u,v))/256.0;
This disparity can then be converted into a distance through the default stereo vision equation:
Depth = Baseline * Focal_length / Disparity
Now come some tricky parts. I searched high and low for the focal length and was unable to find it in documentation.
I realised just now, while writing this, that the baseline is documented in the aforementioned source; however, from section IV.B we can see that it can also be recovered indirectly from P_rect^(i).
The P_rect matrices can be found in the calibration files and will be used both for calculating the baseline and for the translation from uv in the image to xyz in the real world.
The steps are as follows:
For pixel in depthmap:
xyz_normalised = P_rect \ [u,v,1]
where u and v are the x and y coordinates of the pixel respectively
which will give you an xyz_normalised of the form [x, y, z, 0] with z = 1
You can then multiply it with the depth that is given at that pixel to result in a xyz coordinate.
For completeness: as P_rect relates to the depth map here, you need to use P_3 from the cam_cam calibration txt files to get the baseline (as it contains the baseline between the colour cameras), while P_2 belongs to the left camera, which is used as a reference for the occ_0 files. A rough code sketch of these steps is given below.
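As an illustration only, here is a rough numpy sketch of those steps; the P_rect values below are hypothetical stand-ins for what you would read from the cam_cam calibration file, and the sign/ordering convention for the baseline should be checked against your own files:

import numpy as np

# Hypothetical 3x4 P_rect matrices of the kind found in the cam_cam calibration files
P_rect_2 = np.array([[721.5, 0.0, 609.6,   44.9],   # left colour camera (reference)
                     [0.0, 721.5, 172.9,    0.0],
                     [0.0,   0.0,   1.0,    0.0]])
P_rect_3 = np.array([[721.5, 0.0, 609.6, -339.5],   # right colour camera
                     [0.0, 721.5, 172.9,    0.0],
                     [0.0,   0.0,   1.0,    0.0]])

focal = P_rect_2[0, 0]
# Baseline between the two colour cameras, recovered from the fourth column of P_rect
baseline = (P_rect_2[0, 3] - P_rect_3[0, 3]) / focal

# Disparity stored as uint16 and converted as in the dev_kit: disp(u,v) = I(u,v) / 256.0
disp_raw = np.random.randint(1, 256 * 50, size=(375, 1242), dtype=np.uint16)   # synthetic
disp = disp_raw.astype(np.float32) / 256.0
depth = baseline * focal / np.maximum(disp, 1e-6)

# Back-project each pixel: xyz_normalised = pinv(P_rect) @ [u, v, 1], which comes out with z = 1
P_pinv = np.linalg.pinv(P_rect_2)
points = []
for v in range(0, depth.shape[0], 8):            # subsampled for brevity
    for u in range(0, depth.shape[1], 8):
        ray = P_pinv @ np.array([u, v, 1.0])
        points.append(ray[:3] * depth[v, u])     # scale the normalised ray by the metric depth
points = np.stack(points)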
I'm trying to use a VectorNav VN100 IMU to map a path through an underground tunnel (GPS denied environment) and am wondering what is the best approach to take to do this.
I get lots of data points from the VN100 these include: orientation/pose (Euler angles, quaternions), and acceleration and gyroscope values in three dimensions. The acceleration and gyro values are given in raw and filtered formats where filtered outputs have been filtered using an onboard Kalman filter.
In addition to IMU measurements I also measure GPS-RTK coordinates in three dimensions at the start and end-points of the tunnel.
How should I approach this mapping problem? I'm quite new to this area and do not know how to extract position from the acceleration and orientation data. I know acceleration can be integrated once to give velocity and that in turn can be integrated again to get position but how do I combine this data together with orientation data (quaternions) to get the path?
In robotics, mapping means representing the environment using a perception sensor (such as a 2D/3D laser scanner or cameras).
Once you have the map, the robot can use it to know its location (localization). The map is also used to find a path between locations so the robot can move from one place to another (path planning).
In your case you need a perception sensor to get a better location estimate. With only an IMU you can track the position using an Extended Kalman Filter (EKF), but it drifts quickly.
The Robot Operating System (ROS) has an EKF implementation you can refer to.
OK, so I came across a solution that gets me somewhat closer to my goal of finding the path travelled underground. Although it is by no means the final solution, I'm posting my algorithm here in the hope that it helps someone else.
My method is as follows:
Rotate the acceleration vector A = [Ax, Ay, Az] output by the VectorNav VN100 into the North, East, Down frame by multiplying by the quaternion the VectorNav outputs, Q = [q0, q1, q2, q3]. How to multiply a vector by a quaternion is outlined in this other post.
Basically you take the acceleration vector and add a fourth component to act as the scalar term, then multiply by the quaternion and its conjugate. N.B. the scalar terms in both quantities must be in the same position; in this case the scalar quaternion term comes first, so a zero scalar term should be added to the start of the acceleration vector, i.e. A = [0, Ax, Ay, Az]. Then perform the following multiplication:
A_ned = Q A Q*
where Q* is the complex conjugate of the quaternion (i, j, and k terms are negated).
Integrate the rotated acceleration vector to get the velocity vector: V_ned
Integrate the Velocity vector to get the position in north, east, down: R_ned
There is substantial drift in the velocity and position due to sensor bias. This can be corrected for somewhat if we know the start and end velocities and the start and end positions. In this case the start and end velocities were zero, so I used this to correct the drift in the velocity vector.
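A rough numpy sketch of steps 1-3 plus the velocity drift correction (the sample rate and the synthetic quaternion/acceleration arrays are assumptions standing in for the real VN100 logs):

import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = [q0, q1, q2, q3] (scalar first), i.e. Q A Q*."""
    w, x, y, z = q
    vx, vy, vz = v
    # t = Q * [0, v]
    tw = -x * vx - y * vy - z * vz
    tx =  w * vx + y * vz - z * vy
    ty =  w * vy + z * vx - x * vz
    tz =  w * vz + x * vy - y * vx
    # vector part of t * Q*
    return np.array([-tw * x + tx * w - ty * z + tz * y,
                     -tw * y + ty * w - tz * x + tx * z,
                     -tw * z + tz * w - tx * y + ty * x])

dt = 1.0 / 100.0                                      # assumed 100 Hz sample rate
n = 1000
quats = np.tile([1.0, 0.0, 0.0, 0.0], (n, 1))         # placeholder orientation samples
accels = np.random.normal(0.0, 0.05, size=(n, 3))     # placeholder body-frame accelerations

# 1) rotate each acceleration sample into the NED frame
a_ned = np.array([quat_rotate(q, a) for q, a in zip(quats, accels)])
# (remove gravity from the Down component here if the VN100 output still contains it)

# 2) integrate acceleration to velocity, 3) velocity to position
v_ned = np.cumsum(a_ned, axis=0) * dt
# drift correction: subtract a linear ramp so the final velocity matches the known end velocity (zero here)
v_corrected = v_ned - np.outer(np.linspace(0.0, 1.0, n), v_ned[-1])
r_ned = np.cumsum(v_corrected, axis=0) * dt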
Uncorrected Velocity
Corrected Velocity
My final comparison between IMU position vs GPS is shown here (read: there's still a long way to go).
GPS-RTK data vs VectorNav IMU data
Now I just need to come up with a sensor fusion algorithm to try to improve the position estimation...
Recently I'm struggling with a pose estimation problem with a single camera. I have some 3D points and the corresponding 2D points on the image. Then I use solvePnP to get the rotation and translation vectors. The problem is, how can I determine whether the vectors are right results?
Now I use an indirect way to do this:
I use the rotation matrix, the translation vector and the world 3D coordinates of a certain point to obtain the coordinates of that point in Camera system. Then all I have to do is to determine whether the coordinates are reasonable. I think I know the directions of x, y and z axes of Camera system.
Is Camera center the origin of the Camera system?
Now consider the x component of that point. Is x equivalent to the distance between the camera and the point in world space along the camera's x-axis direction (with the sign then determined by which side of the camera the point is on)?
The figure below is in world space, while the axes depicted are in Camera system.
======== How the camera and the point are placed in world space ========
|
|
Camera--------------------------> Z axis
| |} Xw?
| P(Xw, Yw, Zw)
|
v x-axis
My rvec and tvec results seem partly right and partly wrong. For a specific point, the z value seems reasonable; I mean, if this point is about one metre away from the camera in the z direction, then the z value is about 1. But for x and y, judging by the location of the point, I think x and y should be positive, but they are negative. What's more, the pattern detected in the original image is like this:
But using the points coordinates calculated in Camera system and the camera intrinsic parameters, I get an image like this:
The target keeps its pattern. But it moved from bottom right to top left. I cannot understand why.
Yes, the camera center is the origin of the camera coordinate system, which also seems to be confirmed by this post.
In camera pose estimation, "the values seem reasonable" can be quantified as the backprojection error. That's a measure of how well your resulting rotation and translation map the 3D points to the 2D pixels. Unfortunately, solvePnP does not return a residual error measure, so one has to compute it:
cv::solvePnP(worldPoints, pixelPoints, camIntrinsics, camDistortion, rVec, tVec);
// Use computed solution to project 3D pattern to image
cv::Mat projectedPattern;
cv::projectPoints(worldPoints, rVec, tVec, camIntrinsics, camDistortion, projectedPattern);
// Compute error of each 2D-3D correspondence.
std::vector<float> errors;
for( size_t i = 0; i < pixelPoints.size(); ++i )
{
float dx = pixelPoints.at(i).x - projectedPattern.at<float>(i, 0);
float dy = pixelPoints.at(i).y - projectedPattern.at<float>(i, 1);
// Euclidean distance between projected and real measured pixel
float err = sqrt(dx*dx + dy*dy);
errors.push_back(err);
}
// Here, compute max or average of your "errors"
An average backprojection error for a calibrated camera might be in the range of 0-2 pixels. Judging by your two pictures, yours would be much more than that. To me, it looks like a scaling problem. If I am right, you compute the projection yourself. Maybe you can try cv::projectPoints() once and compare.
When it comes to transformations, I learned not to follow my imagination :) The first thing I usually do with the returned rVec and tVec is to create a 4x4 rigid transformation matrix out of them (I once posted code for that here). This makes things even less intuitive, but it is compact and handy.
Now I know the answers.
Yes, the camera center is the origin of the camera coordinate system.
Consider that the coordinates of the point in the camera system are calculated as (xc, yc, zc). Then xc should be the distance between the camera and the point in the real world along the x direction.
Next, how to determine whether the output matrices are right?
1. As #eidelen points out, the backprojection error is one indicative measure.
2. Calculate the coordinates of the points according to their coordinates in the world coordinate system and the matrices.
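For example, a minimal Python/OpenCV sketch of check 2, assuming rvec and tvec come from cv2.solvePnP and using a hypothetical known world point:

import numpy as np
import cv2

# Hypothetical solvePnP output (rotation vector and translation vector)
rvec = np.array([0.1, -0.2, 0.05])
tvec = np.array([0.3, -0.1, 1.2])

world_point = np.array([0.5, 0.2, 0.0])   # hypothetical 3D point in world coordinates

R, _ = cv2.Rodrigues(rvec)                # 3x3 rotation matrix from the rotation vector
cam_point = R @ world_point + tvec        # the point expressed in the camera coordinate system
print(cam_point)                          # sanity check: z should be positive (in front of the camera)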
So why did I get a wrong result (the pattern remained but moved to a different region of the image)?
Parameter cameraMatrix in solvePnP() is the matrix of the camera's intrinsic parameters. In the camera matrix, you should use width/2 and height/2 for cx and cy, whereas I had used the full width and height of the image. I think that caused the error. After I corrected that and re-calibrated the camera, everything seems fine.
I have a facial animation rig which I am driving in two different manners: it has an artist UI in the Maya viewports as is common for interactive animating, and I've connected it up with the FaceShift markerless motion capture system.
I envision a workflow where a performance is captured, imported into Maya, sample data is smoothed and reduced, and then an animator takes over for finishing.
Our face rig has the eye gaze controlled by a mini-hierarchy of three objects (global lookAtTarget and a left and right eye offset).
Because the eye gazes are controlled by this LookAt setup, they need to be disabled when eye-gaze-including motion capture data is imported.
After the motion capture data is imported, the eye gazes are now set with motion capture rotations.
I am seeking a short MEL routine that does the following: it marches through the motion capture eye rotation samples, back-calculates and sets each eye's LookAt target position, and averages the two to get the global LookAt target's position.
After that MEL routine is run, I can turn the eyes' LookAt constraint back on, the eye gaze control returns to the rig, nothing has changed visually, and the animator will have their eye UI working in the Maya viewport again.
I'm thinking this should be common logic for anyone doing facial mocap. Anyone got anything like this already?
How good is the eye tracking in the mocap? There may be issues if the targets are far away: depending on the sampling of the data, you may get 'crazy eyes' which seem not to converge, or jumpy data. If that's the case you may need to junk the eye data altogether, or smooth it heavily before retargeting.
To find the convergence of the two eyes, you can try this (like #julian, I'm using locators, etc., since doing all the math in MEL would be irritating):
1) Constrain a locator to one eye so that one axis is oriented along the look vector and another lies in the plane of the second eye. Let's say the eye aims down Z and the second eye is in the XZ plane.
2) make a second locator, parented to the first, and constrained to the second eye in the same way: pointing down Z, with the first eye in the XZ plane
3) the local Y rotation of the second locator is the angle of convergence between the two eyes.
4) Figure out the focal distance using the law of sines and a cheat for the offset of the second eye relative to the first. The local X distance of the second eye is one leg of a right triangle. The angles of the triangle are the convergence angle from step 3 and 90 minus the convergence angle. In other words:
focal distance / sin(90 - eye_locator2.ry) = eye_locator2.tx / sin(eye_locator2.ry)
so algebraically:
focal distance = eye_locator2.tx * sin(90 - eye_locator2.ry) / sin( eye_locator2.ry)
You'll have to subtract the local Z of eye2, since the triangle we're solving is shifted backwards or forwards by that much:
focal distance = (eye_locator2.tx * sin(90 - eye_locator2.ry) / sin( eye_locator2.ry)) - eye_locator2.tz
5) Position the target along the local Z direction of the eye locator at the distance derived above. It sounds like the actual control uses two look targets that can be moved apart to avoid cross-eyes - it's kind of a judgement call how much to rely on that versus the actual convergence distance. For a lot of real-world data the convergence may be far too distant for animator convenience: a target 30 meters away is pretty impractical to work with, but it might be simulated with a target 10 meters away with a big spread. Unfortunately there's no empirical answer for that one - it's a judgement call.
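A small worked example of the law-of-sines step in plain Python, with made-up locator values rather than anything read from a scene:

import math

tx = 6.4      # hypothetical local X offset of the second eye locator (scene units)
tz = -0.3     # hypothetical local Z offset of the second eye locator
ry = 12.0     # hypothetical local Y rotation = convergence angle, in degrees

ry_rad = math.radians(ry)
focal_distance = tx * math.sin(math.radians(90.0) - ry_rad) / math.sin(ry_rad) - tz
print(focal_distance)   # distance along the first eye's look vector at which to place the target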
I don't have this script but it would be fairly simple. Can you provide an example maya scene? You don't need any math. Here's how you could go about it:
Assume the axis pointing through the pupil is positive X, and focal length is 10 units.
Create 2 locators. Parent one to each eye. Set their translations to (10, 0, 0).
Create 2 more locators in worldspace. Point constrain them to the others.
Create a plusMinusAverage node.
Connect the worldspace locators' translations to plusMinusAverage1 inputs 1 and 2.
Create another locator (the lookAt)
Connect the output of plusMinusAverage1 to the translation of the lookAt locator.
Bake the translation of the lookAt locator.
Delete the other 4 locators.
Aim constrain the eyes' X axes to the lookAt.
This can all be done in a script using commands: spaceLocator, createNode, connectAttr, setAttr, bakeSimulation, pointConstraint, aimConstraint, delete.
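A rough maya.cmds sketch of those steps; the node names 'L_eye'/'R_eye', the +X pupil axis and the frame range are assumptions about your rig, so treat it as a starting point rather than a drop-in script:

import maya.cmds as cmds

eyes = ['L_eye', 'R_eye']     # assumed eye transform names
start, end = 1, 100           # assumed frame range of the imported mocap

# 1) a locator parented to each eye, pushed 10 units down +X (through the pupil)
eye_locs = []
for eye in eyes:
    loc = cmds.spaceLocator(name=eye + '_aimLoc')[0]
    cmds.parent(loc, eye)
    cmds.setAttr(loc + '.translate', 10, 0, 0, type='double3')
    eye_locs.append(loc)

# 2) worldspace locators point-constrained to the parented ones
world_locs = []
for loc in eye_locs:
    wloc = cmds.spaceLocator(name=loc + '_world')[0]
    cmds.pointConstraint(loc, wloc)
    world_locs.append(wloc)

# 3) average the two worldspace positions with a plusMinusAverage node
avg = cmds.createNode('plusMinusAverage', name='eyeTarget_avg')
cmds.setAttr(avg + '.operation', 3)   # 3 = average
cmds.connectAttr(world_locs[0] + '.translate', avg + '.input3D[0]')
cmds.connectAttr(world_locs[1] + '.translate', avg + '.input3D[1]')

# 4) the lookAt locator driven by the average, then baked
look_at = cmds.spaceLocator(name='lookAt_baked')[0]
cmds.connectAttr(avg + '.output3D', look_at + '.translate')
cmds.bakeResults(look_at, attribute=['tx', 'ty', 'tz'], time=(start, end), simulation=True)

# 5) delete the helpers and aim-constrain the eyes' X axes to the baked lookAt
cmds.delete(world_locs + eye_locs + [avg])
for eye in eyes:
    # if the eyes still carry mocap rotation keys, clear or blend them before constraining
    cmds.aimConstraint(look_at, eye, aimVector=(1, 0, 0))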
The solution ended up being quite simple. The situation is motion capture data on the rotation nodes of the eyes, while simultaneously wanting (non-technical) animator override control of the eye gaze. Within Maya, constraints have a weight factor: a parametric 0-1 value controlling the influence of the constraint. The solution is for the animator to simply key the eyes' lookAt constraint weight to 1 when they want control over the eye gaze, key those same weights to 0 when they want the motion-captured eye gaze, and use a smooth transition of those constraint weights to mask the switch. This is better than my original idea described above, because the original motion capture data remains in place, available as reference, allowing the animator to switch back and forth if need be.