I am performing a 3D Delaunay Triangulation of points sampled from a sphere, and I am looking at the vertices of the resultant triangulation essentially by doing this:
for (Delaunay_Vertex_iter p = T.vertices_begin(); p != T.vertices_end(); ++p) {
    std::cout << p->point() << std::endl;
}
While T.number_of_vertices() == 270, I get 271 vertices, the first one being the origin (0, 0, 0). Why?
This is the infinite vertex, which has unspecified coordinates and happens to be the origin here. You should iterate using finite_vertices_begin()/finite_vertices_end() instead.
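For example, a minimal sketch (assuming your triangulation typedef is called Delaunay here; Delaunay_triangulation_3 provides the Finite_vertices_iterator type):
for (Delaunay::Finite_vertices_iterator p = T.finite_vertices_begin();
     p != T.finite_vertices_end(); ++p) {
    std::cout << p->point() << std::endl;
}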
See http://doc.cgal.org/latest/Triangulation_3/ for information about the infinite vertex.
This can well happen, since floating-point numbers are inherently NOT exactly on the unit sphere. Hence, the number type of your kernel and the proximity of your sample points both affect the results.
You can use CGAL's spherical kernel for the 3D case or the implementation described in:
https://stackoverflow.com/a/45240506/4994003
to avoid precision issues for the general dD case.
I have an open surface represented by a point cloud, which I've put through the steps described in the Point Set Processing and Surface Reconstruction tutorial. However, applying any of the three discussed reconstruction algorithms results in a closed mesh with many extraneous polygons, especially along the "enclosing dome" of the atmosphere around the point-cloud model. See the attached pictures at the bottom of this question for visual context.
Is there an accurate, or canonically correct within CGAL, method to prune both the large and small triangles which do not intersect the original point cloud? I have tried iterating over the faces in the output mesh and removing the faces which satisfy either of two conditions: (1) any edge of a given face is longer than the average spacing computed for the original point cloud, or (2) the area of the face is larger than some threshold. Either filtering condition results in a model which is not viewable as a mesh in PLY format in the various viewers I've tried. Here is some code which I've considered, but which apparently does not do the job:
for (face_descriptor faced : output_mesh.faces()) {
    std::vector<double> lengths;
    std::vector<Point_3> simplex;   // corner points of this face
    // Get edges from the face descriptor.
    // NOTE: assumes exactly three half-edges because the faces are triangles;
    //       this is not guaranteed in general.
    for (halfedge_descriptor hed : CGAL::halfedges_around_face(output_mesh.halfedge(faced), output_mesh)) {
        vertex_descriptor target_vertex = output_mesh.target(hed);
        Point_3& target_point = output_mesh.point(target_vertex);
        halfedge_descriptor hed_next = output_mesh.next(hed);
        vertex_descriptor target_next = output_mesh.target(hed_next);
        Point_3& target_point_next = output_mesh.point(target_next);
        double length = CGAL::sqrt(CGAL::squared_distance(target_point, target_point_next));
        lengths.push_back(length);
        simplex.push_back(target_point);
    }
    // If any edge length exceeds the threshold based on average_spacing,
    // mark the face as removed (it only disappears after collect_garbage()).
    for (double length : lengths) {
        if (length > average_spacing) {
            output_mesh.remove_face(faced);
            //CGAL::Euler::remove_face( output_mesh.halfedge(faced), output_mesh );
            std::cout << "Removed face " << faced << " with lengths: ";
            // List out the lengths found:
            for (double l : lengths) {
                std::cout << l << ", ";
            }
            std::cout << std::endl;
            break;   // don't remove the same face twice
        }
    }
}
// clear faces marked as removed
output_mesh.collect_garbage();
//output_mesh.is_valid();
Thanks for your insights in advance! This has had me going in circles for weeks.
Following @sloriot's comment, using Advancing Front Surface Reconstruction is the correct way of handling this, in addition to the hole-filling and fairing steps.
Due to the contiguous nature of advancing front, however, there are still some extraneous facets generated. A solution we've worked out involves projecting the input points onto each facet and using the distance to the facet's (simplex's) plane and the barycentric coordinates of each projected point as filtering criteria. This allows facets which do not match any of the original points to be removed; a simplified sketch is shown below. It is best to do this before hole filling/fairing, or before applying other algorithms from the Polygon Mesh Processing package to do some additional "fixing" for your application.
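For reference, a simplified sketch of that filtering, using the point-to-triangle distance (CGAL::squared_distance between a Point_3 and a Triangle_3) in place of the separate plane-distance and barycentric tests. output_mesh and average_spacing are the names from the question; points is assumed to hold the input point cloud and Kernel the geometric kernel typedef:
for (face_descriptor faced : output_mesh.faces()) {
    // Collect the three corner points of the facet.
    std::vector<Point_3> corners;
    for (halfedge_descriptor hed : CGAL::halfedges_around_face(output_mesh.halfedge(faced), output_mesh)) {
        corners.push_back(output_mesh.point(output_mesh.target(hed)));
    }
    Kernel::Triangle_3 triangle(corners[0], corners[1], corners[2]);
    // Keep the facet only if at least one input point lies close to it.
    bool supported = false;
    for (const Point_3& p : points) {
        if (CGAL::squared_distance(p, triangle) <= average_spacing * average_spacing) {
            supported = true;
            break;
        }
    }
    if (!supported) {
        output_mesh.remove_face(faced);
    }
}
output_mesh.collect_garbage();
The brute-force inner loop is only meant to show the criterion; in practice a spatial search structure (e.g. a k-d tree over the input points) keeps this from being quadratic.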
Thanks so much for the help!
Recently I've been struggling with a pose estimation problem using a single camera. I have some 3D points and the corresponding 2D points on the image. I then use solvePnP to get the rotation and translation vectors. The problem is: how can I determine whether the vectors are correct?
Now I use an indirect way to do this:
I use the rotation matrix, the translation vector, and the world 3D coordinates of a certain point to obtain the coordinates of that point in the camera coordinate system. Then all I have to do is determine whether those coordinates are reasonable. I think I know the directions of the x, y, and z axes of the camera coordinate system.
Is the camera center the origin of the camera coordinate system?
Now consider the x component of that point. Is x equivalent to the distance between the camera and the point in world space along the camera's x-axis direction (with the sign determined by which side of the camera the point lies on)?
The figure below is drawn in world space, while the axes depicted are those of the camera coordinate system.
======== How the camera and the point are placed in world space ========
|
|
Camera--------------------------> Z axis
| |} Xw?
| P(Xw, Yw, Zw)
|
v x-axis
My rvec and tvec results seem both right and wrong. For a specified point, the z value seems reasonable; I mean, if this point is about one meter away from the camera in the z direction, then the z value is about 1. But for x and y, given the location of the point I think x and y should be positive, yet they are negative. What's more, the pattern detected in the original image is like this:
But using the points coordinates calculated in Camera system and the camera intrinsic parameters, I get an image like this:
The target keeps its pattern, but it has moved from the bottom right to the top left. I cannot understand why.
Yes, the camera center is the origin of the camera coordinate system, which this post also confirms.
In camera pose estimation, "the value seems reasonable" can be quantified as the backprojection (reprojection) error. That's a measure of how well your resulting rotation and translation map the 3D points onto the 2D pixels. Unfortunately, solvePnP does not return a residual error measure, so one has to compute it:
cv::solvePnP(worldPoints, pixelPoints, camIntrinsics, camDistortion, rVec, tVec);
// Use the computed solution to project the 3D pattern back into the image
std::vector<cv::Point2f> projectedPattern;
cv::projectPoints(worldPoints, rVec, tVec, camIntrinsics, camDistortion, projectedPattern);
// Compute the error of each 2D-3D correspondence
std::vector<float> errors;
for (size_t i = 0; i < pixelPoints.size(); ++i)
{
    float dx = pixelPoints.at(i).x - projectedPattern.at(i).x;
    float dy = pixelPoints.at(i).y - projectedPattern.at(i).y;
    // Euclidean distance between projected and measured pixel
    float err = std::sqrt(dx * dx + dy * dy);
    errors.push_back(err);
}
// Here, compute the max or average of your "errors"
An average backprojection error for a calibrated camera might be in the range of 0 to 2 pixels. Judging from your two pictures, yours would be much larger than that. To me it looks like a scaling problem. If I'm right, you compute the projection yourself; maybe you can try cv::projectPoints() once and compare.
When it comes to transformations, I've learned not to follow my imagination :) The first thing I usually do with the returned rVec and tVec is create a 4x4 rigid transformation matrix out of them (I once posted code for this here); a sketch of that construction follows. This makes things even less intuitive, but it is compact and handy.
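A minimal sketch of that construction, assuming rVec and tVec are the CV_64F vectors returned by the solvePnP call above:
// Build a 4x4 rigid transformation (camera-from-world) from rVec/tVec.
cv::Mat R;
cv::Rodrigues(rVec, R);                       // 3x1 rotation vector -> 3x3 rotation matrix

cv::Mat T = cv::Mat::eye(4, 4, CV_64F);       // 4x4 homogeneous transform
R.copyTo(T(cv::Rect(0, 0, 3, 3)));            // top-left 3x3 block = rotation
tVec.copyTo(T(cv::Rect(3, 0, 1, 3)));         // last column (rows 0-2) = translation

// A homogeneous world point Pw (4x1) is then mapped to camera space with T * Pw.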
Now I know the answers.
Yes, the camera center is the origin of the camera coordinate system.
Consider that the coordinates of a point in the camera coordinate system are (xc, yc, zc). Then xc should be the distance between the camera and the point in the real world along the camera's x direction.
Next, how to determine whether the output matrices are right?
1. As @eidelen points out, the backprojection error is one indicative measure.
2. Calculate the camera-space coordinates of the points from their world coordinates and the matrices, and check whether they are reasonable (as described above).
So why did I get a wrong result (the pattern remained but moved to a different region of the image)?
The cameraMatrix parameter of solvePnP() supplies the camera's intrinsic parameters. In the camera matrix, cx and cy should be (approximately) width/2 and height/2, i.e. the image centre, while I had used the full width and height of the image. I think that caused the error. After I corrected that and re-calibrated the camera, everything seems fine.
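For illustration, a camera matrix laid out that way might look like this (the focal lengths and image size below are placeholder example values; use your calibrated ones):
double fx = 525.0, fy = 525.0;          // focal lengths in pixels (example values)
double width = 640.0, height = 480.0;   // image size in pixels (example values)
cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
    fx,  0.0, width  / 2.0,             // principal point roughly at the image centre
    0.0, fy,  height / 2.0,
    0.0, 0.0, 1.0);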
I'm trying to triangulate a given coronary artery model (please refer to the image and file).
At first, I tried to triangulate them using 3D constrained Delaunay triangulation with the TetGen engine, but it appears that TetGen did not always succeed. I've tried about 40 models with closed boundaries, but only half of them were successful.
As an alternative, I found that CGAL 3D mesh generation will generate a similar mesh based on Delaunay triangulation (of course, it's different from 3D constrained Delaunay triangulation).
I also tested it on the same 40 models used in the TetGen test, but only about a quarter of them were successful. It is strange that even fewer models were processed than in the TetGen test.
Are there any conditions for CGAL mesh generation other than the closed-manifold condition (no boundary & manifold)? Here is the code I've used in my test case; it is almost the same as the example code from the CGAL website.
// Create input polyhedron
Polyhedron polyhedron;
std::ifstream input(fileName.str());
input >> polyhedron;

// Create domain
Mesh_domain domain(polyhedron);

// Mesh criteria (no cell_size set); the named parameters below require
// `using namespace CGAL::parameters;` as in the CGAL example
Mesh_criteria criteria(facet_angle = 25, facet_size = 0.15, facet_distance = 0.008,
                       cell_radius_edge_ratio = 3);

// Mesh generation
C3t3 c3t3 = CGAL::make_mesh_3<C3t3>(domain, criteria, no_perturb(), no_exude());

findMinAndMax();
cout << "Polygon finish: " << c3t3.number_of_cells_in_complex() << endl;
Here is one of the CA models used in the test case:
[Image of the CA model]
Also, I want to preserve the given model's triangles in the generated mesh, as a constrained Delaunay triangulation would. Is there any way to generate the mesh without specific criteria?
Please let me know if you want to know more.
The problem is that the mesh generator does not construct a good enough initial point set. The current strategy is to shoot rays in random directions from the center of the bounding box of your object. Alternatively, one might either take a random sample of points on the surface, or shoot random rays from points on the skeleton. I've put a hacky solution for you on GitHub. The first argument is your mesh, the second the grid cell size used to sample points on the mesh.
Background:
This problem is related with 3D tracking of object.
My system projects the object/samples from known parameters (X, Y, Z) into OpenGL and tries to match them with the image and depth information obtained from a Kinect sensor to infer the object's 3D position.
Problem:
Kinect depth -> processing -> value in millimeters
OpenGL -> depth buffer -> value between 0 and 1 (nonlinearly mapped between near and far)
I could recover the eye-space Z value from OpenGL using the method described at http://www.songho.ca/opengl/gl_projectionmatrix.html, but this would be very slow.
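For reference, a minimal sketch of that recovery, assuming the standard perspective projection and the default [0, 1] depth range:
// Convert a depth-buffer sample d in [0, 1] back to a positive eye-space distance.
// zNear / zFar are the near and far plane distances used to build the projection.
float eyeDepthFromBuffer(float d, float zNear, float zFar)
{
    float zNdc = 2.0f * d - 1.0f;     // window depth -> NDC depth
    return 2.0f * zNear * zFar / (zFar + zNear - zNdc * (zFar - zNear));
}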
I am sure this is a common problem, so I hope some clever solution exists.
Question:
What is an efficient way to recover the eye-space Z coordinate from OpenGL?
Or is there another way to solve the above problem?
"Now my problem is Kinect depth is in mm"
No, it is not. The Kinect reports its depth as a value in an 11-bit range of arbitrary units. Only after some calibration has been applied can the depth value be interpreted as a physical unit. You're right insofar as OpenGL perspective-projection depth values are nonlinear.
So if I understand you correctly, you want to emulate a Kinect by retrieving the content of the depth buffer, right? Then the easiest solution is to use a combination of vertex and fragment shader, in which the vertex shader passes the linear depth as an additional varying to the fragment shader, and the fragment shader then overwrites the fragment's depth value with the passed value (you could also use an additional render target for this).
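A minimal sketch of the render-target variant (GLSL 3.30 sources kept as C++ string literals; the attribute and uniform names are placeholders), where the fragment shader writes the linear eye-space depth into a single-channel float attachment such as GL_R32F:
const char* depthVertexShader = R"(
    #version 330 core
    layout(location = 0) in vec3 position;
    uniform mat4 modelView;
    uniform mat4 projection;
    out float eyeDepth;                        // linear eye-space depth
    void main() {
        vec4 eyePos = modelView * vec4(position, 1.0);
        eyeDepth    = -eyePos.z;               // eye space looks down -z
        gl_Position = projection * eyePos;
    }
)";

const char* depthFragmentShader = R"(
    #version 330 core
    in float eyeDepth;
    out float fragDepth;                       // rendered into the float target
    void main() {
        fragDepth = eyeDepth;                  // already linear, in scene units
    }
)";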
Another method is to use a 1D texture, projected into the depth range of the scene, where the texture values encode the depth. The desired value would then be in the color buffer.
I'm trying to implement a geometry templating engine. One of the parts is taking a prototypical polygonal mesh and aligning an instantiation with some points in the larger object.
So, the problem is this: given 3d point positions for some (perhaps all) of the verts in a polygonal mesh, find a scaled rotation that minimizes the difference between the transformed verts and the given point positions. I also have a centerpoint that can remain fixed, if that helps. The correspondence between the verts and the 3d locations is fixed.
I'm thinking this could be done by solving for the coefficients of a transformation matrix, but I'm a little unsure how to build the system to solve.
An example of this is a cube. The prototype would be the unit cube, centered at the origin, with vert indices:
4----5
|\ \
| 6----7
| | |
0 | 1 |
\| |
2----3
An example of the vert locations to fit:
v0: 1.243,2.163,-3.426
v1: 4.190,-0.408,-0.485
v2: -1.974,-1.525,-3.426
v3: 0.974,-4.096,-0.485
v5: 1.974,1.525,3.426
v7: -1.243,-2.163,3.426
So, given that prototype and those points, how do I find the single scale factor, and the rotation about x, y, and z that will minimize the distance between the verts and those positions? It would be best for the method to be generalizable to an arbitrary mesh, not just a cube.
Assuming you have all points and their correspondences, you can fine-tune your match by solving the least squares problem:
minimize Norm(T*V-M)
where T is the transformation matrix you are looking for, V holds the vertices to fit, and M holds the vertices of the prototype. Norm refers to the Frobenius norm. M and V are 3xN matrices where each column is the 3-vector of a prototype vertex and of the corresponding vertex in the fitting vertex set, and T is a 3x3 transformation matrix. The transformation matrix that minimizes the mean squared error is then M*transpose(V)*inverse(V*transpose(V)). The resulting matrix will in general not be orthogonal (and you wanted one with no shear), so you can solve a matrix Procrustes problem to find the nearest orthogonal matrix with the SVD.
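A sketch of both steps with Eigen, assuming full correspondences and that V and M are filled as described (3xN, one column per corresponding vertex, both expressed relative to the fixed centre point). Here the Procrustes step keeps a single uniform scale (the mean singular value), matching the one scale factor asked for in the question:
#include <Eigen/Dense>

Eigen::Matrix3d fitScaledRotation(const Eigen::Matrix3Xd& V,   // vertices to fit
                                  const Eigen::Matrix3Xd& M)   // prototype vertices
{
    // Unconstrained least-squares minimiser of ||T*V - M||_F
    // (maps the fitted vertices onto the prototype; swap V and M for the other direction).
    Eigen::Matrix3d T = M * V.transpose() * (V * V.transpose()).inverse();

    // Procrustes step: project T onto the scaled rotations, T ~ s * R with R orthogonal.
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(T, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d R = svd.matrixU() * svd.matrixV().transpose();
    if (R.determinant() < 0.0) {                  // guard against a reflection
        Eigen::Matrix3d D = Eigen::Matrix3d::Identity();
        D(2, 2) = -1.0;
        R = svd.matrixU() * D * svd.matrixV().transpose();
    }
    double s = svd.singularValues().mean();       // single uniform scale factor
    return s * R;                                 // nearest scaled rotation to T
}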
Now, if you don't know which given points will correspond to which prototype points, the problem you want to solve is called surface registration. This is an active field of research. See for example this paper, which also covers rigid registration, which is what you're after.
If you want to create a mesh on an arbitrary 3D geometry, this is not the way it's typically done.
You should look at octree mesh generation techniques. You'll have better success if you work with a true 3D primitive, which means tetrahedra instead of cubes.
If your geometry is a 3D body, all you'll have is a surface description to start with. Determining "optimal" interior points isn't meaningful, because you don't have any. You'll want them to be arranged in such a way that the tetrahedra inside aren't too distorted, but that's the best you'll be able to do.