How to prune triangles in output mesh of Surface Reconstruction of Point Cloud - cgal

I have an open surface represented by a point cloud, which I've put through the steps described in the Point Set Processing and Surface Reconstruction tutorial. However, applying any of the three reconstruction algorithms discussed there results in a closed mesh with many extraneous polygons, especially along the "enclosing dome" of the atmosphere around the point cloud model. See the attached pictures at the bottom of this question for visual context.
Is there an accurate, or canonically correct within CGAL, method to prune both the large and small triangles which do not intersect the original point cloud? I have tried iterating over the faces in the output mesh and removing those which satisfy either of two conditions: (1) any edge of a given face is longer than the average spacing computed for the original point cloud, or (2) the area of the face is larger than some threshold. Either filtering condition results in a model which is not viewable as a mesh in PLY format in the various viewers I've tried. Here is some code I've considered, but which apparently does not do the job:
for (face_descriptor faced : output_mesh.faces()) {
    //std::cout << faced << std::endl;
    std::vector<double> lengths;
    std::vector<Point_3> simplex;   // the corner points of this face
    // Get edges from face descriptor
    // - NOTE :> Assume three total half-edges because triangles. Not guaranteed, tho.
    for (halfedge_descriptor hed : CGAL::halfedges_around_face(output_mesh.halfedge(faced), output_mesh)) {
        vertex_descriptor target_vertex = output_mesh.target(hed);
        Point_3& target_point = output_mesh.point(target_vertex);
        halfedge_descriptor hed_next = output_mesh.next(hed);
        vertex_descriptor target_next = output_mesh.target(hed_next);
        Point_3& target_point_next = output_mesh.point(target_next);
        double length = CGAL::sqrt(CGAL::squared_distance(target_point, target_point_next));
        lengths.push_back(length);
        simplex.push_back(target_point);
    }
    // If any edge length exceeds the threshold based on average_spacing,
    // mark the face for removal (only once).
    for (double length : lengths) {
        if (length > average_spacing) {
            output_mesh.remove_face(faced);
            //CGAL::Euler::remove_face(output_mesh.halfedge(faced), output_mesh);
            std::cout << "Removed face " << faced << " with lengths: ";
            // List out the lengths found:
            for (double l : lengths) {
                std::cout << l << ", ";
            }
            std::cout << std::endl;
            break;   // don't call remove_face on the same face twice
        }
    }
}
// clear faces marked as removed
output_mesh.collect_garbage();
//output_mesh.is_valid();
Thanks for your insights in advance! This has had me going in circles for weeks.

Following @sloriot's comment, using Advancing Front Surface Reconstruction is the correct way of handling this, in addition to the hole-filling and fairing steps.
Due to the contiguous nature of advancing front, however, some extraneous facets are still generated. A solution we've worked out involves projecting nearby points onto each facet, using the distance to the facet's supporting plane (the simplex) together with the barycentric coordinates of each projected point as filtering criteria. This lets us remove facets that are not supported by any points of the cloud. We recommend doing this prior to hole filling/fairing, or before applying other algorithms from the Polygon Mesh Processing package to perform some additional "fixing" for our application.
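For reference, here is a rough sketch of that filtering criterion, not our exact code: output_mesh, points, and distance_tolerance are placeholders, and in practice you would only test the points near each facet (e.g. via a k-d tree) rather than the whole cloud:
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/boost/graph/iterator.h>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef K::Point_3  Point_3;
typedef K::Plane_3  Plane_3;
typedef K::Vector_3 Vector_3;
typedef CGAL::Surface_mesh<Point_3> Mesh;

// Returns true if 'p' lies within 'tol' of the facet's supporting plane and
// projects inside the triangle (a, b, c). Barycentric coordinates are computed by hand.
bool point_supports_facet(const Point_3& p, const Point_3& a, const Point_3& b,
                          const Point_3& c, double tol)
{
    Plane_3 plane(a, b, c);
    if (CGAL::squared_distance(p, plane) > tol * tol)
        return false;
    Point_3 q = plane.projection(p);
    // Barycentric coordinates of q with respect to (a, b, c)
    Vector_3 v0 = b - a, v1 = c - a, v2 = q - a;
    double d00 = v0 * v0, d01 = v0 * v1, d11 = v1 * v1;
    double d20 = v2 * v0, d21 = v2 * v1;
    double denom = d00 * d11 - d01 * d01;
    if (denom == 0.0) return false;           // degenerate facet
    double v = (d11 * d20 - d01 * d21) / denom;
    double w = (d00 * d21 - d01 * d20) / denom;
    double u = 1.0 - v - w;
    return u >= 0.0 && v >= 0.0 && w >= 0.0;  // inside (or on the border of) the facet
}

// Usage sketch: remove facets that no input point supports.
for (Mesh::Face_index f : output_mesh.faces()) {
    std::vector<Point_3> corners;
    for (Mesh::Halfedge_index h : CGAL::halfedges_around_face(output_mesh.halfedge(f), output_mesh))
        corners.push_back(output_mesh.point(output_mesh.target(h)));
    if (corners.size() != 3) continue;        // expect triangles from advancing front
    bool supported = false;
    for (const Point_3& p : points) {         // ideally only points near this facet
        if (point_supports_facet(p, corners[0], corners[1], corners[2], distance_tolerance)) {
            supported = true;
            break;
        }
    }
    if (!supported)
        output_mesh.remove_face(f);
}
output_mesh.collect_garbage();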
Thanks so much for the help!

Related

Does CGAL 3D mesh generation require condition except closed manifold?

I'm trying to triangulate a given coronary artery (CA) model (please refer to the image and file below).
At first, I tried to triangulate the models using the 3D constrained Delaunay triangulation in the TetGen engine, but TetGen did not always succeed. I tried about 40 models with closed boundaries, but only half of them were successful.
As an alternative, I found that CGAL 3D mesh generation will generate a similar mesh based on Delaunay triangulation (of course, it's different from 3D constrained Delaunay triangulation).
I also tested it on the same 40-model dataset used in the TetGen test, but only about a quarter of them were successful. This is strange, because even fewer models were processed successfully than in the TetGen test.
Are there any conditions for CGAL mesh generation other than the closed-manifold condition (no boundary and manifold)? Here is the code I've used in my test case. It is almost the same as the example code from the CGAL website.
// Create input polyhedron
Polyhedron polyhedron;
std::ifstream input(fileName.str());
input >> polyhedron;
// Create domain
Mesh_domain domain(polyhedron);
// Mesh criteria (no cell_size set)
Mesh_criteria criteria(facet_angle = 25, facet_size = 0.15, facet_distance = 0.008,
                       cell_radius_edge_ratio = 3);
// Mesh generation
C3t3 c3t3 = CGAL::make_mesh_3<C3t3>(domain, criteria, no_perturb(), no_exude());
findMinAndMax();
cout << "Polygon finish: " << c3t3.number_of_cells_in_complex() << endl;
Here is one of the CA models used in the test case.
The image of the CA model
Also, I want to preserve the given model's triangles in the generated mesh, as a constrained Delaunay triangulation would. Is there any way to generate the mesh without specific criteria?
Please let me know if you want to know more.
The problem is that the mesh generator does not construct a good enough initial point set. The current strategy is to shoot rays in random directions from the center of the bounding box of your object. Alternatively, one might either take a random sample of points on the surface, or shoot random rays from points on the skeleton. I've put a hacky solution for you on github. The first argument is your mesh, the second the grid cell size used to sample points on the mesh.
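For illustration only (this is not the github code mentioned above): one way to grid-sample points on a triangle mesh with CGAL is Polygon_mesh_processing::sample_triangle_mesh. Here mesh and cell_size are placeholders, and the exact named parameters may differ between CGAL versions:
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Polygon_mesh_processing/distance.h>   // sample_triangle_mesh
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3> Mesh;

std::vector<K::Point_3> grid_sample(const Mesh& mesh, double cell_size)
{
    std::vector<K::Point_3> samples;
    // Sample the surface on a regular grid of spacing 'cell_size';
    // these points can then be used to seed the mesh generator.
    CGAL::Polygon_mesh_processing::sample_triangle_mesh(
        mesh, std::back_inserter(samples),
        CGAL::parameters::use_grid_sampling(true).grid_spacing(cell_size));
    return samples;
}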

CGAL 3D Delaunay Triangulation - First Vertex is Origin?

I am performing a 3D Delaunay Triangulation of points sampled from a sphere, and I am looking at the vertices of the resultant triangulation essentially by doing this:
for (Delaunay_Vertex_iter p = T.vertices_begin(); p != T.vertices_end(); p++) {
    std::cout << p->point() << std::endl;
}
While T.number_of_vertices() == 270, I get 271 vertices, the first one being the origin (0, 0, 0). Why?
This is the infinite vertex, which has unspecified coordinates and happens to be the origin here. You should iterate using finite_vertices_begin()/finite_vertices_end() instead.
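For example (a minimal sketch using the question's triangulation T):
// Iterate over finite vertices only, skipping the infinite vertex.
for (auto v = T.finite_vertices_begin(); v != T.finite_vertices_end(); ++v) {
    std::cout << v->point() << std::endl;
}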
See http://doc.cgal.org/latest/Triangulation_3/ for information about the infinite vertex.
This can well happen, since floating point numbers are inherently NOT exactly on unit spheres. Hence, the data type of your kernel and the proximity of your sampling affect the results.
You can use CGAL's spherical kernel for the 3D case or the implementation described in:
https://stackoverflow.com/a/45240506/4994003
to avoid precision issues for the general dD case.

Calculate the z coordinate of an object relative to the camera with kinect

I want to calculate the z coordinate of an object relative to the camera (Kinect), knowing the depth information from the Kinect. I also know the intrinsic parameters.
Any help is much appreciated. Thanks!
If you want the real world measurements (including depth), you want to retrieve the point cloud map, something like:
Mat world;
if (capture.retrieve(world, CV_CAP_OPENNI_POINT_CLOUD_MAP)) {
    Vec3f pt3D = world.at<Vec3f>(yourY, yourX);
    cout << "pt3D" << pt3D << endl;
}
Notice that you pass the y first, then the x.
I've learned that somewhat recently :)
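If you'd rather compute it yourself from the raw depth map and the intrinsics (fx, fy, cx, cy), the standard pinhole back-projection is a handy cross-check. A sketch; the variable names and the depth-to-meters scaling are assumptions that depend on your driver and calibration:
#include <opencv2/core/core.hpp>

// Back-project a pixel (x, y) with depth Z (in meters) into camera coordinates.
// fx, fy, cx, cy come from the intrinsic matrix; Z is the depth map value at (y, x),
// converted to meters (e.g. millimeters / 1000 with many Kinect drivers).
cv::Point3f backProject(int x, int y, float Z, float fx, float fy, float cx, float cy)
{
    float X = (x - cx) * Z / fx;
    float Y = (y - cy) * Z / fy;
    return cv::Point3f(X, Y, Z);   // Z is already the coordinate along the optical axis
}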

OpenGL texture mapping with different coordinates systems

I already asked a question about texture mapping and these two are related (this question).
I'm working with Quartz Composer, which appears to be kind of particular about textures...
I have a complex polygon that I triangulate in a specific coordinate system (-1 -> 1 on x | -0.75 -> 0.75 on y). I obtain an array of triangle vertices in this coordinate system (triangles 1 to 6 on the left pic).
Then I render each polygon separately (it's necessary for my program), by applying a scale function to its vertices to map them from this coordinate system to the OpenGL one (0. -> 1.). Here it is, even though for a 0 -> 1 range it's kind of stupid:
return (((1. - 0.) * (**myVertexXorY** - minTriangleBound)) / (maxTriangleBound - minTriangleBound)) + 0.;
But I want one image to be textured on these triangles (like on the picture above). So I begin by getting the whole polygon bounds (1 on the right pic), then the triangle bounds (2 on the right pic). I scale 1 to the picture coordinates (3 on the right pic) in pixels, then I get the triangle bounds (2) in pixels.
This gives me the bounds to lock my texture in OpenGL with Quartz:
NSRect myBounds = NSMakeRect(originXinPixels, originYinPixels, widthForTheTriangle, heightForTheTriangle);
And I lock my texture
[myImage lockTextureRepresentationWithColorSpace:space forBounds:myBounds];
Then, with OpenGL:
for (int32 i = 0; i < vertexCount; ++i)
{
    verts[i] = myTriangle.vertices[i];
    texcoord[0] = [self myScaleFunctionFor:XinQuartzCoordinateSystem From:0 To:1];
    texcoord[1] = [self myScaleFunctionFor:YinQuartzCoordinateSystem From:0 To:1];
    glTexCoord2fv(texcoord);
}
And I obtain what you can see: sometimes parts of the image fit, sometimes they don't (well, in fact with this particular polygon, it doesn't fit at all...).
I'm not really sure I understood your question, but:
What keeps you from directly supplying texture coordinates that match the topology of your source picture? That would be far easier than trying to find some per-triangle linear mapping that moves the picture in the right way.
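For instance, a minimal sketch of that idea, assuming the whole source picture is bound as the texture and that polyMinX/polyMaxX/polyMinY/polyMaxY are the full polygon's bounds (immediate-mode GL, purely for illustration):
// Derive each vertex's texture coordinate from its position inside the whole
// polygon's bounding box, so every triangle samples the same image consistently.
for (int i = 0; i < vertexCount; ++i)
{
    GLfloat u = (myTriangle.vertices[i].x - polyMinX) / (polyMaxX - polyMinX);
    GLfloat v = (myTriangle.vertices[i].y - polyMinY) / (polyMaxY - polyMinY);
    GLfloat texcoord[2] = { u, v };
    glTexCoord2fv(texcoord);
    glVertex2f(myTriangle.vertices[i].x, myTriangle.vertices[i].y);
}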

OpenGL models show grid-like lines

Shading problem solved; grid lines persist. See Update 5
I'm working on an .obj file loader for OpenGL using Objective-C.
I'm trying to get objects to load and render them with shading. I'm not using any textures, materials, etc. to modify the model besides a single light source. When I render any model, the shading is not distributed properly (as seen in the pictures below). I believe this has something to do with the normals, but I'm not sure.
These are my results:
And this is the type of effect I'm trying to achieve:
I thought the problem was that the normals I parsed from the file were incorrect, but after calculating them myself and getting the same results, I found that this wasn't true. I also thought not having GL_SMOOTH enabled was the issue, but I was wrong there too.
So I have no idea what I'm doing wrong here, so sorry if the question seems vague. If any more info is needed, I'll add it.
Update:
Link to larger picture of broken monkey head: http://oi52.tinypic.com/2re5y69.jpg
Update 2: In case there is a mistake in how I'm calculating normals, this is what I'm doing:
Create a triangle for each group of indices.
Calculate the normal for the triangle and store it in a vector.
Ensure the vector is normalized with the following function:
static inline void normalizeVector(Vector3f *vector) {
    GLfloat vecMag = VectorMagnitude(*vector);
    if (vecMag == 0.0) {
        vector->x /= 1.0;
        vector->y /= 0.0;
        vector->z /= 0.0;
    }
    vector->x /= vecMag;
    vector->y /= vecMag;
    vector->z /= vecMag;
}
Update 3: Here's the code I'm using to create the normals:
- (void)calculateNormals {
    for (int i = 0; i < numOfIndices; i += 3) {
        // Gather the three vertices of this triangle from the index buffer.
        Triangle triangle;
        triangle.v1.x = modelData.vertices[modelData.indices[i]*3];
        triangle.v1.y = modelData.vertices[modelData.indices[i]*3+1];
        triangle.v1.z = modelData.vertices[modelData.indices[i]*3+2];
        triangle.v2.x = modelData.vertices[modelData.indices[i+1]*3];
        triangle.v2.y = modelData.vertices[modelData.indices[i+1]*3+1];
        triangle.v2.z = modelData.vertices[modelData.indices[i+1]*3+2];
        triangle.v3.x = modelData.vertices[modelData.indices[i+2]*3];
        triangle.v3.y = modelData.vertices[modelData.indices[i+2]*3+1];
        triangle.v3.z = modelData.vertices[modelData.indices[i+2]*3+2];
        // Compute and normalize the face normal, then write it to all three corners.
        Vector3f normals = calculateNormal(triangle);
        normalizeVector(&normals);
        modelData.normals[modelData.surfaceNormals[i]*3] = normals.x;
        modelData.normals[modelData.surfaceNormals[i]*3+1] = normals.y;
        modelData.normals[modelData.surfaceNormals[i]*3+2] = normals.z;
        modelData.normals[modelData.surfaceNormals[i+1]*3] = normals.x;
        modelData.normals[modelData.surfaceNormals[i+1]*3+1] = normals.y;
        modelData.normals[modelData.surfaceNormals[i+1]*3+2] = normals.z;
        modelData.normals[modelData.surfaceNormals[i+2]*3] = normals.x;
        modelData.normals[modelData.surfaceNormals[i+2]*3+1] = normals.y;
        modelData.normals[modelData.surfaceNormals[i+2]*3+2] = normals.z;
    }
}
Update 4: Looking further into this, it seems like the .obj file's normals are surface normals, while I need the vertex normals. (Maybe)
If vertex normals are what I need, it would be great if anybody could explain the theory behind calculating them. I tried looking it up, but I only found recipes, not theory (e.g. "get the cross product of each face and normalize it"). If I know what I have to do, I can look up an individual step if I get stuck and won't have to keep updating this.
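For reference, the usual recipe is to sum the face normals of all triangles sharing a vertex and then normalize the result; a minimal self-contained sketch with hypothetical types, independent of the loader above:
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b)   { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static Vec3 normalize(Vec3 v) {
    float m = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return (m > 0.0f) ? Vec3{v.x/m, v.y/m, v.z/m} : v;   // leave zero-length vectors alone
}

// vertices: one Vec3 per vertex; indices: three per triangle.
std::vector<Vec3> computeVertexNormals(const std::vector<Vec3>& vertices,
                                       const std::vector<unsigned>& indices)
{
    std::vector<Vec3> normals(vertices.size(), Vec3{0.0f, 0.0f, 0.0f});
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        Vec3 a = vertices[indices[i]], b = vertices[indices[i+1]], c = vertices[indices[i+2]];
        Vec3 fn = cross(sub(b, a), sub(c, a));            // (unnormalized) face normal
        normals[indices[i]]   = add(normals[indices[i]],   fn);
        normals[indices[i+1]] = add(normals[indices[i+1]], fn);
        normals[indices[i+2]] = add(normals[indices[i+2]], fn);
    }
    for (Vec3& n : normals) n = normalize(n);             // averaged direction per vertex
    return normals;
}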
Update 5: I re-wrote my whole loader, and got it to work, somehow. Although it shades properly, I still have those grid-like lines that you can see on my original results.
Your normalizeVector function is clearly wrong. Dividing by zero is never a good idea. It should work when vecMag != 0.0, though. How are you calculating the normals? Using the cross product? What happens if you let OpenGL calculate the normals?
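For example, a fixed version would simply bail out on zero-length vectors (reusing the question's Vector3f and VectorMagnitude):
static inline void normalizeVector(Vector3f *vector) {
    GLfloat vecMag = VectorMagnitude(*vector);
    if (vecMag == 0.0) {
        return;   // nothing sensible to do with a zero-length vector
    }
    vector->x /= vecMag;
    vector->y /= vecMag;
    vector->z /= vecMag;
}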
It may be the direction of your normals that's at fault - if they're the wrong way around then you'll only see the polygons that are supposed to be pointing away from you.
Try calling:
glDisable(GL_CULL_FACE);
and see whether your output changes.
Given that we're seeing polygon edges I'd also check for:
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);