Reconstruct surface from 3D triangular meshes - mesh

I have a 3D model that consists of 3D triangular meshes. I want to partition the meshes into groups, where each group represents one surface, such as a planar face or a cylindrical surface. This is essentially surface recognition/reconstruction.
The input is a set of 3D triangular meshes. The output is a segmentation of the meshes, one segment per surface.
Is there any library that meets this requirement?

If you want to do a lot of mesh processing, then the Point Cloud Library is a good idea, but I'd also suggest CGAL: http://www.cgal.org for more algorithms and loads of data structures aimed at meshes.
Lastly, the problem you describe is most easily solved on your own:
enumerate all vertices
enumerate all polygons
create an array of ints with the size of the number of vertices in your "big" mesh, initialize with 0.
create an array of ints with the size of the number of polygons in your "big" mesh, initialize with 0.
initialize a counter to 0
for each polygon in your mesh, look at its vertices and the value each one has in the vertex array.
if the values for all of its vertices are zero, increment the counter and assign it to those entries in the vertex array and to the polygon's entry in the polygon array.
if not, relabel all vertices and polygons carrying a higher label to the smallest non-zero label among them.
The relabeling can be done quickly with a look up table.
This might save you lots of issues interfacing your code to some library you're not really interested in.
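The steps above amount to connected-component labeling. Here is a minimal stdlib-only sketch (the names `LabelTable` and `segmentConnectedComponents` are mine, not from CGAL or PCL), where a small union-find plays the role of the look-up table used for relabeling:

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <numeric>
#include <vector>

// Union-find acting as the "look up table": merging two labels relabels
// one component to the other's representative in near-constant time.
struct LabelTable {
    std::vector<int> parent;
    explicit LabelTable(std::size_t n) : parent(n) {
        std::iota(parent.begin(), parent.end(), 0);
    }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    void merge(int a, int b) { parent[find(a)] = find(b); }
};

// Returns one label per polygon; polygons connected through shared
// vertices (directly or transitively) end up with the same label.
std::vector<int> segmentConnectedComponents(
        int vertexCount, const std::vector<std::array<int, 3>>& polygons) {
    std::vector<int> vertexLabel(vertexCount, -1);   // -1 = not yet labeled
    LabelTable table(polygons.size());
    for (std::size_t p = 0; p < polygons.size(); ++p) {
        int label = static_cast<int>(p);             // provisional label = polygon index
        for (int v : polygons[p]) {
            if (vertexLabel[v] != -1)
                table.merge(vertexLabel[v], label);  // vertex seen before: unite components
            vertexLabel[v] = label;
        }
    }
    // Collapse labels to their representatives and compact them to 0,1,2,...
    std::vector<int> result(polygons.size());
    std::vector<int> compact(polygons.size(), -1);
    int next = 0;
    for (std::size_t p = 0; p < polygons.size(); ++p) {
        int r = table.find(static_cast<int>(p));
        if (compact[r] == -1) compact[r] = next++;
        result[p] = compact[r];
    }
    return result;
}
```

Note that this splits the mesh into connected pieces only; grouping faces into planar or cylindrical surfaces additionally needs a per-face normal or curvature criterion when deciding whether two labels may merge.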

You should have a look at the PCL library, it has all these features and much more: http://pointclouds.org/

Related

When loading a model with Assimp how can I get the Vertices that correspond to my materials (C++)

So what I want to do is render each material one at a time, which means that each material will have its own vertices. Is there some kind of function within Assimp that, when I process a mesh, will tell me which material the vertices belong to?
Of course I would put the position, the normal, and the texCoord in the vertex, and I also need the indices.
There is no query to get these meshes implemented in Asset-Importer-Lib right now. But you can write this easily by yourself:
Import your model
Check whether there are any meshes loaded
Loop over all meshes stored in aiScene
Group meshes that share the same material index
Loop over all vertices of the list of meshes.
I wrote a blog-post about that: Batch-Rendering for Assimp-Scene
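The grouping step can be sketched as follows. `MeshStub` is my stand-in for `aiMesh` (which exposes `mMaterialIndex`); real code would iterate over `aiScene::mMeshes` instead:

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <vector>

// Stand-in for aiMesh: only the material index matters for the grouping.
// In real Assimp code this would be scene->mMeshes[i]->mMaterialIndex.
struct MeshStub {
    unsigned int materialIndex;
};

// Bucket mesh indices by material index so each material can be drawn in
// one batch, with one combined vertex/index buffer per material.
std::map<unsigned int, std::vector<std::size_t>>
groupByMaterial(const std::vector<MeshStub>& meshes) {
    std::map<unsigned int, std::vector<std::size_t>> buckets;
    for (std::size_t i = 0; i < meshes.size(); ++i)
        buckets[meshes[i].materialIndex].push_back(i);
    return buckets;
}
```

With the buckets in hand, the per-material render loop just concatenates the vertices of each bucket's meshes.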

How to map the node identities of my resulting surface mesh generated from Poisson_surface_reconstruction_3 into my starting point sets?

Thanks for reading this question. My title is basically what I'm trying to achieve. I generated a surface mesh using Poisson_surface_reconstruction_3 (CGAL), but I can't figure out how to map the node identities of the resulting surface mesh back onto my starting point set.
The output of my poisson surface generation is produced by the following lines:
CGAL::facets_in_complex_2_to_triangle_mesh(c2t3, output_mesh);
out << output_mesh;
In my output file, there are some x y z coordinates, followed by one set of 3 integers per line, which I think indicate which nodes form a Delaunay triangle. The problem is that the output points do not correspond to my initial point set, since no x y z value matches any of my original points. Yet I'm trying to figure out which points form Delaunay triangles in my original point set.
Could someone suggest me how can I do this in cgal?
Many thanks.
The Poisson reconstruction algorithm consists in meshing an implicit function that approximately fits your input points. In practice, this means that your input points will not belong to the set of points of the output surface, and won't even lie exactly on triangles of the output surface. However, they should not be too far from the output surface (except in really sparsely sampled parts).
What you can do to locate your input points relative to the output surface is to use the function closest_point_and_primitive() from the AABB-tree class.
Here is an example of how to build the tree from a mesh.
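If pulling in the AABB tree feels heavy for a small model, the mapping can be approximated by a brute-force nearest-vertex search. This is a sketch with my own names, and it matches against vertices rather than the closest point on a triangle, which is what the AABB tree would give you:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Point3 { double x, y, z; };

double squaredDistance(const Point3& a, const Point3& b) {
    const double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// For each input point, return the index of the nearest output-mesh vertex.
// Brute force, O(inputs * vertices); the AABB tree answers the stronger
// query (closest point on a triangle, plus that triangle) in logarithmic time.
std::vector<std::size_t> nearestVertexIndices(
        const std::vector<Point3>& inputs,
        const std::vector<Point3>& meshVertices) {
    std::vector<std::size_t> result;
    result.reserve(inputs.size());
    for (const Point3& q : inputs) {
        std::size_t best = 0;
        double bestD = squaredDistance(q, meshVertices[0]);
        for (std::size_t i = 1; i < meshVertices.size(); ++i) {
            const double d = squaredDistance(q, meshVertices[i]);
            if (d < bestD) { bestD = d; best = i; }
        }
        result.push_back(best);
    }
    return result;
}
```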

How to depict multidimensional vectors on a two-dimensional plot?

I have a set of vectors in a multidimensional space (possibly several thousand dimensions). In this space, I can calculate the distance between 2 vectors (as the cosine of the angle between them, if it matters). What I want is to visualize these vectors while preserving the distances. That is, if vector a is closer to vector b than to vector c in the multidimensional space, it must also be closer to it on the 2-dimensional plot. Is there any kind of diagram that can clearly depict this?
I don't think so. Imagine any two-dimensional picture of a tetrahedron: there is no way of depicting its four vertices in two dimensions with equal distances from each other. So you will have a hard time depicting more than three n-dimensional vectors in 2 dimensions while conserving their mutual distances.
(But right now I can't think of a rigorous proof.)
Update:
OK, second idea, maybe it's naive: if you can find clusters of closely associated objects/texts, calculate the center or mean vector of each cluster. That reduces the problem space: first find a 2D arrangement of the clusters that preserves their relative distances, then insert the original vectors, accounting only for their relative distances within a cluster and their distances to the centers of the two or three closest clusters.
This approach will be OK for a large number of vectors, but it will not be accurate, in that there will always be somewhat similar vectors ending up in distant places.
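The distance-preserving layout itself can be approximated numerically. Here is a small sketch (my own, not a library routine) that gradient-descends on the embedding stress, i.e. the squared mismatch between the drawn 2D distances and the target distances:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct P2 { double x, y; };

// Embedding stress: squared mismatch between current 2D distances and the
// target (high-dimensional) distances, summed over all pairs.
double stress(const std::vector<P2>& pts,
              const std::vector<std::vector<double>>& target) {
    double s = 0.0;
    for (std::size_t i = 0; i < pts.size(); ++i)
        for (std::size_t j = i + 1; j < pts.size(); ++j) {
            double e = std::hypot(pts[i].x - pts[j].x, pts[i].y - pts[j].y)
                       - target[i][j];
            s += e * e;
        }
    return s;
}

// One gradient-descent sweep: move each point along the lines to the others,
// pulling when the drawn distance is too long and pushing when too short.
void relax(std::vector<P2>& pts,
           const std::vector<std::vector<double>>& target, double step) {
    for (std::size_t i = 0; i < pts.size(); ++i)
        for (std::size_t j = 0; j < pts.size(); ++j) {
            if (i == j) continue;
            double dx = pts[i].x - pts[j].x;
            double dy = pts[i].y - pts[j].y;
            double d = std::hypot(dx, dy);
            if (d < 1e-12) continue;            // coincident points: no direction
            double g = (d - target[i][j]) / d;  // signed correction factor
            pts[i].x -= step * g * dx;
            pts[i].y -= step * g * dy;
        }
}
```

Running `relax` a few hundred times on four points whose target distances are all 1 (a regular tetrahedron) lowers the stress but never reaches zero, which is exactly the obstruction described above.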

Solving for optimal alignment of 3d polygonal mesh

I'm trying to implement a geometry templating engine. One of the parts is taking a prototypical polygonal mesh and aligning an instantiation with some points in the larger object.
So, the problem is this: given 3d point positions for some (perhaps all) of the verts in a polygonal mesh, find a scaled rotation that minimizes the difference between the transformed verts and the given point positions. I also have a centerpoint that can remain fixed, if that helps. The correspondence between the verts and the 3d locations is fixed.
I'm thinking this could be done by solving for the coefficients of a transformation matrix, but I'm a little unsure how to build the system to solve.
An example of this is a cube. The prototype would be the unit cube, centered at the origin, with vert indices:
4----5
|\   |\
| 6----7
| |  | |
0-|--1 |
 \|   \|
  2----3
An example of the vert locations to fit:
v0: 1.243,2.163,-3.426
v1: 4.190,-0.408,-0.485
v2: -1.974,-1.525,-3.426
v3: 0.974,-4.096,-0.485
v5: 1.974,1.525,3.426
v7: -1.243,-2.163,3.426
So, given that prototype and those points, how do I find the single scale factor, and the rotation about x, y, and z that will minimize the distance between the verts and those positions? It would be best for the method to be generalizable to an arbitrary mesh, not just a cube.
Assuming you have all points and their correspondences, you can fine-tune your match by solving the least squares problem:
minimize Norm(T*V-M)
where T is the transformation matrix you are looking for, V holds the prototype vertices, and M holds the given target positions. Norm refers to the Frobenius norm. M and V are 3xN matrices where each column is a 3-vector: a prototype vertex in V and its corresponding target vertex in M. T is a 3x3 transformation matrix. The transformation matrix that minimizes the mean squared error is then M*transpose(V)*inverse(V*transpose(V)). The resulting matrix will in general not be orthogonal (and you wanted one with no shear), so you can solve an orthogonal Procrustes problem to find the nearest scaled rotation with the SVD.
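As a sanity check of the unconstrained least-squares step, here is a stdlib-only sketch. The name `fitLinearMap` is my own; for minimize Norm(T*V-M) it computes the closed form T = (M Vᵀ)(V Vᵀ)⁻¹, with the 3x3 normal-equation inverse done via an explicit cofactor formula:

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<Vec3, 3>;   // row-major 3x3

// Unconstrained least-squares solution of minimize ||T*V - M||_F:
// T = (M V^T) (V V^T)^{-1}, assembled from paired point lists where
// V[k] is a prototype vertex and M[k] its target position.
Mat3 fitLinearMap(const std::vector<Vec3>& V, const std::vector<Vec3>& M) {
    Mat3 VVt{}, MVt{};
    for (std::size_t k = 0; k < V.size(); ++k)
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j) {
                VVt[i][j] += V[k][i] * V[k][j];
                MVt[i][j] += M[k][i] * V[k][j];
            }
    // Explicit cofactor inverse of VVt (assumed invertible: the prototype
    // points must span all three dimensions).
    const double det =
          VVt[0][0] * (VVt[1][1] * VVt[2][2] - VVt[1][2] * VVt[2][1])
        - VVt[0][1] * (VVt[1][0] * VVt[2][2] - VVt[1][2] * VVt[2][0])
        + VVt[0][2] * (VVt[1][0] * VVt[2][1] - VVt[1][1] * VVt[2][0]);
    Mat3 inv{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            const int i1 = (i + 1) % 3, i2 = (i + 2) % 3;
            const int j1 = (j + 1) % 3, j2 = (j + 2) % 3;
            // cyclic cofactor; [j][i] indexing transposes into the adjugate
            inv[j][i] = (VVt[i1][j1] * VVt[i2][j2] - VVt[i1][j2] * VVt[i2][j1]) / det;
        }
    Mat3 T{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                T[i][j] += MVt[i][k] * inv[k][j];
    return T;
}
```

The recovered T would then be projected onto the nearest scaled rotation (the orthogonal Procrustes step) with an SVD, e.g. from Eigen, since the standard library provides none.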
Now, if you don't know which given points will correspond to which prototype points, the problem you want to solve is called surface registration. This is an active field of research. See for example this paper, which also covers rigid registration, which is what you're after.
If you want to create a mesh on an arbitrary 3D geometry, this is not the way it's typically done.
You should look at octree mesh generation techniques. You'll have better success if you work with a true 3D primitive, which means tetrahedra instead of cubes.
If your geometry is a 3D body, all you'll have is a surface description to start with. Determining "optimal" interior points isn't meaningful, because you don't have any. You'll want them to be arranged in such a way that the tetrahedra inside aren't too distorted, but that's the best you'll be able to do.

Retrieve index of nearest surface-points returned from CGAL's surface_neighbor_coordinates_3

I (relatively new to CGAL and not a C++ expert) am trying to extract the indices of the nearest-neighbor 3D points returned from CGAL's surface_neighbor_coordinates_3 (which searches a 2D mesh composed of 3D points to find natural neighbors of a provided query point) in this CGAL example. In other examples (3D interpolation with 3D meshes), I have been able to do this by adding info to vertex handles in the triangulation data structure. In the linked example, I simply wish to retrieve the indices of the returned coords with respect to where those points reside within the input list of points.
The other call-options for surface_neighbor_coordinates_3 seem to suggest this may be possible by passing-in an existing triangulation (with perhaps its info-augmented triangulation-data-structure). However, I'm not sure how to specify the info-augmented Delaunay_triangulation_3 for the case of a 2D mesh consisting of 3D points. I'm experimenting with it (using advancing-front triangulations to 2D-mesh my 3D points) but would like to know if there's some easier way to use the native capabilities of surface_neighbor_coordinates_3 if one only seeks to also have an info field associated with the returned points.
Any help would be greatly appreciated ... this has stumped me for a week.