Are there any options or requirements that make the CGAL::Polygon_mesh_slicer output come out in the correct order?
E.g. I've loaded a mesh and converted it to a CGAL::Surface_mesh, then used the slicer on that mesh to get a list of polylines. The problem is that these polylines are not in any order, CW or CCW.
To be more precise, the output polylines are not consecutive.
Here is a slice of a cube from the top.
o---1--o
|      |
2      3
|      |
o--4---o
I would expect the output to be like 1->2->4->3 or the reverse.
But I got more or less 1->4->2->3.
As stated here, "Each resulting polyline P is oriented such that for two consecutive points p and q in P, the normal vector of the face(s) containing the segment pq, the vector pq, and the orthogonal vector of plane is a direct orthogonal basis. The normal vector of each face is chosen to point on the side of the face where its sequence of vertices is seen counterclockwise."
So the orientation of the polylines depends on the orientation of the plane and of the mesh faces.
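For reference, a minimal slicing setup looks roughly like the sketch below (adapted from the documented usage; the mesh loading is omitted, and negating the plane coefficients reverses the orientation of the output polylines):

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Polygon_mesh_slicer.h>
#include <vector>
#include <list>
#include <iterator>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3> Mesh;
typedef std::vector<K::Point_3> Polyline;

void slice(const Mesh& mesh)
{
  CGAL::Polygon_mesh_slicer<Mesh, K> slicer(mesh);
  std::list<Polyline> polylines;
  // slice with the plane z = 0.5; using K::Plane_3(0, 0, -1, 0.5)
  // instead would flip the orientation of the resulting polylines
  slicer(K::Plane_3(0, 0, 1, -0.5), std::back_inserter(polylines));
}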
When applying CropHull on a 3D concave hull, I get missing points and extra points in the filtered result.
Context
I am using a sphere hull and the point cloud of an arm as input to the algorithm. setCropOutside is set to true, as I want to get the points of the cloud that are inside the hull.
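For reference, the filter is set up roughly as in the sketch below (a minimal CropHull configuration; the variable names are placeholders for the data loaded from the files listed further down):

#include <pcl/point_types.h>
#include <pcl/filters/crop_hull.h>
#include <vector>

void crop_example(
    pcl::PointCloud<pcl::PointXYZ>::Ptr hull_cloud,   // hull vertices (from polygon.ply)
    const std::vector<pcl::Vertices>& hull_polygons,  // hull faces
    pcl::PointCloud<pcl::PointXYZ>::Ptr input_cloud)  // the arm point cloud
{
  pcl::CropHull<pcl::PointXYZ> crop;
  crop.setHullCloud(hull_cloud);
  crop.setHullIndices(hull_polygons);
  crop.setDim(3);             // the hull encloses a 3D volume
  crop.setCropOutside(true);  // keep the points inside the hull
  crop.setInputCloud(input_cloud);
  pcl::PointCloud<pcl::PointXYZ> filtered;
  crop.filter(filtered);
}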
Current Behavior
The resulting filtered cloud has missing points and extra points. In the picture below, the hull made from the sphere point cloud is shown together with the arm point cloud, with points outside the hull in green and points inside the hull in red.
Here is a link to the files used to reproduce the issue:
polygon.ply contains the hull information
test_hand_downsampled_1000_2.obj is the input point cloud we want to test
expected_filtered_result.obj is the result we wish to obtain (acquired through manual cropping with Blender).
filtered_result.ply contains the result of the algorithm
I compared the obtained result with the desired result made by cropping the arm point cloud with Blender. As can be seen in the image above, some points that are not supposed to be part of the intersection are still considered to be inside the hull.
Numerically:
237 points are expected in the intersection (see expected_filtered_result.obj)
236 points are common
1 point is missing (237-236)
there are 42 extra points
hence 278 points are in the resulting point cloud (see filtered_result.ply)
NB: Please note there is one minor difference between the expected and obtained points of the clouds after 3 decimals. For example, the point with coordinates (-118.585701, -163.048050, 138.409943) in the expected result is (-118.5857, -163.04803, 138.40997) in the obtained result.
Thanks for reading this question. My title is basically what I'm trying to achieve. I did a Poisson surface mesh generation using Poisson_surface_reconstruction_3 (CGAL). I can't figure out how to map the node identities of my resulting surface mesh back to my starting point set.
The output of my Poisson surface generation is produced by the following lines:
CGAL::facets_in_complex_2_to_triangle_mesh(c2t3, output_mesh);
out << output_mesh;
In my output file, there are some x y z coordinates, followed by a set of 3 integers on each line; I think they indicate which nodes form a Delaunay triangle. The problem is that the output points do not correspond to my initial point set, since no x y z value matches any of my original points. Yet I'm trying to figure out which points form Delaunay triangles in my original point set.
Could someone suggest how I can do this in CGAL?
Many thanks.
The Poisson reconstruction algorithm consists in meshing an implicit function that approximately fits your input points. In practice, this means that your input points will not belong to the set of points of the output surface, and won't even lie exactly on the triangles of the output surface. However, they should not be too far from the output surface (except in regions where your sampling is really sparse).
What you can do to locate your input points relative to the output surface is to use the function closest_point_and_primitive() from the AABB tree class.
Here is an example of how to build the tree from a mesh.
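A minimal sketch of that, assuming the reconstructed surface is in output_mesh (a Surface_mesh):

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/AABB_tree.h>
#include <CGAL/AABB_traits.h>
#include <CGAL/AABB_face_graph_triangle_primitive.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3> Mesh;
typedef CGAL::AABB_face_graph_triangle_primitive<Mesh> Primitive;
typedef CGAL::AABB_traits<K, Primitive> Traits;
typedef CGAL::AABB_tree<Traits> Tree;

void locate(const Mesh& output_mesh, const K::Point_3& query)
{
  // build the tree over the faces of the reconstructed mesh
  Tree tree(faces(output_mesh).first, faces(output_mesh).second, output_mesh);
  tree.accelerate_distance_queries();
  // for each of your input points, get the closest surface point
  // and the face (primitive) containing it
  Tree::Point_and_primitive_id pp = tree.closest_point_and_primitive(query);
  // pp.first is the closest point, pp.second the face descriptor
}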
I'm currently implementing a mesh extrusion algorithm for planar shapes, let's assume for a rectangle.
When I extrude this rectangle I create four new sides (resulting in 8 new triangles) and a new bottom for the 3D shape.
This works fine when I duplicate all vertices, so that my final cube has 24 of them. But I'd now like to avoid these extra vertices, so that I have only 8. Unfortunately, in this case I don't know how to calculate the UV coordinates, and I keep getting wrong results, as shown in the image below.
The correct result would look like this (with duplicated faces):
My first question is: Is it possible to generate a good UV map with just 8 vertices (and hence 8 UV coordinates) for a cube?
Second is: How? :)
Thanks for your help.
My friend and I are making a 3d rendering engine from scratch in our VB class at school, but I am not sure how the math to form the cube would work. Given six variables:
rotX
rotY
rotZ
lenX
lenY
lenZ
Which represent the rotation about x, y, z and the length along x, y, z respectively, what would be the formulas to make the cube? I know that all I have to do is calculate three segments, and from those segments just create three parallelograms, so I just need the math to find what the three segments are.
Thanks!
There are 2 basic 3D object representations, and for both your data is insufficient.
surface representation
objects are a set of surface polygons/vertices/...
for a cube it's a set of 8 points + the triangles/quads for its 6 faces
analytical representation
objects are a set of equations describing the object
for a cube it's the intersection of 6 planes
I think you are using option 1, so what you need is:
- position
- orientation
- size
Usually an axis-aligned cube looks like this:
const double a=1.0;     // half of the cube edge length
double pnt[8][3]=       // the 8 cube corner points
    {
    +a,-a,+a,
    +a,+a,+a,
    -a,+a,+a,
    -a,-a,+a,
    +a,-a,-a,
    +a,+a,-a,
    -a,+a,-a,
    -a,-a,-a
    };
int tab[24]=            // 6 quad faces, 4 point indices each
    {
    0,1,2,3, // 1st quad
    7,6,5,4, // 2nd quad
    4,5,1,0, // 3rd quad
    5,6,2,1, // 4th quad
    6,7,3,2, // 5th quad
    7,4,0,3  // 6th quad
    };
Well, for size and orientation you can apply a transformation matrix,
or directly recompute the points from the direction vectors.
So you need to remember position (a point), orientation (3 vectors), and size (a scalar).
All of the above can be stored in a single 4x4 transformation matrix,
but if you want the vectors, then the points will look like this:
P(+a,-a,+a) -> +a*I -a*J +a*K
where I,J,K are the orientation vectors
a is cube size
P(+a,-a,+a) is original axis aligned point in table above
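As a hypothetical sketch of that recomputation (plain C-style code, matching the table above):

// recompute one cube corner from position O, orientation vectors
// I,J,K and the axis-aligned corner p from the pnt[] table above
struct vec3 { double x,y,z; };

vec3 transform_point(vec3 p, vec3 O, vec3 I, vec3 J, vec3 K)
    {
    vec3 q;
    // q = O + p.x*I + p.y*J + p.z*K
    q.x = O.x + p.x*I.x + p.y*J.x + p.z*K.x;
    q.y = O.y + p.x*I.y + p.y*J.y + p.z*K.y;
    q.z = O.z + p.x*I.z + p.y*J.z + p.z*K.z;
    return q;
    }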
Option 2 is trickier to implement, and unless you really need it (e.g. for ray-tracing renderers), forget about it.
I'm trying to implement a geometry templating engine. One of the parts is taking a prototypical polygonal mesh and aligning an instantiation with some points in the larger object.
So, the problem is this: given 3d point positions for some (perhaps all) of the verts in a polygonal mesh, find a scaled rotation that minimizes the difference between the transformed verts and the given point positions. I also have a centerpoint that can remain fixed, if that helps. The correspondence between the verts and the 3d locations is fixed.
I'm thinking this could be done by solving for the coefficients of a transformation matrix, but I'm a little unsure how to build the system to solve.
An example of this is a cube. The prototype would be the unit cube, centered at the origin, with vert indices:
4----5
|\    \
| 6----7
| |    |
0 | 1  |
 \|    |
  2----3
An example of the vert locations to fit:
v0: 1.243,2.163,-3.426
v1: 4.190,-0.408,-0.485
v2: -1.974,-1.525,-3.426
v3: 0.974,-4.096,-0.485
v5: 1.974,1.525,3.426
v7: -1.243,-2.163,3.426
So, given that prototype and those points, how do I find the single scale factor, and the rotation about x, y, and z that will minimize the distance between the verts and those positions? It would be best for the method to be generalizable to an arbitrary mesh, not just a cube.
Assuming you have all points and their correspondences, you can fine-tune your match by solving the least squares problem:
minimize Norm(T*V-M)
where T is the transformation matrix you are looking for, V are the vertices to fit, and M are the vertices of the prototype. Norm refers to the Frobenius norm. M and V are 3xN matrices where each column is a 3-vector of a vertex of the prototype and the corresponding vertex in the fitting vertex set. T is a 3x3 transformation matrix. The transformation matrix that minimizes the mean squared error is then T = M*transpose(V)*inverse(V*transpose(V)) (the normal-equations solution). The resulting matrix will in general not be orthogonal (and you wanted one with no shear), so you can solve a matrix Procrustes problem to find the nearest orthogonal matrix with the SVD.
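A rough sketch of both steps, assuming Eigen is available (the uniform-scale estimate is just one reasonable choice):

#include <Eigen/Dense>

// V and M are 3xN matrices of corresponding vertices (one per column),
// as described above
Eigen::Matrix3d fit_transform(const Eigen::Matrix3Xd& V,
                              const Eigen::Matrix3Xd& M)
{
  // least-squares minimizer of Norm(T*V - M)
  Eigen::Matrix3d T = M * V.transpose() * (V * V.transpose()).inverse();
  // Procrustes step: nearest rotation via the SVD of T
  Eigen::JacobiSVD<Eigen::Matrix3d> svd(T, Eigen::ComputeFullU | Eigen::ComputeFullV);
  Eigen::Matrix3d R = svd.matrixU() * svd.matrixV().transpose();
  double s = svd.singularValues().mean(); // single uniform scale estimate
  return s * R;
}

If det(R) comes out negative, the SVD produced a reflection; flip the sign of the last column of U before forming R to force a proper rotation.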
Now, if you don't know which given points will correspond to which prototype points, the problem you want to solve is called surface registration. This is an active field of research. See for example this paper, which also covers rigid registration, which is what you're after.
If you want to create a mesh on an arbitrary 3D geometry, this is not the way it's typically done.
You should look at octree mesh generation techniques. You'll have better success if you work with a true 3D primitive, which means tetrahedra instead of cubes.
If your geometry is a 3D body, all you'll have is a surface description to start with. Determining "optimal" interior points isn't meaningful, because you don't have any. You'll want them to be arranged in such a way that the tetrahedra inside aren't too distorted, but that's the best you'll be able to do.