I have a 100,000-facet STL file that throws a CGAL error when I try to render it.
I narrowed it down to the first two facets causing the error.
Any hint as to how the combination of these two facets is a problem for CGAL, and how I could fix it? (I use the Python stl library to filter my facets out of a larger STL file that I use as a base, so I might be able to modify the point coordinates.)
I get the error on F6 (render) when I try to difference() these two triangles from a cube.
ERROR: CGAL error in CGAL_Nef_polyhedron3(): CGAL ERROR: assertion violation
difference() {
    import("outerShell_ml.stl");
    translate([-11.4,-9.8,-6.5])
        #cube(1);
}
solid outerShell_ml
  facet normal -0.685994 -0.514497 -0.514495
    outer loop
      vertex -11.440000 -9.540000 -6.400000
      vertex -11.259999 -9.780000 -6.400000
      vertex -11.320000 -9.719999 -6.380000
    endloop
  endfacet
  facet normal -0.665639 -0.554699 -0.499233
    outer loop
      vertex -11.440000 -9.540000 -6.400000
      vertex -11.620000 -9.360000 -6.360000
      vertex -11.740000 -9.179999 -6.400000
    endloop
  endfacet
endsolid outerShell_ml
I'm trying to use the libigl function uniformly_sample_two_manifold, but it does not work as described, and there is no documentation whatsoever to help me understand why.
I have a 3D mesh represented as Eigen::MatrixXd V with vertices and Eigen::MatrixXi F with faces. I'm attempting to use the function as follows:
igl::uniformly_sample_two_manifold(V, F, 20, 1.0, Out);
... giving the function my vertices, faces, and asking for 20 uniform samples in the Out structure. I set the "push factor" to 1 since I don't think I have any use for it now.
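For context, a self-contained version of what I'm doing looks roughly like this (the mesh-loading boilerplate with igl::readOFF is only illustrative; the actual call is exactly the one above):

#include <igl/readOFF.h>
#include <igl/uniformly_sample_two_manifold.h>
#include <Eigen/Core>

int main()
{
    Eigen::MatrixXd V;   // #V x 3 vertex positions
    Eigen::MatrixXi F;   // #F x 3 triangle indices
    igl::readOFF("mesh.off", V, F);   // illustrative: any loader works

    Eigen::MatrixXd Out; // receives the sampled positions
    // 20 samples, push factor 1.0, V passed as the "weight space" positions
    igl::uniformly_sample_two_manifold(V, F, 20, 1.0, Out);
    return 0;
}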
I noticed that the function specifically asks for "positions of mesh in weight space", which I presumed means the vertex positions. If I use it like this, however, the function returns the expected number of vertices, but they are clustered very close to each other and are by no means uniformly distributed across the mesh.
Does anyone happen to know how to correctly use this function? Or would anyone know what this "weight space" means?
Thanks!
I am using CGAL for a project of mine. I create an AABB tree out of a mesh file (.off): first I extract a Polyhedron from my mesh, then I get the triangles, and finally I insert them into the tree.
All of this went smoothly.
The problem arises when I use the do_intersect function of the tree.
Given two points, A and B, I would like to know if the ray or segment connecting the two intersects with something.
Most of the time this works properly, but sometimes I get a floating point error. 'Sometimes' means with a very small subset of points.
Is there a reason for this?
Here is a snippet of my code:
glm::vec3 pointA, pointB; // assume these are filled with some values;
                          // the components of the points are floats
Point_3 pA(pointA.x, pointA.y, pointA.z);
Point_3 pB(pointB.x, pointB.y, pointB.z);
Segment segment_query(pA, pB);
my_tree->do_intersect(segment_query); // here sometimes crashes
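The typedefs behind my_tree are not shown above; for reference, a self-contained setup in the style of the CGAL AABB-tree triangle example (the kernel and primitive choices here are assumptions, not necessarily what my real code uses) would look like:

#include <CGAL/Simple_cartesian.h>
#include <CGAL/AABB_tree.h>
#include <CGAL/AABB_traits.h>
#include <CGAL/AABB_triangle_primitive.h>
#include <list>

typedef CGAL::Simple_cartesian<double>              K;
typedef K::Point_3                                  Point_3;
typedef K::Segment_3                                Segment;
typedef K::Triangle_3                               Triangle;
typedef std::list<Triangle>::iterator               Iterator;
typedef CGAL::AABB_triangle_primitive<K, Iterator>  Primitive;
typedef CGAL::AABB_traits<K, Primitive>             Traits;
typedef CGAL::AABB_tree<Traits>                     Tree;

int main()
{
    std::list<Triangle> triangles;   // in my code these come from the Polyhedron facets
    triangles.push_back(Triangle(Point_3(0,0,0), Point_3(1,0,0), Point_3(0,1,0)));

    Tree tree(triangles.begin(), triangles.end());

    Segment segment_query(Point_3(0.2, 0.2, -1.0), Point_3(0.2, 0.2, 1.0));
    bool hit = tree.do_intersect(segment_query);   // true for this segment
    return hit ? 0 : 1;
}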
Before anybody asks: pointB is a specific point on the surface of the mesh, and it does not give any problems with most points, so I would assume the error is not related to it. pointA, instead, is somewhere in space.
Thank you for your answers.
I'm trying to triangulate a given coronary artery model (please refer to the image and file).
At first, I tried to triangulate them using 3D constrained Delaunay triangulation in the TetGen engine, but it appears that TetGen didn't always generate them. I tried about 40 models with closed boundaries, but only half of them were successful.
As an alternative, I found that CGAL 3D mesh generation will generate a similar mesh based on Delaunay triangulation (of course, it's different from 3D constrained Delaunay triangulation).
I also tested it on the same 40 models used in the TetGen test, but it appears that only a quarter of them were successful. It is strange that even fewer models were processed successfully than in the TetGen test.
Is there any condition for CGAL mesh generation other than the closed manifold condition (no boundary & manifold)? Here is the code I've used in my test case; it is almost the same as the example code from the CGAL website.
// Create input polyhedron
Polyhedron polyhedron;
std::ifstream input(fileName.str());
input >> polyhedron;
// Create domain
Mesh_domain domain(polyhedron);
// Mesh criteria (no cell_size set)
Mesh_criteria criteria(facet_angle = 25, facet_size = 0.15, facet_distance = 0.008,
cell_radius_edge_ratio = 3);
// Mesh generation
C3t3 c3t3 = CGAL::make_mesh_3<C3t3>(domain, criteria, no_perturb(), no_exude());
findMinAndMax();
cout << "Polygon finish: " << c3t3.number_of_cells_in_complex() << endl;
Here is one of the CA models used in the test case.
The image of CA model
Also, I want to preserve the given model's triangles in the generated mesh, as in constrained Delaunay triangulation. Is there any way to generate the mesh without specific criteria?
Please let me know if you want to know more.
The problem is that the mesh generator does not construct a good enough initial point set. The current strategy is to shoot rays in random directions from the center of the bounding box of your object. Alternatively, one might either take a random sample of points on the surface, or shoot random rays from points on the skeleton. I've put a hacky solution for you on github. The first argument is your mesh; the second is the grid cell size used to sample points on the mesh.
Given two open Polyhedron_3 meshes made of triangles in CGAL, I want to cut the first one with the second one. That is, all intersecting triangle facets from poly2 should cut facets from poly1 and create new edges (and faces) in poly1 following the path of intersection. In the end, I need the list of edges/half-edges that are part of the intersection path.
I'm using a typedef of the CGAL::Simple_cartesian kernel.
While this looks like a boolean operation, it's not, because there is no 'inside' or 'outside' for open meshes. The way I tried to implement it is:
build an AABB tree for mesh 1 (the one to be cut)
find the faces of mesh1 cut by the first triangle of mesh 2
compute the intersection info using CGAL: this returns the intersection description, but with some problems:
CGAL will sometimes return 'wrong' intersections (for example, segments of zero length)
the first mesh is not cut in the operation. Given the intersection description, I have to cut the triangles myself (unless there's a function I've overlooked in CGAL to cut a face/triangle given an intersection). This is not a trivial problem, as there are lots of corner cases
repeat with the next face of poly2
My algorithm sort of works, but I sometimes have problems: the path is not closed, small numerical accuracy issues, and it's not very fast.
So here's (at last) my question: what would be the recommended way to implement such an operation robustly? Which kernel would you recommend?
Following sloriot's comment, I've spent some time playing with the code of the CGAL polyhedron demo. Indeed, it contains a corefinement plugin which does what I want. I've not finished integrating all the changes into my own code base, but I believe I can mark this question as answered.
All thanks to SLoriot for his suggestion!
Pascal
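For later readers: the corefinement functionality from the demo has since been exposed directly in CGAL's Polygon_mesh_processing package. A rough sketch of how the cut edges can be recovered with it, using Surface_mesh and the named parameters as I understand the current API (treat the exact names as assumptions to check against the documentation):

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Polygon_mesh_processing/corefinement.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3>                      Mesh;

void cut_and_collect_edges(Mesh& mesh1, Mesh& mesh2)
{
    // Per-edge flag, set to true for every edge created on the intersection
    // polyline between the two meshes.
    Mesh::Property_map<Mesh::Edge_index, bool> on_cut =
        mesh1.add_property_map<Mesh::Edge_index, bool>("e:on_cut", false).first;

    // Corefine the two meshes: both are refined in place so that their
    // intersection becomes a set of shared edges.
    CGAL::Polygon_mesh_processing::corefine(
        mesh1, mesh2,
        CGAL::parameters::edge_is_constrained_map(on_cut));

    // Edges with on_cut[e] == true now trace the intersection path in mesh1.
    for (Mesh::Edge_index e : mesh1.edges())
        if (on_cut[e]) { /* collect e, or its halfedges, here */ }
}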
Background:
This problem is related to 3D tracking of an object.
My system projects object samples from known parameters (X, Y, Z) in OpenGL and tries to match them with the image and depth information obtained from a Kinect sensor to infer the object's 3D position.
Problem:
Kinect depth -> processing -> value in millimeters
OpenGL -> depth buffer -> value between 0 and 1 (nonlinearly mapped between near and far)
Though I could recover the Z value from OpenGL using the method described at http://www.songho.ca/opengl/gl_projectionmatrix.html, this yields very slow performance.
I am sure this is a common problem, so I hope some clever solution exists.
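For reference, the per-sample conversion that page describes boils down to a single expression; a small helper along these lines (assuming the default glDepthRange(0, 1) and a standard perspective projection):

// Recover the positive eye-space distance from a depth-buffer value d in [0, 1],
// assuming glDepthRange(0, 1) and a standard perspective projection with
// clip planes zNear / zFar (the NDC step is folded in algebraically).
float linearEyeDepth(float d, float zNear, float zFar)
{
    return (zNear * zFar) / (zFar - d * (zFar - zNear));
}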
Question:
What is an efficient way to recover the eye-space Z coordinate from OpenGL?
Or is there any other way around the above problem?
Now my problem is that the Kinect depth is in mm
No, it is not. Kinect reports its depth as a value in an 11-bit range of arbitrary units. Only after some calibration has been applied can the depth value be interpreted as a physical unit. You're right insofar as OpenGL perspective projection depth values are nonlinear.
So if I understand you correctly, you want to emulate a Kinect by retrieving the contents of the depth buffer, right? Then the easiest solution would be to use a combination of a vertex and a fragment shader, in which the vertex shader passes the linear depth as an additional varying to the fragment shader, and the fragment shader then overwrites the fragment's depth value with the passed value. (You could also use an additional render target for this.)
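A minimal sketch of such a shader pair, embedded here as C++ string literals (the uniform name and the normalization by the far plane are illustrative choices, not the only way to do it):

// Vertex shader: forward the linear eye-space depth to the fragment stage.
const char* vertexShaderSrc = R"(
#version 120
varying float eyeDepth;
void main() {
    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;
    eyeDepth    = -eyePos.z;                    // positive distance from the eye
    gl_Position = gl_ProjectionMatrix * eyePos;
}
)";

// Fragment shader: overwrite the fragment's depth with a *linear* value.
const char* fragmentShaderSrc = R"(
#version 120
varying float eyeDepth;
uniform float zFar;                             // far-plane distance
void main() {
    gl_FragDepth = eyeDepth / zFar;             // linear depth in [0, 1]
    gl_FragColor = vec4(1.0);
}
)";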
Another method would be to use a 1D texture, projected onto the depth range of the scene, where the texture values encode the depth value. The desired value would then be in the color buffer.