Surface triangulation from constant depth planar contours - tessellation

I am trying to develop an application that converts a set of parallel input contours (polygons) with constant Z values into a tessellated surface mesh. The contours may also contain holes.
All the tessellation techniques I have found (GLU, Delaunay) cover 2D triangulation only.
Can anyone suggest a way forward?
Best Regards,
Praveen
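A common way to mesh parallel planar contours is the classic contour-stitching family of methods (Keppel 1975; Fuchs, Kedem and Uselton 1977): each pair of adjacent contours is connected by a band of triangles, and the surface is the union of the bands. Below is a minimal greedy sketch (standard C++ only, illustrative names): it assumes the two contours have consistent winding and roughly corresponding start points, and always advances along whichever contour yields the shorter diagonal. Holes and branching contours need a contour-matching step on top of this.

```cpp
#include <array>
#include <cmath>
#include <vector>

struct P3 { double x, y, z; };

static double dist(const P3& a, const P3& b) {
    return std::hypot(std::hypot(a.x - b.x, a.y - b.y), a.z - b.z);
}

// Greedy "shortest diagonal" stitching of two closed contours A and B
// (assumed roughly aligned: A[0] near B[0], same winding direction).
// Output triangles index A directly and B with an offset of A.size().
std::vector<std::array<int, 3>> stitch(const std::vector<P3>& A,
                                       const std::vector<P3>& B) {
    std::vector<std::array<int, 3>> tris;
    size_t na = A.size(), nb = B.size();
    size_t i = 0, j = 0, stepsA = 0, stepsB = 0;
    while (stepsA < na || stepsB < nb) {
        size_t i2 = (i + 1) % na, j2 = (j + 1) % nb;
        bool advanceA;
        if (stepsA == na)      advanceA = false;       // contour A exhausted
        else if (stepsB == nb) advanceA = true;        // contour B exhausted
        else advanceA = dist(A[i2], B[j]) < dist(A[i], B[j2]);
        if (advanceA) {
            tris.push_back({(int)i, (int)i2, (int)(na + j)});
            i = i2; ++stepsA;
        } else {
            tris.push_back({(int)i, (int)(na + j2), (int)(na + j)});
            j = j2; ++stepsB;
        }
    }
    return tris;
}
```

Each iteration advances exactly one contour, so a pair of contours with na and nb vertices always produces na + nb triangles.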

Related

2D shape detection in 3D pointcloud

I'm working on a project to detect the position and orientation of a paper plane.
To collect the data, I'm using an Intel Realsense D435, which gives me accurate, clean depth data to work with.
Now I arrived at the problem of detecting the 2D paper plane silhouette from the 3D point cloud data.
Here is an example of the data (I put the plane on a stick for testing, this will not be in the final implementation):
https://i.stack.imgur.com/EHaEr.gif
Basically, I have:
A 3D point cloud with points on the plane
A 2D shape of the plane
I would like to calculate what rotations/translations are needed to align the 2D shape to the 3D point cloud as accurately as possible.
I've searched online, but couldn't find a good way to do it. One way would be to use Iterative Closest Point (ICP) to first take a calibration pointcloud of the plane in a known orientation, and align it with the current orientation. But from what I've heard, ICP doesn't perform well if the pointclouds aren't kind of already closely aligned at the start.
Any help is appreciated! Coding language doesn't matter.
Does your 3D point cloud have outliers? How many, and in what way?
How did you use ICP exactly?
One way would be using ICP, with a hand-crafted initial guess using
pcl::transformPointCloud (*cloud_in, *cloud_icp, transformation_matrix);
(to mitigate the problem that ICP needs to be close to work.)
What you actually want is the plane model that describes the position and orientation of your point cloud, right?
A good estimator of the underlying plane can be found with PCL's sample-consensus (RANSAC) module, for example pcl::SACSegmentation with the SACMODEL_PLANE model.
You can then read off the computed model coefficients.
Now finding the correct transformation is just: How to calculate transformation matrix from one plane to another?
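The linked question boils down to a rotation that aligns the two plane normals, plus a translation between points on the planes. Here is a minimal sketch of the rotation part using Rodrigues' formula (standard C++ only; `rotationBetweenPlanes` is an illustrative name, and the degenerate antiparallel case n1 = -n2 is not handled):

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]};
}
static double dot(const Vec3& a, const Vec3& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Rotation matrix that maps unit normal n1 onto unit normal n2:
// R = I + [v]_x + [v]_x^2 / (1 + c), with v = n1 x n2 and c = n1 . n2.
Mat3 rotationBetweenPlanes(const Vec3& n1, const Vec3& n2) {
    Vec3 v = cross(n1, n2);
    double c = dot(n1, n2);
    double k = 1.0 / (1.0 + c);  // undefined when n1 == -n2
    Mat3 R;
    R[0] = {v[0]*v[0]*k + c,    v[0]*v[1]*k - v[2], v[0]*v[2]*k + v[1]};
    R[1] = {v[1]*v[0]*k + v[2], v[1]*v[1]*k + c,    v[1]*v[2]*k - v[0]};
    R[2] = {v[2]*v[0]*k - v[1], v[2]*v[1]*k + v[0], v[2]*v[2]*k + c};
    return R;
}
```

Applying R to n1 yields n2; the remaining translation is the offset between the plane anchor points (e.g. the centroids of the two point sets).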

Can a Gmsh mesh be made more structured without using the transfinite option?

For my thesis I'm using a finite element flow solver to simulate the flow through a flume. The flow solver can handle a 3D unstructured mesh of tetrahedra. However, the meshes I generate with Gmsh somehow seem to be too unstructured, which leads to unsolvable and very slow runs.
So far I've run simulations with both unstructured and structured meshes. Simulations with very coarse unstructured meshes go very well, but once I make the element size smaller, the flow solver only produces NaN values and doesn't run at all.
For the simulations with structured meshes I've used the transfinite technique to produce a very fine structured mesh. This mesh contains far more elements than the unstructured one, and the results are fine. However, in future runs I need to refine the mesh in certain areas, which doesn't seem possible with the transfinite volume technique in 3D.
Does anyone have any idea what could be going wrong in this case? Is there a way to improve the quality of a 3D Gmsh mesh? Can the structure of the mesh be improved somehow?
Thanks in advance!
Bart
I think the middle ground between a transfinite structured grid and a completely unstructured one would be a mesh of 8-node hexahedra. If your 3D case can be built from an extruded 2D case, you could try setting Mesh.Algorithm = 8 to get right triangles (rather than equilateral ones) and then use the Recombine Surface option to turn them into quads.
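As a sketch of that workflow, a minimal `.geo` file might look like the following (the unit-square geometry and layer count are illustrative, not taken from the question):

```
// Quad-dominant 2D mesh, then extrusion to hexahedra.
Mesh.Algorithm = 8;              // Frontal-Delaunay for quads: right-angled triangles

Point(1) = {0, 0, 0, 0.1};
Point(2) = {1, 0, 0, 0.1};
Point(3) = {1, 1, 0, 0.1};
Point(4) = {0, 1, 0, 0.1};
Line(1) = {1, 2}; Line(2) = {2, 3}; Line(3) = {3, 4}; Line(4) = {4, 1};
Line Loop(5) = {1, 2, 3, 4};
Plane Surface(6) = {5};

Recombine Surface {6};           // merge triangle pairs into quads

// Extrude the quad surface into layers of hexahedra:
Extrude {0, 0, 1} { Surface{6}; Layers{10}; Recombine; }
```

Local refinement can then be controlled through the characteristic lengths at the points (the fourth coordinate) or with mesh size fields, which transfinite volumes do not allow.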

Mesh to mesh. Mesh fitting (averaging). Mesh comparison.

I have three point clouds that represent one surface. I want to use these point clouds to construct a triangular mesh, then use the mesh to represent the surface. Each point cloud was collected in a different way, so their representations of the surface differ. For example, some sets represent the surface with a smaller "error". My questions are:
(1) What's the best way to evaluate such mesh-to-surface "error"?
(2) Is there a mature/reliable way to convert a point cloud to a triangular mesh? I found some software that does this, but most of it requires extensive manual adjustment.
(3) After the conversion I get three meshes. I want to use a fourth mesh, namely Mesh4, to "fit" the three meshes, and get an "average" mesh of the three. Then I can use this Mesh4 as a representation of the underlying surface. How can I do/call this "mesh to mesh" fitting? Is it a mature technique?
Thank you very much for your time!
Please find below my answers to points 1 and 2:
As a metric for mesh-to-surface error you can use the Hausdorff distance. For example, you could use libigl to compare two meshes.
To obtain a mesh from a point cloud, have a look at PCL.
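For reference, the symmetric Hausdorff distance between two sampled surfaces A and B is max(h(A,B), h(B,A)), where h(A,B) is the largest distance from a point of A to its nearest point of B. A brute-force sketch over mesh vertices follows (standard C++ only); note that libigl's implementation measures point-to-triangle distances, which is more accurate than this vertex-only approximation:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <limits>
#include <vector>

using Pt = std::array<double, 3>;

static double dist(const Pt& a, const Pt& b) {
    double dx = a[0]-b[0], dy = a[1]-b[1], dz = a[2]-b[2];
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

// One-sided Hausdorff: max over A of the distance to the nearest point of B.
static double oneSided(const std::vector<Pt>& A, const std::vector<Pt>& B) {
    double h = 0.0;
    for (const Pt& a : A) {
        double d = std::numeric_limits<double>::infinity();
        for (const Pt& b : B) d = std::min(d, dist(a, b));
        h = std::max(h, d);
    }
    return h;
}

// Symmetric Hausdorff distance between two point sets (brute force, O(|A||B|)).
double hausdorff(const std::vector<Pt>& A, const std::vector<Pt>& B) {
    return std::max(oneSided(A, B), oneSided(B, A));
}
```

For large clouds the inner loop would normally be replaced by a k-d tree nearest-neighbour query.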

Create mesh from point cloud on a 2D grid

I am using the new Kinect v2 and I am getting the depth map of the Kinect.
After I get the depth map I convert the depth data from Depth Space to Camera Space.
As far as I understand, this is done by converting the X,Y coordinates of each pixel to Camera Space and adding the depth value as the Z coordinate (the Kinect also reports depth in millimetres, so it is converted to metres).
Because of this, the point cloud is actually a 2D grid extended with the depth value. The visualization confirms this as well, since it is easy to see that the points are ordered in a grid due to the above conversion.
For visualization I am using OpenGL the old-fashioned way (glBegin(...) and glEnd()).
I want to create a mesh out of the points. I kind of managed to do it with GL_TRIANGLES, but then I have lot of duplicated vertices and edges. So I thought I should create a better triangulation with GL_TRIANGLE_STRIP, but I am stuck here because I can't come up with a good algorithm which can go through my 2D grid in a way that I can feed it to the GL_TRIANGLE_STRIP so it creates a nice surface.
The problems:
For each triangle's vertices I am checking the Z coordinate. If it exceeds a certain threshold I disregard the triangle => this might create holes in my 2D grid.
Some depth values are NaN, because the Kinect can't "see" anything there (for example, an object is too far or too close) => this also creates holes in the 2D grid.
Anybody has any suggestion what would be the best method to solve this issue?
If you're able to use the point cloud library, you could use the
class pcl::OrganizedFastMesh< PointInT >.
http://docs.pointclouds.org/trunk/classpcl_1_1_organized_fast_mesh.html
I use it to triangulate complete depth frames.
You can also try a Delaunay triangulation in 3D and look for the tetrahedra on the exterior. An easy algorithm is Bowyer-Watson with tetrahedra and circumspheres. CGAL is a good example.
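For the grid case described in the question, an indexed triangle list (drawn with glDrawElements) avoids the duplicated vertices of immediate-mode GL_TRIANGLES, and invalid cells can simply be skipped. A minimal sketch, assuming a row-major depth buffer; `maxJump` is an illustrative threshold for the depth-spread test:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Build an indexed triangle list over a W x H depth grid, skipping triangles
// with an invalid (NaN) corner or a depth spread larger than `maxJump`.
std::vector<uint32_t> gridTriangles(const std::vector<float>& depth,
                                    int W, int H, float maxJump) {
    std::vector<uint32_t> idx;
    auto ok = [&](int a, int b, int c) {
        float za = depth[a], zb = depth[b], zc = depth[c];
        if (std::isnan(za) || std::isnan(zb) || std::isnan(zc)) return false;
        float lo = std::min({za, zb, zc}), hi = std::max({za, zb, zc});
        return hi - lo <= maxJump;   // discard steep depth discontinuities
    };
    for (int y = 0; y + 1 < H; ++y)
        for (int x = 0; x + 1 < W; ++x) {
            // Corners of one grid cell: a-b on this row, c-d on the next.
            int a = y * W + x, b = a + 1, c = a + W, d = c + 1;
            if (ok(a, b, c)) { idx.push_back(a); idx.push_back(b); idx.push_back(c); }
            if (ok(b, d, c)) { idx.push_back(b); idx.push_back(d); idx.push_back(c); }
        }
    return idx;
}
```

Holes then fall out naturally: a rejected cell simply contributes no indices, and the vertex buffer (one vertex per grid point) is shared by all triangles.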

How to get a manifold mesh with an adaptive point distribution

Hi all,
I am trying to obtain a triangle mesh from a point cloud. The mesh is expected to be manifold, the triangles well shaped (close to equilateral), and the distribution of the points adaptive with respect to curvature.
There is valuable information provided on this website:
robust algorithm for surface reconstruction from 3D point cloud?
Mesh generation from points with x, y and z coordinates
I tried the Poisson reconstruction algorithm, but the triangles are not well shaped.
So I need to improve the quality of the triangles. I have learned that centroidal Voronoi tessellation (CVT) can achieve that, but I don't know whether the operation will introduce non-manifold vertices and self-intersections. I hope to get some information about this from you.
The mesh from the following post looks pretty good.
How to fill polygon with points regularly?
The Delaunay refinement algorithm is used. Can the Delaunay refinement algorithm be applied to a triangle mesh directly? Or do I first need to compute a Delaunay triangulation of the mesh's point cloud, and then use that triangulation to perform the refinement?
Thanks.
Regards
Jogging
I created the image in the mentioned post: You can insert all points into a Delaunay triangulation and then create a Zone object (area) consisting of these triangles. Then you call refine(pZone,...) to get a quality mesh. Other options are to create the Zone from constraint edges or as the result of a boolean operation. However, this library is made for 2D and 2.5D. The 3D version will not be released before 2014.
Do you know the ball-pivoting approach?