I'm trying to use Polygonal Surface Reconstruction (PSR) with a building point cloud to create simplified building models.
I ran first tests with this CGAL code example and got promising initial results.
As an example, I used this point cloud with correctly oriented vertex normals and got the following result from PSR. Some faces are clearly inverted (dark faces with normals pointing inside the watertight mesh, and therefore not visible).
I was wondering if there is a way to fix this face orientation error. I've noticed the orientation functions for polygon meshes, but I don't really know how to apply them to the resulting PSR surface mesh. As far as the logic is concerned, making the normals point outwards shouldn't be too complicated, I guess.
Thanks in advance for any help
You can use the function reverse_face_orientations from the Polygon Mesh Processing package.
Note that this package has several functions that can help you correct/modify your mesh.
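For reference, here is a minimal sketch of that call (assuming the PSR output was stored in a CGAL::Surface_mesh named mesh):

    #include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
    #include <CGAL/Surface_mesh.h>
    #include <CGAL/Polygon_mesh_processing/orientation.h>

    typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
    typedef CGAL::Surface_mesh<Kernel::Point_3>                  Mesh;

    Mesh mesh; // filled by Polygonal Surface Reconstruction

    // Flip the orientation of every face, so inward-pointing
    // normals end up pointing outwards.
    CGAL::Polygon_mesh_processing::reverse_face_orientations(mesh);

Since the PSR output is watertight, CGAL::Polygon_mesh_processing::orient_to_bound_a_volume() may also be worth a look; it reorients a closed triangle mesh so that it properly bounds a volume.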
I'm working on a project to detect the position and orientation of a paper plane.
To collect the data, I'm using an Intel Realsense D435, which gives me accurate, clean depth data to work with.
Now I've arrived at the problem of detecting the 2D paper plane silhouette in the 3D point cloud data.
Here is an example of the data (I put the plane on a stick for testing, this will not be in the final implementation):
https://i.stack.imgur.com/EHaEr.gif
Basically, I have:
A 3D point cloud with points on the plane
A 2D shape of the plane
I would like to calculate what rotations/translations are needed to align the 2D shape to the 3D point cloud as accurately as possible.
I've searched online but couldn't find a good way to do it. One option would be Iterative Closest Point (ICP): first take a calibration point cloud of the plane in a known orientation, then align it with the current one. But from what I've heard, ICP doesn't perform well unless the point clouds are already roughly aligned at the start.
Any help is appreciated! Coding language doesn't matter.
Does your 3D point cloud have outliers? How many, and of what kind?
How did you use ICP exactly?
One way would be to use ICP with a hand-crafted initial guess, applied via

    pcl::transformPointCloud(*cloud_in, *cloud_icp, transformation_matrix);

(this mitigates the problem that ICP needs a close initial alignment to work).
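For illustration, a minimal sketch of that two-step idea (cloud_in, cloud_reference and the identity initial guess are assumptions, not code from the question):

    #include <pcl/point_types.h>
    #include <pcl/common/transforms.h>
    #include <pcl/registration/icp.h>

    // Hand-crafted initial guess, e.g. from a rough pose estimate.
    Eigen::Matrix4f transformation_matrix = Eigen::Matrix4f::Identity();

    // Pre-align the input cloud so ICP starts close to the solution.
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_icp(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::transformPointCloud(*cloud_in, *cloud_icp, transformation_matrix);

    // Refine against the calibration cloud.
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(cloud_icp);
    icp.setInputTarget(cloud_reference);
    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned);

    // The full pose is the ICP refinement on top of the initial guess:
    Eigen::Matrix4f pose = icp.getFinalTransformation() * transformation_matrix;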
What you actually want is the plane model that describes the position and orientation of your point cloud, right?
A good estimator of the underlying plane can be found with PCL's RANSAC sample consensus: a pcl::RandomSampleConsensus run on a pcl::SampleConsensusModelPlane.
You can then get the computed model coefficients.
Now finding the correct transformation is just: How to calculate transformation matrix from one plane to another?
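A minimal sketch of the plane fit (assuming cloud is a pcl::PointCloud<pcl::PointXYZ>::Ptr holding the plane points):

    #include <pcl/point_types.h>
    #include <pcl/sample_consensus/ransac.h>
    #include <pcl/sample_consensus/sac_model_plane.h>

    // Fit a plane model to the cloud with RANSAC.
    pcl::SampleConsensusModelPlane<pcl::PointXYZ>::Ptr
        model(new pcl::SampleConsensusModelPlane<pcl::PointXYZ>(cloud));
    pcl::RandomSampleConsensus<pcl::PointXYZ> ransac(model);
    ransac.setDistanceThreshold(0.01); // inlier threshold, tune to your noise
    ransac.computeModel();

    // Coefficients (a, b, c, d) of ax + by + cz + d = 0;
    // (a, b, c) is the plane normal, i.e. the orientation you are after.
    Eigen::VectorXf coefficients;
    ransac.getModelCoefficients(coefficients);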
I've been fiddling my way through Vulkan and have tried out some basic diffuse lighting, which only takes the surface normals into account. On the side of the model facing the light, things look fine -
On the opposite side of the model, though, part of the model is shaded as if it were illuminated, even though it shouldn't be -
I know this happens because I'm only considering the surface normals: the shader doesn't care where a vertex is as long as its normal points towards the light. But how do I fix it? I feel like I need some kind of depth test to figure out whether a part of the model should be lit or not. How would I go about doing this, if that is the case? And what should I be doing otherwise?
Sounds like you want to implement shadows.
A standard way is shadow mapping. You render the scene from the light's point of view and keep only the depth buffer. You then pass that depth buffer as a texture to the fragment shader, sample it based on where the fragment is relative to the light, and compare the sampled depth with the fragment's distance to the light.
There are various caveats with this technique, however. The most common one is shadow acne, where quantization error leads to fragments shadowing themselves, resulting in speckled lighting; you can fix that by adding a small bias to the depth. The next is peter panning, where the bias you just added lets light bleed through where a thin wall meets a floor; you fix that by not making walls thin enough for the bias to pass through them.
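To make the comparison concrete, here is a CPU-side sketch of the per-fragment test (the real version lives in your fragment shader; the names, the texel lookup and the bias value are all illustrative assumptions):

    #include <Eigen/Dense>

    // Returns true if worldPos is in shadow, given the square depth map
    // rendered from the light and the light's view-projection matrix.
    bool inShadow(const Eigen::Vector3f& worldPos,
                  const Eigen::Matrix4f& lightViewProj,
                  const float* shadowMap, int size)
    {
        // Transform the point into the light's clip space.
        Eigen::Vector4f p = lightViewProj * worldPos.homogeneous();
        p /= p.w();

        // Map x/y from NDC [-1, 1] to texture coordinates [0, 1].
        float u = p.x() * 0.5f + 0.5f;
        float v = p.y() * 0.5f + 0.5f;

        // Depth the light "saw" at this texel.
        float storedDepth = shadowMap[int(v * (size - 1)) * size
                                      + int(u * (size - 1))];

        // Small bias against shadow acne (self-shadowing).
        const float bias = 0.005f;
        return p.z() - bias > storedDepth;
    }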
I would like to generate a visually appealing surface reconstruction from the point clouds.
I am using the Point Cloud Library. I tried creating a mesh using the Poisson reconstruction method, but later found that it gives a watertight reconstruction.
For example: in my case I have a point cloud of a room.
Using the code at http://justpaste.it/code1, I was able to get a reconstruction like this:
(source: pcl-users.org)
The above picture shows a surface covering the top view. This was visualized in MeshLab.
Later, when I switch to the points display in the MeshLab GUI, it looks like this.
(source: pcl-users.org)
But in the second picture there are also points on the surface (not clearly visible in the attached picture).
Can you help me create a model that has nothing covering the top and just has the inside structure?
Any other suggestions to improve the reconstruction quality?
The point cloud of the room and generated ply file can be downloaded from https://dl.dropboxusercontent.com/u/95042389/temp_pcd_ply_files.tar.bz2
One solution that works for me is obtaining a convex/concave hull of your point cloud. You can then use this hull to filter/crop your mesh after Poisson reconstruction. If you use PCL, try ConvexHull or ConcaveHull together with CropHull and compare the results. Hope this solves your issue; it did for me.
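A minimal sketch of that pipeline in PCL (cloud is assumed to be the raw room scan and mesh_vertices the vertex cloud of the Poisson mesh; faces referencing removed vertices still have to be dropped afterwards):

    #include <pcl/point_types.h>
    #include <pcl/surface/concave_hull.h>
    #include <pcl/filters/crop_hull.h>

    // Build a concave hull around the original scan.
    pcl::PointCloud<pcl::PointXYZ>::Ptr hull_points(new pcl::PointCloud<pcl::PointXYZ>);
    std::vector<pcl::Vertices> hull_polygons;
    pcl::ConcaveHull<pcl::PointXYZ> hull;
    hull.setInputCloud(cloud);
    hull.setAlpha(0.1); // hull "tightness", tune to the scale of the room
    hull.reconstruct(*hull_points, hull_polygons);

    // Keep only the mesh vertices that fall inside the hull.
    pcl::PointCloud<pcl::PointXYZ>::Ptr cropped(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::CropHull<pcl::PointXYZ> crop;
    crop.setHullCloud(hull_points);
    crop.setHullIndices(hull_polygons);
    crop.setDim(3);
    crop.setInputCloud(mesh_vertices);
    crop.filter(*cropped);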
As far as my experience goes (meshing caves), meshing with Poisson will result in a watertight model/mesh, which is why your model was covered entirely. I only deal with meshes in MeshLab, but I am guessing it is the same thing. What I did try is the Ball-Pivoting meshing algorithm in MeshLab, which results in a non-watertight model. Maybe that is what you are looking for.
I need to generate a tetrahedral (volume) mesh of a thin-walled object. Think of objects like a bottle or a plastic bowl, which are mostly hollow. The volumetric mesh is needed for an FEM simulation. A surface mesh of the object's outside surface is available from measurement, using e.g. octomap or KinectFusion, so the vertex spacing is relatively regular. The inner surface of the object can be calculated from the outside surface by moving all points inwards, since the wall thickness is known.
So far, I have considered the following approaches:
Create a 3D Delaunay triangulation (which would destroy the existing surface meshes) and then remove all tetrahedra which are not between the two original surfaces. For this check, it might be necessary to create an implicit surface representation of the two surfaces.
Create a 3D Delaunay triangulation and remove tetrahedra which are "inside" (in the hollow space) or "outside" (of the outer surface) using alpha shapes.
Close the outside and inside meshes and load them into tetgen as the outside hull and as a hole respectively.
These approaches seem a bit inelegant to me, and they still have some pitfalls. I would probably need several libraries/tools for them. For 1 and 2, tetgen or another FEM meshing tool would probably still be required to create well-conditioned tetrahedra. Does anyone have a more straightforward solution? I guess this should also be a common problem in 3D printing.
Concerning tools/libraries, I have looked into PCL, MeshLab and tetgen so far. They all seem to do only part of the job. Ideally, I would like to use only open-source libraries and avoid tools which require manual intervention.
One way is to:
create a triangular mesh of the surface points,
extrude (move) that surface inwards by the wall thickness, which produces a volume (triangular prism) mesh of the wall,
split each prism into three tetrahedra, as in the index sketch below.
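Here is a sketch of the standard index pattern for the prism split (the vertex numbering convention is an assumption):

    // Split one triangular prism into three tetrahedra.
    // Bottom triangle: b0, b1, b2; top triangle: t0, t1, t2 (ti above bi).
    // Caveat: adjacent prisms must split their shared quad faces along
    // matching diagonals, or the resulting mesh will not be conforming.
    struct Tet { int v[4]; };

    void splitPrism(int b0, int b1, int b2,
                    int t0, int t1, int t2, Tet out[3])
    {
        out[0] = { { b0, b1, b2, t0 } };
        out[1] = { { b1, b2, t0, t1 } };
        out[2] = { { b2, t0, t1, t2 } };
    }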
The problem I see is aspect ratio.
A single layer of tetrahedra will not reproduce shell or bending behavior very well. A single element through the thickness will already require a large mesh. Putting more than one will likely break the bank in order to keep aspect ratios and angles acceptable.
I'd prefer brick or thick shell elements to tetrahedra in this case. I think the modeling will be easier and the behavior will be more faithful to the physics.
I have a set of points in 3D that lie on a surface and I also have the normals at every point.
I would like to generate a surface triangulation from this information. In addition, I could tell the algorithm which points lie on the boundary if that is needed.
So, I have quite a bit of information:
* points
* normals
* boundary
How do I triangulate a surface with this information using vtk?
A full surface reconstruction algorithm feels like using a bomb for this problem, since I have all this information I would like to use. The information comes from a simulation, so I know the surface exists and that it is quite smooth.
I would like the answer to be cast in terms of which vtk function to use and, if available (and that would be great), examples using that function.
Thank you so much in advance.
You can use the vtkSurfaceReconstructionFilter to create a surface from a set of 3D points.
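For example, a minimal sketch of the usual pipeline (points is assumed to be a vtkPolyData holding your 3D points; note this filter ignores the normals you already have):

    #include <vtkSmartPointer.h>
    #include <vtkPolyData.h>
    #include <vtkSurfaceReconstructionFilter.h>
    #include <vtkContourFilter.h>
    #include <vtkReverseSense.h>

    // Estimate a signed distance field from the unorganized points.
    auto recon = vtkSmartPointer<vtkSurfaceReconstructionFilter>::New();
    recon->SetInputData(points);

    // Extract the zero level set as the surface.
    auto contour = vtkSmartPointer<vtkContourFilter>::New();
    contour->SetInputConnection(recon->GetOutputPort());
    contour->SetValue(0, 0.0);

    // The extracted surface's normals can come out flipped; fix them.
    auto reverse = vtkSmartPointer<vtkReverseSense>::New();
    reverse->SetInputConnection(contour->GetOutputPort());
    reverse->ReverseCellsOn();
    reverse->ReverseNormalsOn();
    reverse->Update();
    vtkPolyData* surface = reverse->GetOutput();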
You could try the Point Cloud Library (PCL).
Just the 3D points would be good enough. Since you know that your surface is smooth, you can perform a Delaunay triangulation of the points (vtkDelaunay3D) and apply a subdivision filter for smoothing (vtkButterflySubdivisionFilter).
Delaunay3D triangulation
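A minimal sketch of that suggestion (points is assumed to be a vtkPolyData with your 3D points; note that vtkDelaunay3D produces a tetrahedral grid whose outer surface is the convex hull, so this fits smooth, convex-ish surfaces best):

    #include <vtkSmartPointer.h>
    #include <vtkPolyData.h>
    #include <vtkDelaunay3D.h>
    #include <vtkDataSetSurfaceFilter.h>
    #include <vtkButterflySubdivisionFilter.h>

    // Tetrahedralize the point set.
    auto delaunay = vtkSmartPointer<vtkDelaunay3D>::New();
    delaunay->SetInputData(points);

    // Extract the outer triangle surface of the tetrahedral grid.
    auto surface = vtkSmartPointer<vtkDataSetSurfaceFilter>::New();
    surface->SetInputConnection(delaunay->GetOutputPort());

    // Smooth the triangulation with interpolating subdivision.
    auto subdivide = vtkSmartPointer<vtkButterflySubdivisionFilter>::New();
    subdivide->SetInputConnection(surface->GetOutputPort());
    subdivide->SetNumberOfSubdivisions(2);
    subdivide->Update();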