Help! I'm trying to create a mesh from a point cloud (created via on-site laser-scanning), and Meshlab is giving me difficulty.
I'm able to clean up and subsample the raw point cloud in CloudCompare, and have been trying to create a mesh in Meshlab. I've assigned normals (Filters > Normals... > Compute Normals...), which seems to work.
I then used the Screened Poisson filter to create a mesh (Filters > Remeshing... > Surface Reconstruction: Screened Poisson), which produced a good result for about two-thirds of my point cloud. The remaining third didn't seem to be meshed at all, and the Bounding Boxes of the two layers (point cloud and mesh) are radically different, with the mesh cutting off a big chunk of the cloud.
Here's the point cloud I'm starting with:
Here's the cloud and mesh overlaid. You can clearly see the different Bounding Boxes.
And here's the mesh on its own. I have no idea why the mesh stopped where it did.
I tried to replicate the issue on a different point cloud, and produced a very similar result, albeit with a mesh that represents only about 1/5th of the point cloud this time:
Fresh attempt with a different point cloud.
Any advice on how I can avoid this?
Related
I'm trying to use Polygonal Surface Reconstruction with a building point cloud to create simplified building models.
I ran some first tests with this CGAL code example and got promising results.
As an example, I used this point cloud with vertex normals correctly oriented and got the following result from PSR. Some faces are clearly inverted (dark faces with normals pointing inside the watertight mesh and therefore not visible).
I was wondering if there is a way to fix this face-orientation error. I've noticed the orientation methods for polygon meshes, but I don't really know how to apply them to the resulting PSR surface mesh. Logically, making the normals point outwards shouldn't be too complicated, I guess.
Thanks in advance for any help
You can use the function reverse_face_orientations in the Polygon Mesh Processing package.
Note that this package has several functions that can help you to correct/modify your mesh.
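For illustration, here is a minimal sketch (my own, not from the CGAL examples) of how that could be applied, assuming the PSR output is stored in a CGAL::Surface_mesh:

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/boost/graph/helpers.h>
#include <CGAL/Polygon_mesh_processing/orientation.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
typedef CGAL::Surface_mesh<Kernel::Point_3>                 Mesh;

namespace PMP = CGAL::Polygon_mesh_processing;

// `mesh` is assumed to hold the result of Polygonal Surface Reconstruction.
void fix_orientation(Mesh& mesh)
{
  if (CGAL::is_closed(mesh)) {
    // For a closed (watertight) mesh, reorient the faces so that they bound
    // the enclosed volume, i.e. the normals point outwards.
    PMP::orient_to_bound_a_volume(mesh);
  } else {
    // Otherwise, simply flip the orientation of every face.
    PMP::reverse_face_orientations(mesh);
  }
}

orient_to_bound_a_volume is another helper from the same package that chooses the outward orientation for you on a closed mesh, which may be more convenient here than flipping all faces unconditionally.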
I am having problems meshing point clouds that were merged using the ICP algorithm.
Firstly, I have several point clouds of the same scene from different viewpoints, and I register them using ICP (Iterative Closest Point). Then I use software such as MeshLab and CloudCompare to mesh the registered point cloud. I found that the registered point cloud is layered: it behaves as if it has several layers in the depth direction. Although the offset is very small (I set a high accuracy in ICP), it affects the visual quality greatly.
Basically, I expected that by registering several views' point clouds with ICP I would get a denser and more complete point cloud containing more information than any single view's cloud, so the visual quality of the registered point cloud should be better than that of a single view. However, because of the layering effect, the mesh of the registered point cloud is actually worse than the mesh of a single view's point cloud.
I think this kind of issue must come up in KinectFusion or in other point cloud registration pipelines, but I don't know where to look for a solution.
So I would like to ask for any advice or thoughts on how to reduce this layering effect and improve the visual quality.
I am facing a problem in 3D reconstruction, since I am new to this field. I have depth maps (point clouds) from several different views, and I want to use them to reconstruct the scene, similar to the result of KinectFusion. Is there any paper or source code that addresses this problem, or any ideas on how to approach it?
PS: the point clouds are stored as files of (x, y, z) coordinates; you can check here to get the data.
Thank you very much.
As you have stated that you are new to this field, I shall attempt to keep this high level. Please do comment if there is something that is not clear.
The pipeline you refer to has three key stages:
Integration
Rendering
Pose Estimation
The Integration stage takes the unprojected points from a Depth Map (Kinect image) under the current pose and "integrates" them into a spatial data structure (a Voxel Volume such as a Signed Distance Function or a hierarchical structure like an Octree), often by maintaining per Voxel running averages.
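To make the "running averages" part concrete, here is a minimal sketch of a per-voxel TSDF update (my own illustration; the struct, field names and truncation handling are assumptions, not code from the papers listed below):

#include <algorithm>

// One cell of the voxel volume.
struct Voxel {
    float tsdf   = 1.0f;   // truncated signed distance to the surface, in [-1, 1]
    float weight = 0.0f;   // accumulated confidence of the estimate
};

// Fuse one new signed-distance observation (computed from the current depth
// map under the current pose) into a voxel by a weighted running average.
void integrate(Voxel& v, float observed_distance, float truncation,
               float observation_weight = 1.0f)
{
    // Truncate the observation to the narrow band around the surface.
    const float d = std::max(-1.0f, std::min(1.0f, observed_distance / truncation));

    // Weighted running average over all observations seen so far.
    v.tsdf   = (v.tsdf * v.weight + d * observation_weight)
             / (v.weight + observation_weight);
    v.weight += observation_weight;
}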
The Rendering stage takes the inverse pose for the current frame and produces an image of the visible parts of the model currently in view. For the common volumetric representations this is achieved by Raycasting. The output of this stage provides the points of the model to which the next live frame is registered (the next stage).
The Pose Estimation stage registers the previously extracted model points to those of the live frame. This is commonly achieved by the Iterative Closest Point algorithm.
With regards to pertinent literature, I would advise the following papers as a starting point.
KinectFusion: Real-Time Dense Surface Mapping and Tracking
Real-time 3D Reconstruction at Scale using Voxel Hashing
Very High Frame Rate Volumetric Integration of Depth Images on Mobile Devices
I would like to generate a visually appealing surface reconstruction from point clouds.
I am using the Point Cloud Library (PCL). I tried creating a mesh using the Poisson reconstruction method, but later found that it gives a watertight reconstruction.
For example: in my case I have a point cloud of a room.
Using the code at http://justpaste.it/code1, I was able to get a reconstruction like this:
(source: pcl-users.org)
The above picture has the surface which is covering the top view. This was visualized using MeshLab.
Then, in the MeshLab GUI, when I switch to the Points rendering mode, it looks like this:
(source: pcl-users.org)
But in the second picture there are points on the surface too (not clearly visible in the attached picture).
Can you help me create a model that has no points on the top and just has the inside structure?
Any other suggestions to improve the reconstruction quality?
The point cloud of the room and generated ply file can be downloaded from https://dl.dropboxusercontent.com/u/95042389/temp_pcd_ply_files.tar.bz2
One solution that works for me is obtaining a convex/concave hull of your point cloud. Then you can use this hull to filter/crop your mesh after Poisson reconstruction. If you use the PCL you can try ConvexHull or ConcaveHull together with CropHull and test the results. Hope this solves your issue, it did for me.
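As a rough sketch of that idea with PCL (the alpha value, the names, and the conversion of the Poisson mesh vertices back to a point cloud are placeholders you would adapt to your data):

#include <vector>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/surface/concave_hull.h>
#include <pcl/filters/crop_hull.h>

// `scan` is the original room scan; `poisson_vertices` holds the vertices of
// the Poisson mesh converted back to a point cloud.
pcl::PointCloud<pcl::PointXYZ>::Ptr
crop_to_hull(const pcl::PointCloud<pcl::PointXYZ>::Ptr& scan,
             const pcl::PointCloud<pcl::PointXYZ>::Ptr& poisson_vertices)
{
  // 1. Build a concave hull around the scanned points.
  pcl::PointCloud<pcl::PointXYZ>::Ptr hull_points(new pcl::PointCloud<pcl::PointXYZ>);
  std::vector<pcl::Vertices> hull_polygons;
  pcl::ConcaveHull<pcl::PointXYZ> hull;
  hull.setInputCloud(scan);
  hull.setAlpha(0.1);        // placeholder value; tune to the scale of the room
  hull.setDimension(3);
  hull.reconstruct(*hull_points, hull_polygons);

  // 2. Keep only the Poisson vertices that fall inside the hull.
  pcl::PointCloud<pcl::PointXYZ>::Ptr inside(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::CropHull<pcl::PointXYZ> crop;
  crop.setHullCloud(hull_points);
  crop.setHullIndices(hull_polygons);
  crop.setDim(3);
  crop.setInputCloud(poisson_vertices);
  crop.filter(*inside);
  return inside;
}

Note that this only filters vertices; faces of the Poisson mesh that reference removed vertices would still need to be dropped when rebuilding the mesh.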
In my experience (meshing caves), meshing with Poisson results in a watertight model/mesh, which is why your model was covered entirely. I only work with meshes in MeshLab, but I am guessing it is the same thing. What I did try is the Ball-Pivoting meshing algorithm in MeshLab, which results in a non-watertight model. Maybe that is what you are looking for.
I hope to find some hints on where to start with a problem I am dealing with.
I am using a Kinect sensor to capture 3D point clouds, and I have created a 3D object detector which is already working.
Here is my task:
Let's say I have point cloud 1. I detected an object in cloud 1 and I know the centroid position of my object (x1, y1, z1). Now I move my sensor along a path and capture new clouds (e.g. cloud 2). In cloud 2 I see the same object, but e.g. from the side, where the object detection does not work well.
I would like to transform the detected object from cloud 1 to cloud 2, so that I also get the centroid in cloud 2. It sounds like I need a transformation matrix (translation, rotation) to map points from cloud 1 to cloud 2.
Any ideas on how I could solve my problem?
Maybe ICP? Are there better solutions?
THX!
In general, this task is called registration. It relies on having a good estimate of which points in cloud 1 correspond to which points in cloud 2 (more specifically: given a point in cloud 1, which point in cloud 2 represents the same location on the detected object). There's a good overview in the PCL library documentation.
If you have such a correspondence, you're in luck and you can directly compute a rotation and translation as demonstrated here.
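With PCL, for instance, that direct computation could look roughly like this (a sketch; the clouds are assumed to contain matched points at the same indices):

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/transformation_estimation_svd.h>

// Point i in `src` is assumed to correspond to point i in `tgt`.
Eigen::Matrix4f
rigid_transform_from_correspondences(const pcl::PointCloud<pcl::PointXYZ>& src,
                                     const pcl::PointCloud<pcl::PointXYZ>& tgt)
{
  pcl::registration::TransformationEstimationSVD<pcl::PointXYZ, pcl::PointXYZ> est;
  Eigen::Matrix4f transform;
  est.estimateRigidTransformation(src, tgt, transform);
  // `transform` maps points expressed in cloud 1's frame into cloud 2's frame.
  return transform;
}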
If not, you'll need to estimate that correspondence. ICP does that for approximately aligned point clouds, but if your point clouds are not already fairly well aligned, you may want to start by estimating "key points" (such as book corners, distinct colors, etc.) in your point clouds, computing a rotation and translation as above, and then refining with ICP. As D.J.Duff mentioned, ICP works better in practice on point clouds that are already approximately aligned, because it estimates correspondences using one of two metrics: minimal point-to-point distance or minimal point-to-plane distance. According to Wikipedia, the latter works better in practice, but it involves estimating normals, which can be tricky. If the correspondences are far off, the transforms likely will be as well.
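For reference, a minimal ICP sketch with PCL (the names are placeholders and the clouds are assumed to be roughly pre-aligned), which also shows how the resulting transform can move the detected centroid from cloud 1 into cloud 2:

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/icp.h>

// Estimate the rigid transform taking `cloud1` into `cloud2`'s frame and use
// it to move the centroid detected in cloud 1 into cloud 2.
Eigen::Vector4f
transfer_centroid(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud1,
                  const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud2,
                  const Eigen::Vector4f& centroid_in_cloud1)  // (x1, y1, z1, 1)
{
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(cloud1);
  icp.setInputTarget(cloud2);

  pcl::PointCloud<pcl::PointXYZ> aligned;   // cloud1 moved into cloud2's frame
  icp.align(aligned);

  if (!icp.hasConverged())
    return centroid_in_cloud1;              // fall back to the original estimate

  // 4x4 rigid transform (rotation + translation) found by ICP.
  const Eigen::Matrix4f T = icp.getFinalTransformation();
  return T * centroid_in_cloud1;
}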
I think what you were asking about relates in particular to the Kinect sensor and the API Microsoft released for it.
If you are not planning to do reconstruction, you can look into the AlignPointClouds function in the Sensor Fusion namespace. This should take care of it automatically, using methods similar to the answer given by #pnhgiol.
On the other hand, if you are looking at doing reconstruction as well as point cloud transforms, the Reconstruction class is what you are looking for. All of this can be found here.