I'm working on a project to detect the position and orientation of a paper plane.
To collect the data, I'm using an Intel Realsense D435, which gives me accurate, clean depth data to work with.
Now I've arrived at the problem of detecting the 2D paper plane silhouette in the 3D point cloud data.
Here is an example of the data (I put the plane on a stick for testing, this will not be in the final implementation):
https://i.stack.imgur.com/EHaEr.gif
Basically, I have:
A 3D point cloud with points on the plane
A 2D shape of the plane
I would like to calculate what rotations/translations are needed to align the 2D shape to the 3D point cloud as accurately as possible.
I've searched online, but couldn't find a good way to do it. One approach would be Iterative Closest Point (ICP): first take a calibration point cloud of the plane in a known orientation, then align it with the current one. But from what I've heard, ICP doesn't perform well unless the point clouds are already roughly aligned at the start.
Any help is appreciated! Coding language doesn't matter.
Does your 3D point cloud have outliers? How many, and of what kind?
How did you use ICP exactly?
One way would be to use ICP with a hand-crafted initial guess, applied via
pcl::transformPointCloud (*cloud_in, *cloud_icp, transformation_matrix);
(this mitigates the problem that ICP needs a close initial alignment to work).
What you actually want is the plane model that describes the position and orientation of your point cloud, right?
A good estimate of the underlying plane model can be found with RANSAC: pcl::RandomSampleConsensus together with pcl::SampleConsensusModelPlane.
You can then get the computed model coefficients.
Finding the correct transformation is then just the question: how to calculate the transformation matrix from one plane to another?
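A minimal sketch of that RANSAC step with PCL (the function name, point type, and threshold value are illustrative assumptions, not from the original post):

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/sample_consensus/ransac.h>
#include <pcl/sample_consensus/sac_model_plane.h>

// Fit a plane ax + by + cz + d = 0 to the cloud and return (a, b, c, d).
Eigen::VectorXf fitPlaneRansac(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud)
{
    pcl::SampleConsensusModelPlane<pcl::PointXYZ>::Ptr model(
        new pcl::SampleConsensusModelPlane<pcl::PointXYZ>(cloud));
    pcl::RandomSampleConsensus<pcl::PointXYZ> ransac(model);
    ransac.setDistanceThreshold(0.005); // inlier distance in metres; tune for the D435
    ransac.computeModel();

    Eigen::VectorXf coefficients;       // (a, b, c) is the plane normal
    ransac.getModelCoefficients(coefficients);
    return coefficients;
}

The (a, b, c) part of the coefficients is the plane normal, which is exactly what the transformation question above needs.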
Suppose I have a 3D model that roughly resembles a cube or a cuboid, and I wanted to estimate a regular set of gridlines that lies on top of the 3D model.
Is there an efficient way of doing this?
A brute force way might be to first estimate the 8 corner points, then use the geodesic function in CGAL to link up the corner points to form the initial path lines. Recursively, take the halfway point in each path and find the geodesic path to the halfway point of the opposing path.
But that would take up too much computing time.
Is there a faster way to do this via texture mapping? I need to code this out, so I'd be thankful if a specific algorithm could be referred to, rather than software such as Blender, Maya, etc.
Thanks for any help!
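For the geodesic building block specifically, extracting one path between two picked vertices with CGAL's Surface_mesh_shortest_path might look roughly like this (an untested sketch; the mesh type and vertex handles are assumptions):

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Surface_mesh_shortest_path.h>
#include <iterator>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel   Kernel;
typedef CGAL::Surface_mesh<Kernel::Point_3>                   Mesh;
typedef CGAL::Surface_mesh_shortest_path_traits<Kernel, Mesh> Traits;

// Polyline of the geodesic between two vertices of the mesh.
std::vector<Kernel::Point_3> geodesic(const Mesh& mesh,
                                      Mesh::Vertex_index from,
                                      Mesh::Vertex_index to)
{
    CGAL::Surface_mesh_shortest_path<Traits> sp(mesh);
    sp.add_source_point(from); // builds the shortest-path sequence tree
    std::vector<Kernel::Point_3> polyline;
    sp.shortest_path_points_to_source_points(to, std::back_inserter(polyline));
    return polyline;
}

Each recursion level would then call this between halfway points of opposing paths.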
I'm trying to use Polygonal Surface Reconstruction with building point cloud to create simplified building models.
I did some first tests with this CGAL code example and got promising first results.
As an example, I used this point cloud with vertex normals correctly oriented and got the following result from PSR. Some faces are clearly inverted (dark faces with normals pointing inside the watertight mesh and therefore not visible).
I was wondering if there is a way to fix this face orientation error. I've noticed orientation methods on Polygon mesh, but I don't really know how to apply them to the resulting PSR surface mesh. As far as logic is concerned, making normals point outwards should not be too complicated, I guess.
Thanks in advance for any help
You can use the function reverse_face_orientations in the Polygon mesh processing package.
Note that this package has several functions that can help you to correct/modify your mesh.
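A minimal sketch of the call (assuming the PSR output is stored in a CGAL::Surface_mesh; orient_to_bound_a_volume from the same package is an alternative worth trying, since for a closed mesh it picks the outward orientation automatically):

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Polygon_mesh_processing/orientation.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
typedef CGAL::Surface_mesh<Kernel::Point_3>                 Mesh;

void fixOrientation(Mesh& mesh)
{
    // Flips the orientation of every face; useful when the whole
    // mesh came out inside-out.
    CGAL::Polygon_mesh_processing::reverse_face_orientations(mesh);

    // For a watertight mesh, this instead reorients it so that the
    // faces bound a volume, i.e. normals point outwards:
    // CGAL::Polygon_mesh_processing::orient_to_bound_a_volume(mesh);
}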
I am developing some computer vision algorithms for vehicle applications.
I am in front of a problem and some help would be appreciated.
Let's say we have a calibrated camera attached to a vehicle, which captures a frame of the road ahead of the vehicle:
Initial frame
We apply a first filter to keep only the road markers and return a binary image:
Filtered image
Once the road lanes are separated, we can approximate the lanes with linear expressions and detect the vanishing point:
Objective
But what I am looking to recover is the equation of the normal n in the image, without any prior knowledge of the rotation matrix and the translation vector. I assume, nevertheless, that L1, L2 and L3 lie on the same plane.
In 3D space the problem is quite simple. In the 2D image plane it is more complex, since the camera's projective transformation does not preserve angles. I am not able to find a way to figure out the equation of the normal.
Do you have any idea about how I could compute the normal?
Thanks,
Pm
No can do, you need a minimum of two independent vanishing points (i.e. vanishing points representing the images of the points at infinity of two different pencils of parallel lines).
If you have them, the answer is simple: express the image positions of said vanishing points in homogeneous coordinates and take their cross product; that gives the horizon line l, and Kᵀl is then equal (up to scale) to the normal vector of the 3D plane said pencils define, decomposed in camera coordinates.
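As a minimal sketch (Eigen-based; the function name and variables are illustrative):

#include <Eigen/Dense>

// Plane normal in camera coordinates from two vanishing points of
// coplanar line pencils, given the 3x3 calibration matrix K.
Eigen::Vector3d planeNormal(const Eigen::Vector3d& v1, // vanishing point 1, homogeneous pixels
                            const Eigen::Vector3d& v2, // vanishing point 2, homogeneous pixels
                            const Eigen::Matrix3d& K)
{
    Eigen::Vector3d horizon = v1.cross(v2);        // image of the plane's line at infinity
    return (K.transpose() * horizon).normalized(); // normal, up to sign
}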
Your information is insufficient, as the others have stated. If your data is coming from a video, a common way to get a road ground plane is to take two or more images, compute the associated homography, then decompose the homography matrix into the surface normal and relative camera motion. You can do the decomposition with OpenCV's decomposeHomographyMat method. You can compute the homography by associating four or more point correspondences using OpenCV's findHomography method. If it is hard to determine these correspondences, it is also possible to do it with a combination of point and line correspondences (see the linked paper); however, this is not implemented in OpenCV.
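A minimal sketch of that pipeline with OpenCV (the correspondences and K are assumed to be given; names are illustrative):

#include <opencv2/calib3d.hpp>
#include <vector>

// Candidate ground-plane normals from two views of the road.
std::vector<cv::Mat> groundPlaneNormals(const std::vector<cv::Point2f>& pts1,
                                        const std::vector<cv::Point2f>& pts2,
                                        const cv::Mat& K)
{
    // Homography mapping ground-plane points from image 1 to image 2.
    cv::Mat H = cv::findHomography(pts1, pts2, cv::RANSAC);

    // Up to four (R, t, n) candidates; the physically valid one still
    // has to be selected, e.g. by checking reference-point visibility.
    std::vector<cv::Mat> rotations, translations, normals;
    cv::decomposeHomographyMat(H, K, rotations, translations, normals);
    return normals;
}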
You do not have sufficient information in the example you provide.
If you are wondering "which way is up", one thing you might be able to do is detect the horizon line. If K is the calibration matrix, then Kᵀl will give you the plane normal in 3D relative to your camera. (The general equation for backprojecting a line l in the image to a plane E through the center of projection is E = Pᵀl, with a 3×4 projection matrix P.)
A better alternative might be to establish a homography to rectify the ground-plane. To do so, however, you need at least four non-collinear points with known coordinates - or four lines, no three of which may be parallel.
I am using the new Kinect v2 and I am getting the depth map of the Kinect.
After I get the depth map I convert the depth data from Depth Space to Camera Space.
As far as I understand, this is done by converting the X,Y coordinates of each pixel to Camera Space and adding the depth value as the Z coordinate (the Kinect reports depth in millimetres, so it is also converted to metres).
Because of this, the point cloud is actually a 2D grid extended with depth values. The visualization also confirms this, since it is easy to notice that the points are ordered in a grid due to the above conversion.
For visualization I am using OpenGL the old-fashioned way (glBegin(...) and glEnd()).
I want to create a mesh out of the points. I more or less managed to do it with GL_TRIANGLES, but then I have a lot of duplicated vertices and edges. So I thought I should create a better triangulation with GL_TRIANGLE_STRIP, but I am stuck here because I can't come up with a good algorithm that walks my 2D grid in a way I can feed to GL_TRIANGLE_STRIP so that it creates a nice surface (a simplified sketch of my current GL_TRIANGLES approach follows after the list below).
The problems:
For each triangle's vertices I check the Z coordinate. If it exceeds a certain threshold, I disregard the triangle => this might create holes in my 2D grid.
Some depth values are NaN, because the Kinect can't "see" anything there (for example, an object is too far away or too close) => this also creates holes in the 2D grid.
Does anybody have a suggestion for the best method to solve this issue?
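For reference, a simplified sketch of the plain GL_TRIANGLES version I described above (helper names are illustrative, not my actual code):

#include <GL/gl.h>
#include <cmath>

struct Vertex { float x, y, z; }; // one vertex per depth pixel, row-major on the W x H grid

// A triangle is kept only if all depths are valid (non-NaN) and the
// depth spread stays below the discontinuity threshold.
bool validTriangle(const Vertex& a, const Vertex& b, const Vertex& c, float maxDz)
{
    if (std::isnan(a.z) || std::isnan(b.z) || std::isnan(c.z)) return false;
    float lo = std::fmin(a.z, std::fmin(b.z, c.z));
    float hi = std::fmax(a.z, std::fmax(b.z, c.z));
    return hi - lo < maxDz;
}

void drawGridMesh(const Vertex* v, int W, int H, float maxDz)
{
    glBegin(GL_TRIANGLES);
    for (int r = 0; r + 1 < H; ++r) {
        for (int c = 0; c + 1 < W; ++c) {
            // Split each grid cell into two triangles.
            const Vertex& a = v[r * W + c];
            const Vertex& b = v[r * W + c + 1];
            const Vertex& d = v[(r + 1) * W + c];
            const Vertex& e = v[(r + 1) * W + c + 1];
            if (validTriangle(a, b, d, maxDz)) {
                glVertex3f(a.x, a.y, a.z);
                glVertex3f(b.x, b.y, b.z);
                glVertex3f(d.x, d.y, d.z);
            }
            if (validTriangle(b, e, d, maxDz)) {
                glVertex3f(b.x, b.y, b.z);
                glVertex3f(e.x, e.y, e.z);
                glVertex3f(d.x, d.y, d.z);
            }
        }
    }
    glEnd();
}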
If you're able to use the Point Cloud Library (PCL), you could use the class pcl::OrganizedFastMesh<PointInT>.
http://docs.pointclouds.org/trunk/classpcl_1_1_organized_fast_mesh.html
I use it to triangulate complete depth frames.
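A minimal usage sketch (assuming the depth frame has already been converted into an organized pcl::PointCloud; the parameter values are illustrative):

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/surface/organized_fast_mesh.h>
#include <pcl/PolygonMesh.h>

pcl::PolygonMesh triangulateDepthFrame(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud)
{
    // Requires an organized cloud (cloud->isOrganized() == true),
    // which a per-pixel depth frame naturally is.
    pcl::OrganizedFastMesh<pcl::PointXYZ> ofm;
    ofm.setInputCloud(cloud);
    ofm.setTriangulationType(
        pcl::OrganizedFastMesh<pcl::PointXYZ>::TRIANGLE_ADAPTIVE_CUT);
    ofm.setMaxEdgeLength(0.03f); // metres; drops triangles spanning depth jumps

    pcl::PolygonMesh mesh;
    ofm.reconstruct(mesh);
    return mesh;
}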
You can also try a Delaunay triangulation in 3D and look for the tetrahedra on the exterior. An easy algorithm is Bowyer-Watson with tetrahedra and circumspheres. CGAL is a good example.
I am looking for an algorithm that receives a 3d surface mesh (i.e comprised of 3d triangles that are a discretization of some manifold) and generates tetrahedra inside the mesh's volume.
I.e., I want the 3D equivalent of this 2D problem: given a closed curve, triangulate its interior.
I am sorry if this is unclear, it's the best way I could think of explaining it.
For the 2D case there's Triangle. For the 3D case I could find none.
pygalmesh (a project of mine based on CGAL) can do just that.
pygalmesh-volume-from-surface elephant.vtu out.vtk --cell-size 1.0 --odt
https://github.com/nschloe/pygalmesh/#volume-meshes-from-surface-meshes
I found GRUMMP, which seems to answer all the needs mentioned in the question, and more...
I haven't had any experience using GRUMMP, but as far as a 3D version of Triangle goes, there is TetGen. If you know the Triangle switches, it is built to resemble them. It also has fairly decent documentation and a Python wrapper (MeshPy) for both it and Triangle.
http://wias-berlin.de/software/tetgen/
http://mathema.tician.de/software/meshpy/