I executed the 3D alpha shape function with CGAL and I got unexpected results.
My input data was a set of 3D points (x, y, z) representing one building (a box) in a flat area, with some small noise in the coordinates. I expected to get as a result only the surface triangles that represent the building (walls and roof) and the ground.
But as a result I got triangles forming the convex hull of the points.
I tried changing the "optimal alpha value", but the result was the same.
Is there any filtering process or parameter that I can set to get the surface triangles only?
You need to find the tetrahedra on the surface of the shape first. In CGAL you can check each tetrahedron for whether it is connected to the infinite ("super") tetrahedron; those are the tetrahedra on the surface of the shape. Then apply alpha shapes and remove the edges exceeding alpha.
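If it is only the filtering step you are missing: CGAL's Alpha_shape_3 already classifies facets, so you can extract just the boundary triangles. Here is a minimal sketch along those lines (point loading omitted; type names follow the CGAL alpha-shape examples, and exact headers may vary between CGAL versions):

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Delaunay_triangulation_3.h>
#include <CGAL/Alpha_shape_3.h>
#include <CGAL/Alpha_shape_vertex_base_3.h>
#include <CGAL/Alpha_shape_cell_base_3.h>
#include <iterator>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Alpha_shape_vertex_base_3<K>                  Vb;
typedef CGAL::Alpha_shape_cell_base_3<K>                    Cb;
typedef CGAL::Triangulation_data_structure_3<Vb, Cb>        Tds;
typedef CGAL::Delaunay_triangulation_3<K, Tds>              Delaunay;
typedef CGAL::Alpha_shape_3<Delaunay>                       Alpha_shape_3;

int main()
{
    std::vector<K::Point_3> points; // fill with your (x, y, z) samples

    // Build the alpha shape in REGULARIZED mode, then pick the
    // smallest alpha for which the shape is one solid component.
    Alpha_shape_3 as(points.begin(), points.end(), 0,
                     Alpha_shape_3::REGULARIZED);
    Alpha_shape_3::Alpha_iterator opt = as.find_optimal_alpha(1);
    as.set_alpha(*opt);

    // Keep only the boundary triangles: facets classified REGULAR.
    std::vector<Alpha_shape_3::Facet> surface;
    as.get_alpha_shape_facets(std::back_inserter(surface),
                              Alpha_shape_3::REGULAR);
    return 0;
}

If the output still looks like the convex hull, alpha is probably much larger than the typical point spacing; as alpha grows, the alpha shape converges to the convex hull.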
My source data consists of a set of (x,y,z,e) samples. Visualized, the dots are the (x,y,z) samples in 3D space and the color reflects the e value. The (x,y,z) samples lie on a surface.
I want a kind of interpolation method that I can feed random (x,y,z) coordinates that are close to the surface, and that outputs an interpolated e value.
I tried the scipy LinearNDInterpolator. It works fine, but only for input (x,y,z) points that lie inside the convex hull of the surface. When the input is only slightly outside, the interpolator returns 'nan'.
I'm a bit out of ideas on how to solve this.
I can only think of iterating over each line in the grid to find the points closest to the random (x,y,z) input and doing linear interpolations from these points. But if I could somehow reconstruct the surface, that would be more accurate.
I have to draw a tube aligned along a sine wave in the (x,y) plane. The length and radius of the tube are known. I have read in the Wikipedia page https://en.wikipedia.org/wiki/Channel_surface that this surface is called a "pipe" surface and that its parametric equations in 3D space are given by
x(u,v) = c(u) + r*(e1(u)*cos(v) + e2(u)*sin(v))
where r is the radius of the pipe, u -> c(u) is the parametric equation of the curve, (e1(u), e2(u)) are two vectors forming a basis of the normal plane at c(u), and v is the parameter of the circle (running from 0 to 2*pi).
How can I apply this to a sine curve in the plane and plot the resulting surface with Scilab?
If the curve is defined by u -> c(u) = (u, sin(u), 0), then the vectors of the normal plane can be chosen as e1(u) = (cos(u), -1, 0)/sqrt(1+cos(u)^2) and e2(u) = (0, 0, 1). The following code draws a tube of radius 0.5 along u -> c(u) for u in [0, 2*pi]:
// parameter grid: u runs along the curve, v around the circular section
[u,v]=meshgrid(linspace(0,2*%pi,40),linspace(0,2*%pi,40));
r=0.5;                   // tube radius
cu=cos(u);
d=sqrt(1+cu.^2);         // norm of (cos(u),-1,0), normalizes e1(u)
clf
// c(u) + r*(e1(u)*cos(v) + e2(u)*sin(v)), written componentwise
mesh(u+r*cos(v).*cu./d, sin(u)-r*cos(v)./d, r*sin(v));
isoview("on")            // equal axis scaling so the tube is not distorted
Thanks for reading this question. My title is basically what I'm trying to achieve: I generated a surface mesh with Poisson_surface_reconstruction_3 (CGAL), and I can't figure out how to map the node identities of the resulting surface mesh back onto my starting point set.
The output of my poisson surface generation is produced by the following lines:
CGAL::facets_in_complex_2_to_triangle_mesh(c2t3, output_mesh);
out << output_mesh;
In my output file there are some x y z coordinates, followed by lines of 3 integers each, which I think indicate which nodes form a Delaunay triangle. The problem is that the output points do not correspond to my initial point set: no x y z values match any of my original points. Yet I'm trying to figure out which points in my original point set form Delaunay triangles.
Could someone suggest me how can I do this in cgal?
Many thanks.
The Poisson reconstruction algorithm consists in meshing an implicit function that approximately fits your input points. In practice, this means that your input points will not belong to the vertex set of the output surface, and won't even lie exactly on the triangles of the output surface. However, they should not be too far from the output surface (unless some parts of your sampling are really sparse).
What you can do to locate your input points relative to the output surface is to use the function closest_point_and_primitive() from the AABB tree class.
Here is an example of how to build the tree from a mesh.
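A minimal sketch along those lines (assuming the Poisson output is stored in a CGAL::Surface_mesh; the traits and primitive type names follow the CGAL AABB tree examples and may vary slightly between CGAL versions):

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/AABB_tree.h>
#include <CGAL/AABB_traits.h>
#include <CGAL/AABB_face_graph_triangle_primitive.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef K::Point_3                                          Point;
typedef CGAL::Surface_mesh<Point>                           Mesh;
typedef CGAL::AABB_face_graph_triangle_primitive<Mesh>      Primitive;
typedef CGAL::AABB_traits<K, Primitive>                     Traits;
typedef CGAL::AABB_tree<Traits>                             Tree;

int main()
{
    Mesh mesh; // fill with the Poisson reconstruction output

    // Build the tree over the faces of the output mesh.
    Tree tree(faces(mesh).first, faces(mesh).second, mesh);
    tree.accelerate_distance_queries();

    // For each original sample, query the nearest surface point
    // and the face it lies on.
    Point query(0.0, 0.0, 0.0); // substitute one of your input points
    Tree::Point_and_primitive_id pp = tree.closest_point_and_primitive(query);
    Point closest = pp.first;          // nearest point on the surface
    Mesh::Face_index face = pp.second; // face that contains it
    return 0;
}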
I am writing a program. I have, say, a grid of dots on a piece of paper. I fix one end and bend the paper toward the screen, giving me a trapezoidal shape from the camera's point of view. I have the (x,y) camera coordinates of each dot. Is there a simple way I can convert these (x,y) to real-life (x,y), which should give me a rectangle? I have the camera and real-world (x,y) of the original flat sheet of paper (pre-bend), if that helps.
I have looked at 3D Camera coordinates to world coordinates (change of basis?) and Transforming screen coordinates from security camera to real world coordinates.
Look up "homography". The transformation from a plane in 3D space to its image as captured by an ideal pinhole camera is a homography. It can be represented as a 3x3 matrix H that transforms the 3D coordinates X of points in the world to their corresponding homogeneous image coordinates x:
x = H * X
where X = [X, Y, 1]^T holds the homogeneous coordinates of the point in the world plane, and x = [u, v, w]^T is the image point in homogeneous coordinates (the pixel coordinates are u/w and v/w).
Given a minimum of 4 matches between world and image points (e.g. the corners of a rectangle) you can estimate the parameters of the matrix H. For details, look up "DLT algorithm". In OpenCV the routine to use is findHomography.
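As a sketch of that route in C++ (the corner and dot coordinates below are made-up placeholders; substitute your measured values):

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

int main()
{
    // Known real-world corner positions of the flat paper, e.g. in cm.
    std::vector<cv::Point2f> world = { {0.0f, 0.0f},   {21.0f, 0.0f},
                                       {21.0f, 29.7f}, {0.0f, 29.7f} };
    // The same four corners as measured in the camera image, in pixels.
    std::vector<cv::Point2f> image = { {102.0f, 87.0f},  {511.0f, 94.0f},
                                       {478.0f, 570.0f}, {120.0f, 540.0f} };

    // H maps image coordinates to world-plane coordinates.
    cv::Mat H = cv::findHomography(image, world);

    // Rectify all detected dots in one call.
    std::vector<cv::Point2f> dots = { {300.0f, 320.0f} };
    std::vector<cv::Point2f> rectified;
    cv::perspectiveTransform(dots, rectified, H);
    return 0;
}

Note that a single homography assumes the bent part of the paper is still planar; if the paper curves smoothly, you would need a per-region fit.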
I'm trying to implement a geometry templating engine. One of the parts is taking a prototypical polygonal mesh and aligning an instantiation with some points in the larger object.
So, the problem is this: given 3d point positions for some (perhaps all) of the verts in a polygonal mesh, find a scaled rotation that minimizes the difference between the transformed verts and the given point positions. I also have a centerpoint that can remain fixed, if that helps. The correspondence between the verts and the 3d locations is fixed.
I'm thinking this could be done by solving for the coefficients of a transformation matrix, but I'm a little unsure how to build the system to solve.
An example of this is a cube. The prototype would be the unit cube, centered at the origin, with vert indices:
4----5
|\    \
| 6----7
| |    |
0 | 1  |
 \|    |
  2----3
An example of the vert locations to fit:
v0: 1.243,2.163,-3.426
v1: 4.190,-0.408,-0.485
v2: -1.974,-1.525,-3.426
v3: 0.974,-4.096,-0.485
v5: 1.974,1.525,3.426
v7: -1.243,-2.163,3.426
So, given that prototype and those points, how do I find the single scale factor, and the rotation about x, y, and z that will minimize the distance between the verts and those positions? It would be best for the method to be generalizable to an arbitrary mesh, not just a cube.
Assuming you have all points and their correspondences, you can fine-tune your match by solving the least squares problem:
minimize Norm(T*V-M)
where T is the transformation matrix you are looking for, V are the vertices to fit, and M are the vertices of the prototype. Norm refers to the Frobenius norm. M and V are 3xN matrices where each column is the 3-vector of a vertex of the prototype and of the corresponding vertex in the fitting vertex set, and T is a 3x3 transformation matrix. The transformation matrix that minimizes the mean squared error is then T = M*transpose(V)*inverse(V*transpose(V)). The resulting matrix will in general not be a pure scaled rotation (it can contain shear, and you wanted one which has no shear), so you can solve a matrix Procrustes problem to find the nearest scaled rotation with the SVD.
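A minimal sketch of both steps (using Eigen, which is my own choice here; fitTransform is a hypothetical helper, and the vertices are assumed to be expressed relative to the fixed centerpoint):

#include <Eigen/Dense>

// Least-squares fit of T in T*V ~ M, projected onto the scaled rotations.
Eigen::Matrix3d fitTransform(const Eigen::Matrix3Xd& V,
                             const Eigen::Matrix3Xd& M)
{
    // Unconstrained least-squares solution: T = M V^T (V V^T)^-1.
    Eigen::Matrix3d T = M * V.transpose()
                          * (V * V.transpose()).inverse();

    // Procrustes step: nearest rotation to T via the SVD.
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(
        T, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d R = svd.matrixU() * svd.matrixV().transpose();
    if (R.determinant() < 0) {       // rule out reflections
        Eigen::Matrix3d D = Eigen::Matrix3d::Identity();
        D(2, 2) = -1.0;
        R = svd.matrixU() * D * svd.matrixV().transpose();
    }

    // Single uniform scale: the mean of the singular values.
    double s = svd.singularValues().mean();
    return s * R;
}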
Now, if you don't know which given points will correspond to which prototype points, the problem you want to solve is called surface registration. This is an active field of research. See for example this paper, which also covers rigid registration (the case you're after).
If you want to create a mesh on an arbitrary 3D geometry, this is not the way it's typically done.
You should look at octree mesh generation techniques. You'll have better success if you work with a true 3D primitive, which means tetrahedra instead of cubes.
If your geometry is a 3D body, all you'll have to start with is a surface description; there are no given interior points, so asking for "optimal" interior points isn't meaningful. You'll want them arranged so that the tetrahedra inside aren't too distorted, but that's the best you'll be able to do.