Assume I have several points in a 2D plane.
I now want to create a hull that covers all points. The shape of that hull should be weighted by the distance/closeness of the points, i.e., outliers must be connected to the rest of the points by a narrow link (see target state).
Figure 2 also shows a hull that covers all points; however, it does not narrow the link down to the three outliers.
I also do not want an alpha shape or convex hull, because those would not obfuscate the locations of my points: the locations can be read directly off the vertices.
Is there any area of algorithms/optimizations that covers this kind of problem?
Apologies if this is a simple question, I am quite new to CGAL and computational geometry in general.
I have a polygon defined in 3D space (i.e. a set of points in R^3 and edges, each connecting exactly two of the points). In fact, the polygon is exactly the hole of a surface mesh. My question is how to triangulate that polygon without introducing any new points in the interior of the polygon. The resulting triangulation must contain only the original points on the boundary of the polygon. Is this possible in CGAL?
Not sure if I'm supposed to ask this question here, but I'm going to give it a try since the MeshLab developers don't seem to respond to issues on GitHub quickly.
When I import a mesh consisting of 100 vertices and 75 quad faces, MeshLab somehow reports it as having 146 faces. What is the problem here?
Please find here the OBJ file and below the screenshot:
Any help/advice would be greatly appreciated,
Thank you!
Tim
Yes, per the MeshLab homepage, Stack Overflow is now the recommended place to ask questions. GitHub should be reserved for reporting actual bugs.
It is important to understand that MeshLab is designed to work with large unstructured triangular meshes; while it can do some things with quad and polygonal meshes, there are limitations and idiosyncrasies.
MeshLab essentially treats all meshes as triangular for most operations; when a polygonal mesh is opened, MeshLab creates "faux edges" that subdivide the mesh into triangles. You can visualize the faux edges by turning "Polygonal Modality" on or off in the edge display pane. If you run "Compute Geometric Measures", it will report different edge measures with and without the faux edges.

This is why MeshLab is reporting a higher number of faces for your model: it is counting the faces after triangulation, i.e. including the faux-edge subdivision. Splitting each quad along a faux edge yields two triangles, so your 75 quads become roughly 2 × 75 = 150 triangles, which is close to the reported 146. Unfortunately I don't know of a way to make MeshLab report the number of faces without these faux edges.
Most filters only work on triangular meshes, and if run on a polygonal mesh the faux edges will be used. A few specific filters (e.g. those in the "Polygonal and Quad Mesh" category) work with quads, and for these the faux edges should be ignored. When exporting, if you check "polygonal", the faux edges should be discarded and the mesh will be saved with the proper polygons; otherwise the mesh will be permanently triangulated per the faux edges.
Hope this helps!
I am developing some computer vision algorithms for vehicle applications.
I am in front of a problem and some help would be appreciated.
Let's say we have a calibrated camera attached to a vehicle, capturing frames of the road ahead of the vehicle:
Initial frame
We apply a first filter to keep only the road markings, returning a binary image:
Filtered image
Once the road lanes are separated, we can approximate the lanes with linear expressions and detect the vanishing point:
Objective
What I am looking to recover is the equation of the normal n from the image, without any prior knowledge of the rotation matrix or the translation vector. I do assume, however, that L1, L2 and L3 lie on the same plane.
In 3D space the problem is quite simple. In the 2D image plane it is more complex, since the camera's projective transformation does not preserve angles, and I am not able to find a way to figure out the equation of the normal.
Do you have any idea about how I could compute the normal?
Thanks,
Pm
No can do, you need a minimum of two independent vanishing points (i.e. vanishing points representing the images of the points at infinity of two different pencils of parallel lines).
If you have them, the answer is trivial: express the image positions of said vanishing points in homogeneous coordinates. Then their cross product is equal (up to scale) to the normal vector of the 3D plane said pencils define, decomposed in camera coordinates.
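A minimal NumPy sketch of that computation; the vanishing-point pixel coordinates and the intrinsic matrix K below are made-up placeholders:

```python
import numpy as np

# Hypothetical inputs: pixel positions of two independent vanishing
# points (homogeneous coordinates) and the camera intrinsic matrix K.
v1 = np.array([640.0, 360.0, 1.0])    # e.g. vanishing point of the lane lines
v2 = np.array([1400.0, 380.0, 1.0])   # vanishing point of a second pencil of lines
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# Map the vanishing points into camera (normalized) coordinates; each
# result is parallel to the 3D direction of its pencil of lines.
d1 = np.linalg.solve(K, v1)
d2 = np.linalg.solve(K, v2)

# Their cross product is the plane normal, up to scale.
n = np.cross(d1, d2)
n /= np.linalg.norm(n)
print(n)
```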
Your information is insufficient, as the others have stated. If your data comes from a video, a common way to get the road's ground plane is to take two or more images, compute the associated homography, then decompose the homography matrix into the surface normal and the relative camera motion. You can do the decomposition with OpenCV's decomposeHomographyMat method, and you can compute the homography by associating four or more point correspondences using OpenCV's findHomography method. If it is hard to determine these correspondences, it is also possible to do it with a combination of point and line correspondences (there is a paper on this), however this is not implemented in OpenCV.
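Here is a rough sketch of that pipeline using OpenCV's Python bindings; the point correspondences and intrinsics are made-up placeholders, and selecting the physically valid solution among the returned candidates still requires extra checks:

```python
import cv2
import numpy as np

# Hypothetical data: the same four ground-plane points tracked across
# two frames (pixel coordinates), plus the camera intrinsic matrix K.
pts_frame1 = np.array([[310, 700], [960, 690], [420, 520], [830, 515]], dtype=np.float64)
pts_frame2 = np.array([[300, 720], [955, 712], [415, 535], [826, 530]], dtype=np.float64)
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# Homography mapping frame-1 points to frame-2 points.
H, _ = cv2.findHomography(pts_frame1, pts_frame2)

# Decompose into up to four (R, t, n) candidates; extra constraints
# (e.g. reconstructed points must lie in front of the camera) are
# needed to pick the physically valid solution.
count, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
for n in normals:
    print(n.ravel())  # candidate plane normals in camera coordinates
```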
You do not have sufficient information in the example you provide.
If you are wondering "which way is up", one thing you might be able to do is detect the horizon line in the image. If K is the calibration matrix, then K^T l will give you the plane normal in 3D relative to your camera. (The general equation for backprojecting an image line l to the plane E through the center of projection is E = P^T l, with a 3x4 projection matrix P.)
A better alternative might be to establish a homography to rectify the ground plane. To do so, however, you need at least four points with known coordinates, no three of them collinear - or four lines, no three of which are parallel.
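A small sketch of the horizon-line backprojection described above, again with a made-up K and two hypothetical image points used to form the horizon line l:

```python
import numpy as np

K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# Hypothetical: the horizon line through two detected image points,
# obtained as the cross product of their homogeneous coordinates.
p1 = np.array([0.0, 355.0, 1.0])
p2 = np.array([1280.0, 370.0, 1.0])
l = np.cross(p1, p2)

# Backproject: n ~ K^T l is the ground-plane normal in camera coordinates.
n = K.T @ l
n /= np.linalg.norm(n)
print(n)
```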
I have 3 sets of point clouds that represent one surface. I want to use these point clouds to construct triangular meshes, then use a mesh to represent the surface. Each set was collected in a different way, so their representations of the surface differ; for example, some sets can represent the surface with smaller "error". My questions are:
(1) What's the best way to evaluate such mesh-to-surface "error"?
(2) Is there a mature/reliable way to convert a point cloud to a triangular mesh? I found some software that does this, but most of it requires extensive manual adjustment.
(3) After the conversion I get three meshes. I want to use a fourth mesh, namely Mesh4, to "fit" the three meshes and get an "average" mesh of the three. Then I can use this Mesh4 as a representation of the underlying surface. What is this "mesh to mesh" fitting called, and how can I do it? Is it a mature technique?
Thank you very much for your time!
Please find below my answers for points 1 and 2:
As a metric for the mesh-to-surface error you can use the Hausdorff distance. For example, you could use libigl to compare two meshes.
To obtain a mesh from a point cloud, have a look at PCL.
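As a rough Python sketch of both points, assuming the pip packages open3d and igl are available (Open3D is used here in place of PCL simply because it has convenient Python bindings; all file names are placeholders, and I'm assuming the libigl Python bindings expose igl.hausdorff as in the C++ API):

```python
import igl
import open3d as o3d

# --- Point cloud -> triangular mesh (Poisson surface reconstruction) ---
pcd = o3d.io.read_point_cloud("scan1.ply")   # placeholder file name
pcd.estimate_normals()                       # Poisson needs oriented normals
mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=8)
o3d.io.write_triangle_mesh("mesh1.ply", mesh)

# --- Mesh-to-mesh error via the Hausdorff distance (libigl) ---
va, fa = igl.read_triangle_mesh("mesh1.ply")
vb, fb = igl.read_triangle_mesh("reference.ply")  # placeholder reference surface
d = igl.hausdorff(va, fa, vb, fb)
print("Hausdorff distance:", d)
```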
I have some waypoints with longitudes and latitudes which build a trajectory, and I want to calculate a parallel trajectory at a specific distance.
I would appreciate any help!
Best regards,
Tara
You can use Dijkstra's algorithm.
More explanation here:
http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
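For reference, a minimal generic sketch of Dijkstra's algorithm on a toy weighted graph (the node names and weights are hypothetical):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` in a graph given as
    {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy example
graph = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.0)],
    "C": [],
}
print(dijkstra(graph, "A"))  # {'A': 0.0, 'B': 2.0, 'C': 3.0}
```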
If you want both the trajectory and the parallel trajectory to be composed of geodesics - you can't.
The path between two points is usually defined by a geodesic. A geodesic is a generalization of the notion of a straight line, or straight line segment, to curved spaces. On the Earth's surface, a geodesic is the shortest route between two points, namely a segment of a great circle.
Great circles divide the sphere into two equal hemispheres, and any two great circles intersect each other. Therefore, there are no parallel geodesics.