Why are all vertices removed after using the 'Remove Unreferenced Vertices' filter in MeshLab?

I imported my point cloud into MeshLab with normals and I would like to run Screened Poisson Surface Reconstruction. When I try to do this I get a message like: 'Filter requires correct per vertex normals. E.g. it is necessary that all your input vertices have a proper, not-null normal. If you encounter this error on a triangulated mesh, try to use the Remove Unreferenced Vertices filter...'
When I tried to use this option, all my vertices disappeared. I also checked my normals, and all of them have non-null values.
I don't understand where the problem is. Please help me.

Your input is not a triangulated mesh, so you should not call the "Remove Unreferenced Vertices" filter. That filter removes every vertex that is not used by any triangle, which means "every vertex" if you have no triangles.

Assuming your file is in .xyz format, you should have 6 numbers per vertex:
x coord, y coord, z coord, x normal, y normal, z normal
Most likely, your file only contains the coordinate data.
If you cannot add the normal information to the file, you can estimate it in Meshlab with:
Filters > Normals, Curvatures and Orientation > Compute normals for point sets
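
As a quick sanity check on the file itself, here is a minimal Python sketch (the file name is a placeholder; it assumes a plain whitespace-separated .xyz file):

# A vertex with a normal should carry 6 numbers: x y z nx ny nz.
with open("cloud.xyz") as f:
    for lineno, line in enumerate(f, 1):
        fields = line.split()
        if not fields:
            continue  # skip blank lines
        if len(fields) != 6:
            print(f"line {lineno}: {len(fields)} fields, expected 6")
            continue
        nx, ny, nz = map(float, fields[3:])
        if nx == ny == nz == 0.0:
            print(f"line {lineno}: null (0, 0, 0) normal")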

Related

Matplotlib Polygon gets fill outside of polygon

I have a collection of polygons, created with scipy.spatial.Voronoi (specifically, a subset of the Voronoi regions), which I'd like to plot with matplotlib. However, it seems like there are some constraints on the vertex order of the matplotlib polygons, since some of the polygons end up with the fill on the outside of the polygon rather than the inside. In these cases, reversing the order in which the vertices are specified seems to fix the problem, so it seems to me like a winding issue (even if the docs don't mention anything like this).
However, since some polygons are in the right order and some are in the wrong order, I can't just reverse all the vertex lists. Is there a way I can detect the incorrectly wound lists and fix only those, or alternatively a way to get matplotlib to do the equivalent thing automatically?
ImportanceOfBeingErnest's comment put me on the right track, which in turn led me to "How to determine if a list of polygon points are in clockwise order?". Basically, we find the bottom-rightmost point P in the polygon, the point A before P, and the point B after P. The sign of the cross product AP x PB gives the winding: positive for CCW winding and negative for CW winding.
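A minimal sketch of that test in plain Python (assuming poly is a list of (x, y) tuples without a repeated closing vertex):

def is_ccw(poly):
    # The bottom-rightmost vertex is guaranteed to lie on the convex hull,
    # so the turn direction there decides the overall winding.
    i = min(range(len(poly)), key=lambda k: (poly[k][1], -poly[k][0]))
    a, p, b = poly[i - 1], poly[i], poly[(i + 1) % len(poly)]
    # z component of the cross product AP x PB
    cross = (p[0] - a[0]) * (b[1] - p[1]) - (p[1] - a[1]) * (b[0] - p[0])
    return cross > 0

# Reverse only the incorrectly wound lists before plotting:
# poly = poly if is_ccw(poly) else poly[::-1]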

Fast check if polygon contains point between dataframes

I have two dataframes. One contains a column of Polygons, taken from an image of polygon shapes. Each polygon has a set of coordinates. This dataframe also has a "segment-id" column. I have another dataframe, containing a column of Points, also with coordinates. These Points represent pixels from the same image of Polygon shapes, and therefore have the same coordinate system. I want to give every Point the "segment-id" of the Polygon which contains it. Every Polygon contains at least one Point.
Currently, I achieve this by using a nested for loop:
for i, row in enumerate(point_df.itertuples(), 0):
    point = point_df.at[i, 'geometry']
    for j in range(len(polygon_df)):
        polygon = polygon_df.iat[j, 0]
        if polygon.contains(point):
            point_df.at[i, 'segment_id'] = polygon_df.at[j, 'segment_id']
This is extremely slow. For 100 Points, it takes around 10 seconds. I need a faster way of doing this. I have tried using apply but it is still super slow.
Hope someone can help me out, thanks very much.
For fast "is point inside polygon":
Preparation: in the code that obtains the data describing the polygons, use all the vertices to find the minimum and maximum x and y coordinates, and store those with the polygon's data.
1) Using the point's coords and the polygon's minimum and maximum x and y (pre-determined during preparation), do a "bounding box" test. This is just a fast way to find out whether the point is definitely not inside the polygon (so you can skip the more expensive steps most of the time).
2) Set a "yes/no" flag to "no".
3) For each edge in the polygon, determine whether a horizontal line passing through the point would intersect the edge, and if it does, determine the x-coord of the intersection. If the x-coord of the intersection is less than the point's x-coord, toggle (with NOT) the "yes/no" flag. Ignore "horizontal line passes through a vertex" cases during this step.
4) For each vertex, compare its y-coord with the point's y-coord. If they're the same, look at both edges meeting at that vertex to determine whether those edges continue in the same y direction. If they do (the edges form a 'V' shape or an upside-down 'V' shape), ignore the vertex. Otherwise (the edges form a '<' or '>' shape), if the vertex's x-coord is less than the point's x-coord, toggle the "yes/no" flag.
After all this is done, the "yes/no" flag tells you whether the point was in the polygon.
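A compact Python version of the same even-odd idea (the classic crossing-number test; the half-open comparison (y1 > y) != (y2 > y) absorbs the vertex special cases that steps 3 and 4 handle explicitly):

def contains(poly, x, y, bbox=None):
    # poly: list of (x, y) tuples, no repeated closing vertex
    if bbox is not None:                  # step 1: cheap bounding-box reject
        xmin, ymin, xmax, ymax = bbox
        if not (xmin <= x <= xmax and ymin <= y <= ymax):
            return False
    inside = False                        # step 2: the "yes/no" flag
    x1, y1 = poly[-1]
    for x2, y2 in poly:
        # step 3: does a horizontal ray through the point cross this edge
        # to the left of the point?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross < x:
                inside = not inside
        x1, y1 = x2, y2
    return inside

For many points against many polygons, a spatial index is usually the bigger win: with shapely you can build an STRtree over the polygons, so each point is only tested against candidates whose bounding boxes overlap it.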

How to map the node identities of my resulting surface mesh generated from Poisson_surface_reconstruction_3 into my starting point sets?

Thanks for reading this question. My title is basically what I'm trying to achieve: I generated a Poisson surface mesh using Poisson_surface_reconstruction_3 (CGAL), and I can't figure out how to map the node identities of the resulting surface mesh back onto my starting point set.
The output of my poisson surface generation is produced by the following lines:
CGAL::facets_in_complex_2_to_triangle_mesh(c2t3, output_mesh);
out << output_mesh;
In my output file, there are some x y z coordinates, followed by a set of 3 integers per line; I think they indicate which nodes form a Delaunay triangle. The problem is that the output points do not correspond to my initial point set, since no x y z values match any of my original points. Yet I'm trying to figure out which points form Delaunay triangles in my original point set.
Could someone suggest how I can do this in CGAL?
Many thanks.
The Poisson reconstruction algorithm consists in meshing an implicit function that approximately fits your input points. In practice, this means that your input points will not belong to the set of vertices of the output surface, and won't even lie exactly on the triangles of the output surface. However, they should not be too far from the output surface (except where the sampling is really sparse).
What you can do to locate your input points with respect to the output surface is to use the function closest_point_and_primitive() from the AABB-tree class.
The CGAL AABB tree documentation includes an example of how to build the tree from a mesh.
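
If a rough correspondence is enough (nearest output vertex rather than the exact closest point on a triangle), the idea is easy to prototype outside CGAL. Here is a Python sketch using scipy, with placeholder arrays standing in for the real data:

import numpy as np
from scipy.spatial import cKDTree

# Placeholders: original_points is your input cloud, output_vertices is
# the list of vertex positions parsed from the reconstructed mesh.
rng = np.random.default_rng(0)
original_points = rng.random((1000, 3))
output_vertices = rng.random((5000, 3))

tree = cKDTree(output_vertices)          # spatial index over the output vertices
dist, idx = tree.query(original_points)  # nearest output vertex per input point
# idx[i] is the output vertex closest to original_points[i]; dist[i] says
# how far from the reconstructed surface that input point ended up.

CGAL's closest_point_and_primitive() is more precise, since it returns the exact closest point on the surface together with the triangle containing it.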

MeshLab - how to transfer UVs from source .objs onto a poisson reconstruction model

I've been struggling for some time to find a way in MeshLab to include or transfer UVs onto a poisson model from source meshes. I will try to explain more of what I'm trying to accomplish below.
My source meshes have UVs along with texture data. I need to build a fused model and include the texture data. This is for facial expression scan data reconstruction in a production pipeline which ultimately builds a facial rig for animation. Our source scan data includes marker information which we use to register and build a fused scan model, which in turn is used to generate a retopologized mesh for blendshapes.
Previously, we were using David3D. http://www.david-3d.com/en/support/downloads
David3D used poisson surface reconstruction to create a fused model. The fused model it created brought along the UVs and optimized the source textures into one UV tile. I'll post a picture of the result below that I'm looking to recreate in MeshLab.
My reason for finding this solution in MeshLab is to build tools to help automate this process; David3D version 5 does not have a development kit to program around.
Is it possible in MeshLab to apply the UVs from the regions of the source mesh onto the poisson model? Could I use a filter to transfer them? Reproject them?
Or is there another reconstruction method/process within MeshLab that will keep the UVs?
Here is an image of what the resulting UV parameterization looks like from David; the UVs are white on the left half of the image. (Image: David3D UV Layout Result)
Thank You,
Dan
No, in MeshLab there is no direct way to transfer UV mapping between two layers.
This is because UV transfer is not, in the general case, a trivial task. It is not simply a matter of assigning to the new surface the "closest" UV of the original mesh: this would not work across UV discontinuities, which are present in the example you linked. Additionally, the two meshes should be almost coincident, otherwise you would also have problems defining the "closest" UV.
There are a couple of ways to do it, but they require manual work and a re-sampling of the texture:
create a UV mapping of the re-meshed model using whatever tool you may have, then resample the existing texture onto the new parameterization using "Transfer: Vertex Attributes to Texture (1 or 2 meshes)", with the texture color as source
load the original mesh and, using the screenshot function, create "virtual" photos of the model (turn off illumination and do NOT use ortho views), adding them as raster layers until the model surface has been fully covered. Then load the new model, which should be in the same space, and texture-map it using "parametrization + texturing" with those registered images
In MeshLab it is also possible to create a new texture from the original images, if you have a way to import the registered cameras...
TL;DR: UV coords to color channels → Vertex Attribute Transfer → Color channels back to UV coords
I have had very good results kludging it through the color channels, like this (say you are transferring from layer A to layer B):
Make sure A and B are roughly aligned with each other (you can use the ICP filter if needed).
Select layer A, then:
Texture → Convert Per Wedge UV to Per Vertex UV (if you've got wedge coords)
Color Creation → Per Vertex Color Function, and transfer the tex coords to the color channels (assuming UV range 0-1, you'll want to tweak these if your range is larger):
func r = 255.0 * vtu
func g = 255.0 * vtv
func b = 0
Sampling → Vertex Attribute Transfer, and use this to transfer the vertex colors (which now hold texture coordinates) from layer A to layer B.
source mesh = layer A
target mesh = layer B
check Transfer Color
set distance large enough to not miss any spots
Now select layer B, which contains the mapped vertex colors, and do the opposite that you did for A:
Texture → Per Vertex Texture Function
func u = r / 255.0
func v = g / 255.0
Texture → Convert Per Vertex UV to Per Wedge UV
And that's it.
The results aren't going to be perfect, but in practice I often find them sufficient. In particular:
If the texture is not continuously mapped to layer A (e.g. maybe you've got patches of image mapped to certain areas, etc.), it's very possible for the attribute transfer to B (especially when upsampling) to have some vertices be interpolated across patch boundaries, which will probably lead to visual artifacts along patch boundaries.
UV coords may be quantized by conversion to a color channel and back; a quick estimate of the round-trip error is sketched after this list. (You could maybe eliminate this by stretching U out over all three color channels, then transferring U, then repeating for V -- never tried it though.)
That said, there's a lot of cases it works in.
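To put a number on the quantization, here is a minimal sketch (assuming 8-bit color channels and UVs in the 0-1 range):

import numpy as np

# Round-trip a UV coordinate through an 8-bit color channel, as the
# pipeline above does (assumes UVs normalized to 0-1).
u = np.linspace(0.0, 1.0, 10001)
u_roundtrip = np.round(u * 255.0) / 255.0
print(np.abs(u - u_roundtrip).max())  # ~0.002, about one texel at 512 px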
I may or may not add images / video to this post another day.
PS Meshlab is pretty straightforward to build from source; it might be possible to add a UV coordinate option to the Vertex Attribute Transfer filter. But, to make it more useful, you'd want to make sure that you didn't interpolate across boundary edges in the mapped UV projection. Definitely a project I'd like to work on some day... in theory. If that ever happens I'll post a link here.

Does CGAL 2D Conforming Mesh support fix points?

In my meshing application I will have to specify fix points within a domain. The idea is that the fix points must also be element points after the domain is meshed.
Furthermore, the elements around the fix points should be denser. The general concept is that there should exist a radius r around each fix point such that the mesh size inside r is different from the mesh size outside r. The mesh sizes inside and outside of r should be specifiable.
Are these two things doable with the CGAL 2D mesh algorithm?
Using your wording, all the input points of the initial constrained Delaunay triangulation will be fix points, because the 2D mesh generator only inserts new points into the triangulation: it never removes any point.
As for the density, you can copy, paste, and modify a criteria class such as CGAL::Delaunay_mesh_size_criteria_2<CDT>, so that the local size upper bound is smaller around the fix points.
Now, the difficulty is how to implement that new size policy. Your criteria class could store a const reference to another Delaunay_triangulation_2 that contains only the fix points you want. Then, for each queried triangle, you can call nearest_vertex and check whether the distance from the query point to that vertex is smaller than the radius bound of your circles. For a triangle, you can verify that either for its barycenter only, or for all three of its vertices. Then, according to the result of that query (or those queries), you can modify the size bound in the code of your copy of CGAL::Delaunay_mesh_size_criteria_2<CDT>.
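The size policy itself is easy to prototype outside CGAL. A plain Python illustration of the logic (not CGAL code; the points, radius, and bounds are made up):

import math

FIX_POINTS = [(0.0, 0.0), (3.0, 1.0)]  # hypothetical fix points
RADIUS = 0.5                           # the radius r around each fix point
SIZE_INSIDE = 0.05                     # size upper bound inside r
SIZE_OUTSIDE = 0.5                     # size upper bound elsewhere

def size_bound(x, y):
    # Smaller upper bound near a fix point, as the modified criteria
    # class would enforce; evaluate e.g. at each triangle's barycenter.
    near = any(math.hypot(x - fx, y - fy) < RADIUS for fx, fy in FIX_POINTS)
    return SIZE_INSIDE if near else SIZE_OUTSIDE

print(size_bound(0.1, 0.1), size_bound(2.0, 2.0))  # 0.05 0.5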
Yes, no points will be removed from the triangulation by the mesher.
Note, however, that if you insert points too close to a constraint, this will induce a refinement of the constraint, because the constrained edge is then no longer Gabriel (its diametral circle is not empty).