When loading a model with Assimp, how can I get the vertices that correspond to my materials? (C++)

So what I want to do is render each material one at a time, which means that each material will have its own vertices. Is there some kind of function within Assimp that, when I process a mesh, will tell me which material the vertices belong to?
Of course I would put the position, the normal and the texCoord in the vertex, and I also need the indices.

There is no such query implemented in Asset-Importer-Lib right now, but you can write this easily yourself (a sketch follows below):
Import your model
Check whether there are any meshes loaded
Loop over all meshes stored in aiScene
Group the meshes by material index
Loop over all vertices of each group of meshes
I wrote a blog-post about that: Batch-Rendering for Assimp-Scene
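A minimal sketch of that grouping, assuming a standard Assimp import; the Vertex struct, the MaterialBatch container and the model file name are illustrative choices, not part of the Assimp API:

#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>
#include <map>
#include <vector>

struct Vertex {                       // illustrative layout, as described above
    float px, py, pz;                 // position
    float nx, ny, nz;                 // normal
    float u, v;                       // texture coordinate
};

struct MaterialBatch {                // all geometry that uses one material
    std::vector<Vertex>   vertices;
    std::vector<unsigned> indices;
};

std::map<unsigned, MaterialBatch> buildBatches(const aiScene* scene) {
    std::map<unsigned, MaterialBatch> batches;   // key: material index
    for (unsigned m = 0; m < scene->mNumMeshes; ++m) {
        const aiMesh* mesh = scene->mMeshes[m];
        MaterialBatch& batch = batches[mesh->mMaterialIndex];
        unsigned base = static_cast<unsigned>(batch.vertices.size());

        // Copy the vertex data of this mesh into the batch for its material.
        for (unsigned i = 0; i < mesh->mNumVertices; ++i) {
            Vertex vtx{};
            vtx.px = mesh->mVertices[i].x;
            vtx.py = mesh->mVertices[i].y;
            vtx.pz = mesh->mVertices[i].z;
            if (mesh->HasNormals()) {
                vtx.nx = mesh->mNormals[i].x;
                vtx.ny = mesh->mNormals[i].y;
                vtx.nz = mesh->mNormals[i].z;
            }
            if (mesh->HasTextureCoords(0)) {
                vtx.u = mesh->mTextureCoords[0][i].x;
                vtx.v = mesh->mTextureCoords[0][i].y;
            }
            batch.vertices.push_back(vtx);
        }
        // Re-base the face indices so they point into the batch's vertex list.
        for (unsigned f = 0; f < mesh->mNumFaces; ++f) {
            const aiFace& face = mesh->mFaces[f];
            for (unsigned j = 0; j < face.mNumIndices; ++j)
                batch.indices.push_back(base + face.mIndices[j]);
        }
    }
    return batches;
}

// usage (file name illustrative):
// Assimp::Importer importer;
// const aiScene* scene = importer.ReadFile("model.obj",
//     aiProcess_Triangulate | aiProcess_GenSmoothNormals);
// if (scene && scene->HasMeshes()) auto batches = buildBatches(scene);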

Related

Corefinement of two meshes with CGAL, obtaining the intersection polylines

How can I identify the polylines of intersection from the meshes produced by the corefinement with CGAL::Polygon_mesh_processing::corefine(), without additionally calling CGAL::Polygon_mesh_processing::surface_intersection(), which would repeat the intersection computation?
Would having the edges in both meshes be sufficient?
If so, you can use the named parameter edge_is_constrained_map to mark the edges that are at the intersection. If you don't want to iterate over the edges of the mesh to collect them, you can write a property map that will collect them (IIRC, put is only called once per halfedge).
If not, it should be possible to get it but not with the public API.
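A hedged sketch of that first route, using a CGAL Surface_mesh and a per-edge bool property map passed through the edge_is_constrained_map named parameter (the OFF file handling and the property-map name are illustrative):

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Polygon_mesh_processing/corefinement.h>
#include <fstream>
#include <iostream>

typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
typedef CGAL::Surface_mesh<Kernel::Point_3>                 Mesh;

namespace PMP = CGAL::Polygon_mesh_processing;

int main(int argc, char* argv[])
{
    if (argc < 3) return 1;

    // Two triangle meshes in OFF format (paths are illustrative).
    Mesh tm1, tm2;
    std::ifstream in1(argv[1]), in2(argv[2]);
    in1 >> tm1;
    in2 >> tm2;

    // One bool per edge; corefine() sets it to true for edges that lie
    // on the intersection polylines of the two meshes.
    Mesh::Property_map<Mesh::Edge_index, bool> ecm1 =
        tm1.add_property_map<Mesh::Edge_index, bool>("e:on_intersection", false).first;
    Mesh::Property_map<Mesh::Edge_index, bool> ecm2 =
        tm2.add_property_map<Mesh::Edge_index, bool>("e:on_intersection", false).first;

    PMP::corefine(tm1, tm2,
                  CGAL::parameters::edge_is_constrained_map(ecm1),
                  CGAL::parameters::edge_is_constrained_map(ecm2));

    // Count the edges of the first mesh that lie on the intersection.
    std::size_t count = 0;
    for (Mesh::Edge_index e : tm1.edges())
        if (ecm1[e])
            ++count;
    std::cout << count << " edges of mesh 1 lie on the intersection polylines\n";
    return 0;
}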

MeshLab - how to transfer UVs from source .objs onto a Poisson reconstruction model

I've been struggling for some time to find a way in MeshLab to include or transfer UVs onto a Poisson model from the source meshes. I will try to explain more of what I'm trying to accomplish below.
My source meshes have UVs along with texture data. I need to build a fused model and include the texture data. It is for facial expression scan data reconstruction for a production pipeline which ultimately builds a facial rig for animation. Our source scan data includes marker information which we use to register and build a fused scan model, which is then used to generate a retopologized mesh for blendshapes.
Previously, we were using David3D. http://www.david-3d.com/en/support/downloads
David 3D used Poisson surface reconstruction to create a fused model. The fused model it created brought along the UVs and optimized the source textures into one UV tile. I'll post a picture of the result below that I'm looking to recreate in MeshLab.
I need to find this solution in MeshLab so I can build tools to help automate this process. David3D version 5 does not have a development kit to program against.
Is it possible in MeshLab to apply the UVs from the regions used from the source meshes onto the Poisson model? Could I use a filter to transfer them? Reproject them?
Or is there another reconstruction method/process within MeshLab that will keep the UVs?
Here is an image of what the resulting UV parametrization from David looks like; the UVs are white on the left half of the image.
(Image: David3D UV Layout Result)
Thank you,
Dan
No, in MeshLab there is no direct way to transfer a UV mapping between two layers.
This is because UV transfer is not, in the general case, a trivial task. It is not simply a matter of assigning to the new surface the "closest" UV of the original mesh: this would not work at UV discontinuities, which are present in the example you linked. Additionally, the two meshes should be almost coincident, otherwise you would also have problems in defining the "closest" UV.
There are a couple of ways to do it, but they require manual work and a re-sampling of the texture:
create a UV mapping of the re-meshed model using whatever tool you may have, then resample the existing texture onto the new parametrization using "Transfer: Vertex Attributes to Texture (1 or 2 meshes)", with the texture color as source
load the original mesh and, using the screenshot function, create "virtual" photos of the model (turn off illumination and do NOT use ortho views), adding them as raster layers until the model surface has been fully covered. Then load the new model, which should be in the same space, and texture-map it using "parametrization + texturing" with those registered images
In MeshLab it is also possible to create a new texture from the original images, if you have a way to import the registered cameras...
TL;DR: UV coords to color channels → Vertex Attribute Transfer → Color channels back to UV coords
I have had very good results kludging it through the color channels, like this (say you are transferring from layer A to layer B):
Make sure A and B are roughly aligned with each other (you can use the ICP alignment filter if needed).
Select layer A, then:
Texture → Convert Per Wedge UV to Per Vertex UV (if you've got wedge coords)
Color Creation → Per Vertex Color Function, and transfer the tex coords to the color channels (assuming UV range 0-1, you'll want to tweak these if your range is larger):
func r = 255.0 * vtu
func g = 255.0 * vtv
func b = 0
Sampling → Vertex Attribute Transfer, and use this to transfer the vertex colors (which now hold texture coordinates) from layer A to layer B.
source mesh = layer A
target mesh = layer B
check Transfer Color
set distance large enough to not miss any spots
Now select layer B, which contains the mapped vertex colors, and do the opposite of what you did for A:
Texture → Per Vertex Texture Function
func u = r / 255.0
func v = g / 255.0
Texture → Convert Per Vertex UV to Per Wedge UV
And that's it.
The results aren't going to be perfect, but in practice I often find them sufficient. In particular:
If the texture is not continuously mapped to layer A (e.g. maybe you've got patches of image mapped to certain areas, etc.), it's very possible for the attribute transfer to B (especially when upsampling) to have some vertices be interpolated across patch boundaries, which will probably lead to visual artifacts along patch boundaries.
UV coords may be quantized by the conversion to a color channel and back. (You could maybe eliminate this by stretching U out over all three color channels, then transferring U, then repeating for V -- never tried it though.) A quick numeric check of this round-trip error is sketched below.
That said, there are a lot of cases it works in.
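For a rough sense of how much the 8-bit round trip costs, here is a tiny standalone check (the sampling step and the 4096-pixel texture figure in the comment are just illustrative numbers):

#include <algorithm>
#include <cmath>
#include <cstdio>

int main()
{
    // Round-trip a texture coordinate through an 8-bit color channel,
    // exactly like the recipe above: u -> round(255*u) -> back to [0,1].
    double worst = 0.0;
    for (double u = 0.0; u <= 1.0; u += 1e-5) {
        double stored = std::round(255.0 * u);  // value kept in the color channel
        double back   = stored / 255.0;         // recovered texture coordinate
        worst = std::max(worst, std::fabs(back - u));
    }
    // Worst case is about 1/510 (~0.002), i.e. roughly 8 texels on a
    // 4096-pixel texture -- sometimes visible, often acceptable.
    std::printf("max round-trip error: %g\n", worst);
    return 0;
}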
I may or may not add images / video to this post another day.
PS: MeshLab is pretty straightforward to build from source; it might be possible to add a UV coordinate option to the Vertex Attribute Transfer filter. But to make it more useful, you'd want to make sure that you didn't interpolate across boundary edges in the mapped UV projection. Definitely a project I'd like to work on some day... in theory. If that ever happens I'll post a link here.

Meshes in 3DS Max do not have the same number of vertices

I have two meshes with the same number of vertices in 3DS Max, but when I export them, they no longer have the same number of vertices.
- I have to apply a "ProOptimizer" modifier to get the same number of vertices in all meshes.
- I export them as ".Obj" and uncheck all parameters except textures, to keep them.
- I import them into Blender and export them as ".FBX".
If I export directly from 3DS Max, the vertex counts are very different between the meshes, and I do not understand why.
How do I get the same number of vertices?
Can anyone help me please? Thank you very much.
Do both meshes have the same smoothing groups applied to the same respective triangles? And are the UV mappings similar?
Both normals (smoothing groups) and the UV coordinate distribution can affect how many times a single vertex needs to be split in order to render correctly or to be exported to a specific format. For example, one vertex can have many normals (one for each neighboring triangle, e.g. on a box), forcing the vertex to be counted several times. Or, on the contrary, a vertex can have a single normal, making all neighboring faces appear "smoothed" around the vertex.
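A toy illustration of that splitting (the exporter logic here is hypothetical, not 3DS Max's actual code): an exported vertex can carry only one normal and one UV, so a hard-edged cube with 8 positions ends up with 24 exported vertices.

#include <cstdio>
#include <set>
#include <tuple>
#include <vector>

// One corner of a face as an exporter sees it: indices into the
// position, normal and UV tables.
struct Corner { int pos, normal, uv; };

int main()
{
    // Hypothetical hard-edged cube: three faces meet at every corner and
    // each face contributes its own normal, so each of the 8 positions
    // shows up with 3 different normals.
    std::vector<Corner> corners;
    for (int pos = 0; pos < 8; ++pos)
        for (int n = 0; n < 3; ++n)
            corners.push_back({pos, n, pos});   // UV shared per position, for simplicity

    // The exporter must emit one vertex per unique (position, normal, uv)
    // combination, because each output vertex holds a single normal and UV.
    std::set<std::tuple<int, int, int>> unique;
    for (const Corner& c : corners)
        unique.insert(std::make_tuple(c.pos, c.normal, c.uv));

    std::printf("positions: 8, exported vertices: %zu\n", unique.size());  // prints 24
    return 0;
}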

Reconstruct surface from 3D triangular meshes

I have a 3D model which consists of 3D triangular meshes. I want to partition the meshes into different groups, where each group represents a surface such as a planar face or a cylindrical surface. This is something like surface recognition/reconstruction.
The input is a set of 3D triangular meshes. The output is the mesh segmentation per surface.
Is there any library that meets my requirements?
If you want to do a lot of mesh processing, then the Point Cloud Library is a good idea, but I'd also suggest CGAL: http://www.cgal.org for more algorithms and plenty of structures aimed at meshes.
Lastly, the problem you describe is most easily solved on your own:
enumerate all vertices
enumerate all polygons
create an array of ints with the size of the number of vertices in your "big" mesh, initialize with 0.
create an array of ints with the size of the number of polygons in your "big" mesh, initialize with 0.
initialize a counter to 0
for each polygon in your mesh, look at its vertices and the value each has in the vertex array.
if the values for all of its vertices are zero, increase the counter and assign the counter to those entries in the vertex array and to the polygon's entry in the polygon array.
if not, relabel all vertices and polygons that carry a higher label to the smallest non-zero label among them.
The relabeling can be done quickly with a look-up table.
This might save you lots of issues interfacing your code to some library you're not really interested in; a compact sketch of the idea follows below.
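A minimal sketch of that labeling, using a small union-find over vertex indices instead of the explicit relabeling passes described above (the flat triangle index buffer is an assumed input layout):

#include <cstdio>
#include <numeric>
#include <vector>

// Tiny union-find: 'parent' starts as the identity, find() follows parents
// with path compression, unite() merges two label sets.
struct UnionFind {
    std::vector<int> parent;
    explicit UnionFind(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};

// Assign a connected-component label to every triangle of an indexed mesh.
// 'indices' holds 3 vertex indices per triangle; 'vertexCount' is the total
// number of vertices in the "big" mesh.
std::vector<int> labelComponents(const std::vector<int>& indices, int vertexCount)
{
    UnionFind uf(vertexCount);
    // Triangles sharing a vertex end up in the same set.
    for (std::size_t t = 0; t + 2 < indices.size(); t += 3) {
        uf.unite(indices[t], indices[t + 1]);
        uf.unite(indices[t], indices[t + 2]);
    }
    // Compact the set roots to labels 0..k-1 and tag each triangle.
    std::vector<int> rootToLabel(vertexCount, -1);
    std::vector<int> triangleLabel(indices.size() / 3);
    int next = 0;
    for (std::size_t t = 0; t < triangleLabel.size(); ++t) {
        int root = uf.find(indices[3 * t]);
        if (rootToLabel[root] == -1) rootToLabel[root] = next++;
        triangleLabel[t] = rootToLabel[root];
    }
    return triangleLabel;
}

int main()
{
    // Two disjoint triangles -> two surface groups (labels 0 and 1).
    std::vector<int> indices = {0, 1, 2,   3, 4, 5};
    std::vector<int> labels = labelComponents(indices, 6);
    for (std::size_t t = 0; t < labels.size(); ++t)
        std::printf("triangle %zu -> group %d\n", t, labels[t]);
    return 0;
}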
You should have a look at the PCL library; it has all these features and much more: http://pointclouds.org/

Retrieve index of nearest surface-points returned from CGAL's surface_neighbor_coordinates_3

I (relatively new to CGAL and not a C++ expert) am trying to extract the indices of the nearest-neighbor 3D points returned from CGAL's surface_neighbor_coordinates_3 (which searches a 2D mesh composed of 3D points to find natural neighbors of a provided query point) in this CGAL example. In other examples (3D interpolation with 3D meshes), I have been able to do this by adding info to the vertex handles in the triangulation data structure. In the linked example, I simply wish to retrieve the indices of the returned coords with respect to where those points reside, index-wise, within the input list of points.
The other call options for surface_neighbor_coordinates_3 seem to suggest this may be possible by passing in an existing triangulation (perhaps with an info-augmented triangulation data structure). However, I'm not sure how to specify the info-augmented Delaunay_triangulation_3 for the case of a 2D mesh consisting of 3D points. I'm experimenting with it (using advancing-front triangulations to 2D-mesh my 3D points), but I would like to know if there is an easier way to use the native capabilities of surface_neighbor_coordinates_3 if one only wants an info field associated with the returned points.
Any help would be greatly appreciated ... this has stumped me for a week.