Blender UV Sphere Exported as GLTF

When I export the default UV Sphere as a glTF model, it ends up having 1940 vertices.
When I export the same model as OBJ, it has 482 vertices (the correct count).
Something is not right with the Blender glTF exporter (version 2.83).

Unlike OBJ, glTF is a runtime format — designed for rendering on a GPU without much processing. Conversion of Blender's vertices to GPU vertices is complicated, and not necessarily 1:1.
If you triangulate the mesh before export and disable most vertex-level data (UVs, normals, vertex colors) in the export settings, you have a better chance of keeping the same vertex count before and after.
With those settings, the UV Sphere will have the same vertex count (482) before and after export. With other settings, or with other models, there is a good chance the vertex counts will come out differently. That's not necessarily a bad thing - it avoids making real-time viewers do this work later - but of course there may be cases where you want to bring the vertex count down. If you have general questions about this topic, I would suggest starting a thread somewhere like https://blenderartists.org/.
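For reference, the same setup can be scripted. This is only a sketch for Blender 2.83's Python API: option names can differ between exporter versions, and "Sphere" is assumed to be the default UV Sphere's object name.

    import bpy

    # Sketch: triangulate the sphere, then export without the vertex-level
    # attributes (UVs, normals, vertex colors) that force vertex splitting.
    obj = bpy.data.objects["Sphere"]               # assumed object name
    bpy.context.view_layer.objects.active = obj
    obj.select_set(True)

    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.quads_convert_to_tris()
    bpy.ops.object.mode_set(mode='OBJECT')

    bpy.ops.export_scene.gltf(
        filepath="sphere.glb",
        export_texcoords=False,
        export_normals=False,
        export_colors=False,
    )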

Related

Blender to Unreal Engine 4: How do I export my building from Blender to Unreal Engine with the right collision?

I have made a house (without a roof yet) with some rooms in Blender 2.9 and exported it to Unreal Engine 4. But in Unreal Engine I can't move around inside it with the standard third-person character. I can only walk on top of it, as if it were a closed cube or something.
What do I have to do to be able to walk around inside it?
UE automatically generates a convex (i.e. with no holes, caves, dents, openings, etc.) collision mesh that essentially wraps the whole model - possibly just a cube. There are a couple of things you can do.
1. Open the mesh in UE and set the collision complexity to 'use complex as simple'. This isn't advisable unless the mesh is very simple, as it uses every polygon in the mesh to query collisions against.
or
2. Create a set of collision meshes - one for each element of the house (walls, floor, etc.) - and bring them in with the model. These must be convex in shape. See here: Static Mesh FBX Import. You must follow the correct naming convention for the FBX import to recognise them as collision meshes (see the naming sketch at the end of this answer).
If your house model has a low enough polygon count that you would end up with about as many polygons in your collection of collision meshes, option 1 saves you the trouble of option 2 (and might even save some memory).
Don't forget everything needs to be triangulated.
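The renaming mentioned in option 2 can be scripted in Blender before the FBX export. This is only a sketch: "House" and the "Collision" name filter are placeholder names for this example, and the UCX_ prefix is the convention the UE4 FBX importer looks for per the docs linked above.

    import bpy

    # Sketch: rename convex collision meshes to "UCX_<render mesh name>_##"
    # so the UE4 FBX importer treats them as collision shapes.
    render_name = "House"
    collision_objects = [o for o in bpy.data.objects if o.name.startswith("Collision")]

    for i, obj in enumerate(collision_objects, start=1):
        obj.name = "UCX_{}_{:02d}".format(render_name, i)

    # Export the render mesh and its collision meshes together in one FBX.
    bpy.ops.export_scene.fbx(filepath="house.fbx", use_selection=False)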

Blender UV Mapping, coherence of textures

I created a connected surface (several deformed and bent planes). Long story short, the UV map came out disconnected.
I did not find any short tutorial on how to get a connected UV map. I am aware of the mathematics of curves, and I don't need absolute control over the look of the texture; it's just that, the way it is right now, the textures look very roughly pixelated.
Is it possible to pull a texture over the surface and have a more connected UV map?
I also think this question is important so that others don't have to buy a whole Udemy course; there has to be a simpler way.

Conversion of fine organic surface mesh to a few patches of NURBS

I have a very fine mesh (STL) of some organic shapes (e.g., a bone) and would like to convert it to a few patches of NURBS, which will be much smoother with reasonable simplification.
I can do this manually with the Solidworks ScanTo3D function, but it is not scriptable. It's a pain when I need to do hundreds of them.
Would there be a way to automate it, e.g., with some open source libraries available? I am perfectly fine with quite some loss in accuracy. I use mainly Python, but I don't mind if it is in other languages and I can work my way around it.
Note that one thing I'd like to avoid is converting an STL of 10,000 triangles to a NURBS with 10,000 patches. I'd like to automatically (programmatically, possibly with some parameter tuning) divide the mesh into a few patches and then fit it. Again, I'm perfectly fine with quite some loss in accuracy.
Converting an arbitrary mesh to NURBS is not easy in general. What makes a good NURBS surface for a given mesh depends on the use case. Do you want to manually edit the NURBS surface afterwards? Should symmetric structures or other features be recognized and represented correctly in the NURBS body? Is it important to preserve the volume of the body? Are there boundary lines that should not be simplified because they change the appearance, or angles that must be kept?
If you just want to smooth the mesh or reduce the number of vertices, there are easier ways, such as mesh decimation and mesh smoothing.
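If that route is enough, here is a minimal sketch with Open3D, one library among several that offer smoothing and decimation; "bone.stl" is a placeholder path and the parameter values are only examples.

    import open3d as o3d

    # Sketch: smooth the scanned surface, then reduce the triangle count.
    mesh = o3d.io.read_triangle_mesh("bone.stl")   # placeholder input path
    mesh.compute_vertex_normals()

    # Taubin smoothing avoids the shrinkage of plain Laplacian smoothing.
    smoothed = mesh.filter_smooth_taubin(number_of_iterations=20)

    # Quadric decimation keeps the overall shape while dropping detail.
    simplified = smoothed.simplify_quadric_decimation(target_number_of_triangles=2000)
    simplified.compute_triangle_normals()
    o3d.io.write_triangle_mesh("bone_simplified.stl", simplified)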
If you require your output to be NURBS, there are different methods leading to different topologies and approximations, as indicated above. A commonly used method for object simplification is to register the mesh to some handmade prototype and then perform some smaller changes to shape the specific instance. If there are, for example, several classes of shapes like bones, hearts, livers etc., it might be possible to model a prototype NURBS body for each class once, which defines the average appearance and topology of that organ. Each instance of a class can then be converted to NURBS by fitting the prototype to that instance. As the topology is fixed, the optimization problem is reduced to finding the control points that approximate the mesh with the smallest error. The disadvantage of this method is that you have to create a prototype for each class. The advantage is that the topology will be nice and easily editable.
Another approach would be to first smooth the mesh and reduce the polygon count (there are libraries available for mesh reduction) and then simply convert each triangle/quad to a NURBS patch (like the Rhino MeshToNurb command). This method should be easier to implement, but the resulting NURBS body could have an ugly topology.
Whether one of these methods is applicable really depends on what you want to do with the transformed data.
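For the fitting step itself (finding control points that approximate a set of mesh samples), here is a minimal sketch with geomdl (NURBS-Python), assuming the samples for one patch have already been organized into a u-v grid; the sample values below are synthetic placeholders.

    from geomdl import fitting

    # Sketch: approximate one B-spline patch from a size_u x size_v grid of
    # sample points (placeholder values; in practice these come from one
    # segmented region of the mesh, ordered row by row).
    size_u, size_v = 6, 6
    points = [[u, v, 0.1 * (u * u - v * v)]
              for u in range(size_u) for v in range(size_v)]

    surf = fitting.approximate_surface(points, size_u, size_v,
                                       degree_u=3, degree_v=3)
    print(len(surf.ctrlpts))  # control points of the fitted patch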

Vertex Color to UV Map -> Remesh -> Texture mapping

I have a polygon mesh of a room in high resolution, and I want to extract the vertex color information and map it to a UV map, so I can generate a texture atlas of the room.
After that, I want to remesh the model in order to reduce the number of polygons and map the high-resolution texture onto the new mesh at lower resolution.
So far I've found this link on how to do it in Blender, but I would like to do it programmatically. Do you know of any library/code that could help me with my task?
I guess first of all I have to segment the model (a normals criterion could be helpful) and then cut each mesh segment, so that I can parameterize it. Regarding parameterization, LSCM seems to provide good results for simple models. Once the texture atlas is available, I think the problem becomes a simple texture mapping task.
My main problem is segmentation and mesh cutting. I'm using the CGAL library for that purpose, but the algorithm is too simple to cut complex shapes. Any hints about a better segmentation/cutting algorithm that performs well for room-sized models?
EDIT:
The mesh is a room reconstructed with an RGB-D camera, with 2.5 million vertices and 4.7 million faces. The point is to extract a high-resolution texture, remesh the model to reduce the number of polygons and then remap the texture onto it. It's not a closed mesh, and there are holes due to the reconstruction, so I'm wondering whether my task can be accomplished at all.
I attach a capture of the mesh.
I would suggest using the following four-step procedure:
Step 1: remesh
For this type of mesh, which comes from computer vision, you need a remesher that is robust to holes, overlaps, skinny triangles, etc. You can use my GEOGRAM software [1]. Use the following command:
vorpalite my_input.obj my_output.obj pre=false post=false pts=30000
where 30000 is the number of desired points (adapt it to the complexity of your input). Note: I am deactivating pre- and post-processing (pre=false post=false), which may remove too many parts of the mesh for this type of input.
Step 2: segment the remesh
My favourite method is "Variational Shape Approximation" [3]. I like it because it is simple to implement and gives reasonable results in most cases.
Step 3: parameterize
Besides my LSCM method, you may use ABF++, which we developed later [4] and which gives much better results in most cases. You may also try ARAP [5].
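If you want an off-the-shelf scriptable LSCM, here is a minimal sketch with the libigl Python bindings (a separate implementation, not the one referred to above); "patch.obj" is a placeholder for one already-cut, disk-like segment.

    import igl
    import numpy as np

    # Sketch: LSCM needs at least two pinned vertices; pin two points on the
    # boundary loop of the (already cut, disk-like) patch.
    v, f = igl.read_triangle_mesh("patch.obj")   # placeholder path

    bnd = igl.boundary_loop(f)
    b = np.array([bnd[0], bnd[bnd.size // 2]])
    bc = np.array([[0.0, 0.0], [1.0, 0.0]])

    _, uv = igl.lscm(v, f, b, bc)                # one 2D coordinate per vertex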
Step 4: bake the texture
Once the simplified mesh is parameterized, you need to copy the colors from the original mesh onto the new one. This means determining for each pixel of the texture where it goes in 3D, and finding the nearest point in the original 3D mesh.
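The nearest-point colour lookup at the core of that step can be sketched like this, assuming the 3D position of each texel on the simplified mesh has already been computed by rasterising its triangles in UV space; all array names are placeholders.

    import numpy as np
    from scipy.spatial import cKDTree

    def bake_colors(texel_positions, orig_vertices, orig_colors):
        # texel_positions: (N, 3) 3D position of each texture pixel
        # orig_vertices:   (M, 3) vertices of the original high-res mesh
        # orig_colors:     (M, 3) per-vertex colours of the original mesh
        tree = cKDTree(orig_vertices)             # spatial index on the original mesh
        _, nearest = tree.query(texel_positions)  # nearest original vertex per texel
        return orig_colors[nearest]               # colour assigned to each texel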
Segmentation, parameterization and baking are implemented in my Graphite software [2] (use the old 2.x version; the newer 3.x version does not have all the texturing functionality).
[1] geogram: http://alice.loria.fr/software/geogram/doc/html/index.html
[2] graphite: http://alice.loria.fr/software/graphite/doc/html/
[3] Variational Shape Approximation (Cohen-Steiner, Alliez, Desbrun, SIGGRAPH 2004): http://www.geometry.caltech.edu/pubs/CAD04.pdf
[4] ABF++: http://alice.loria.fr/index.php/publications.html?redirect=1&Paper=ABF_plus_plus#2004
[5] ARAP: http://cs.harvard.edu/~sjg/papers/arap.pdf
For reducing the number of polygons, I prefer mesh decimation. My recommended workflow (input: a high-resolution mesh (mesh0) with vertex colors):
1. Compute UV coordinates for mesh0.
2. Generate a texture image (textureImage) from the vertex colors. You now have a textured mesh (mesh0 with UV coordinates, plus textureImage).
3. Apply mesh decimation to mesh0; the decimation should take the UV coordinates into consideration (see the sketch at the end of this answer).
I have an example of this workflow on my site; the example image is: Decimation of texture mesh.
Or you can refer to my site for details.
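For step 3, one readily scriptable option is Blender's Decimate modifier, which keeps UVs reasonably well in Collapse mode. A minimal sketch: "Room" is a placeholder object name, the ratio is only an example, and the modifier_apply call may need tweaking between Blender versions.

    import bpy

    # Sketch: decimate a mesh while keeping its UV layout roughly intact.
    obj = bpy.data.objects["Room"]                 # placeholder object name
    bpy.context.view_layer.objects.active = obj

    mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
    mod.ratio = 0.1                                # keep ~10% of the faces
    mod.use_collapse_triangulate = True

    bpy.ops.object.modifier_apply(modifier=mod.name)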

Registration of 3D surface mesh to 3D image volume

I have an accurate surface mesh model of an implant that I'd like to optimally and rigidly align to a computed tomography scan (a scalar volume) containing the exact same object. I've tried detecting edges in the image volume with a Canny filter and doing an iterative closest point alignment between the edges and the vertices of the mesh, but it's not working. I also tried voxelizing the mesh and using image volume alignment methods (Mattes mutual information), which yields very inconsistent results.
Any other suggestions?
Thank you.
Generally, a mesh and a volume are two different data structures. You have to either convert the mesh to a volume or convert the volume to a mesh.
I would recommend doing a segmentation of the volume data first, to segment out the structures you want to register. A Canny filter alone might not be enough to segment the borders clearly. I would recommend the level-set method and active contour models; these two are frequently used in medical image processing. For these two topics, I would recommend Professor Chunming Li's work.
After you segment the volume data, you can reconstruct a mesh model of that volume with marching cubes. The vertices of the two meshes can then be registered with a simple ICP algorithm.
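A minimal sketch of those last two steps with scikit-image and Open3D, assuming the volume has already been segmented into a level-set/binary array and that the implant mesh is expressed in the same units as the volume; all file names and parameter values are placeholders.

    import numpy as np
    import open3d as o3d
    from skimage import measure

    # Sketch: isosurface from the segmented CT volume, then rigid ICP.
    volume = np.load("segmented_ct.npy")           # placeholder segmented volume
    verts, faces, _, _ = measure.marching_cubes(volume, level=0.5)

    target = o3d.geometry.PointCloud()
    target.points = o3d.utility.Vector3dVector(verts)

    implant = o3d.io.read_triangle_mesh("implant.stl")   # placeholder mesh
    source = o3d.geometry.PointCloud()
    source.points = o3d.utility.Vector3dVector(np.asarray(implant.vertices))

    result = o3d.pipelines.registration.registration_icp(
        source, target,
        max_correspondence_distance=5.0,           # in voxel units; tune for your data
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    print(result.transformation)                   # 4x4 rigid transform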
However, this is just a workaround rather than real registration, and the segmentation step always takes a lot of time.