Converting an OBJ in a world coordinate system into local coordinates using geopy and pyproj seems inaccurate - Blender

I tried to convert an OBJ in a world coordinate system into an OBJ in a local coordinate system using the geopy and pyproj libraries.
I tried to convert a model from EPSG:5187 to EPSG:4326, but the result seems slightly inaccurate.
(There are two models: a green surface model and a tunnel model. The tunnel should fit inside the green surface.)
The white tunnel model should be located where the gray tunnel is, but it isn't.
I guess this is because, when the mesh points are converted (from the world to the local coordinate system), the mesh's shape ends up different from the original shape, and that causes the problem.
I also thought merging the objects would be the best solution, but I would have to merge many OBJ files totalling more than 3 GB. (This also has to be usable in Blender, but Blender doesn't seem able to handle files that large.)
Is this because geopy's geodesic calculation is inaccurate, or is it inevitable in the process of transforming the coordinate system?
Is there a way to convert the models as they are, without merging them?
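For context, here is a minimal sketch of the kind of per-vertex conversion described above, using pyproj's Transformer (the file names and the OBJ parsing are placeholders, not the actual script):

    # Sketch: reproject OBJ vertex positions from EPSG:5187 (projected, metres)
    # to EPSG:4326 (WGS84, degrees). Placeholder file names; Z is passed through.
    from pyproj import Transformer

    # always_xy=True forces (easting, northing) -> (lon, lat) ordering and avoids
    # the axis-order swap that often shows up as a small but visible offset.
    transformer = Transformer.from_crs("EPSG:5187", "EPSG:4326", always_xy=True)

    with open("tunnel_world.obj") as src, open("tunnel_local.obj", "w") as dst:
        for line in src:
            if line.startswith("v "):
                _, x, y, z = line.split()[:4]
                lon, lat = transformer.transform(float(x), float(y))
                dst.write(f"v {lon} {lat} {z}\n")
            else:
                dst.write(line)

Note that EPSG:4326 coordinates are in degrees rather than metres, so a mesh reprojected this way is no longer metric and will look distorted in a Cartesian viewer such as Blender.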

Related

Blender to Unreal Engine 4: How do I export my building from Blender to Unreal Engine with the right collision?

I have made a house (without a roof yet) with some rooms in Blender 2.9 and exported it to Unreal Engine 4. But in Unreal Engine I can't move around inside it with the standard 3D third-person character. I can only walk on top of it, as if it were a closed cube or something.
What do I have to do to be able to walk around inside it?
UE is automatically generating a convex (i.e. with no holes, caves, dents, openings, etc.) collision mesh that essentially wraps the whole model. Possibly just a cube. There are a couple of things you can do.
Open the mesh in UE and set the collision complexity to 'use complex as simple'. This isn't advisable unless the mesh is very simple as it uses every polygon in the mesh to query collisions against.
or
Create a set of collision meshes - one for each element of the house (walls, floor, etc.) - and bring them in with the model. These must be convex in shape. See here: Static Mesh FBX Import. You must follow the correct naming convention for the FBX import to recognise them as collision meshes.
If your house model has a low enough polygon count that you would end up with about as many polygons in your collection of collision meshes, option 1 saves you the trouble of option 2 (and might even save some memory).
Don't forget everything needs to be triangulated.
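If you go with the second option, here is a small Blender Python sketch for naming the collision meshes so the FBX importer pairs them with the render mesh; the object name "House" and the use of the current selection are assumptions about your scene, while the UCX_ prefix is Unreal's actual convention:

    # Sketch: rename the selected collision meshes to UCX_<RenderMeshName>_NN so the
    # UE FBX importer recognises them as collision for the render mesh "House".
    import bpy

    render_mesh_name = "House"  # assumed name of the visual mesh
    collision_objects = [o for o in bpy.context.selected_objects
                         if o.name != render_mesh_name]

    for i, obj in enumerate(collision_objects, start=1):
        obj.name = f"UCX_{render_mesh_name}_{i:02d}"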

Blender UV Sphere Exported as GLTF

When I export the default UV Sphere as a glTF model, it ends up having 1940 vertices.
When I export the same model as OBJ, it has 482 vertices (the correct count).
Something seems to be wrong with the Blender glTF exporter (version 2.83).
Unlike OBJ, glTF is a runtime format — designed for rendering on a GPU without much processing. Conversion of Blender's vertices to GPU vertices is complicated, and not necessarily 1:1.
If you triangulate the mesh before export and disable most vertex-level data (UVs, Normals, Vertex Colors) in export settings, you have a better chance of having the same vertex count before/after:
With those settings, the UV Sphere will have the same vertex count (482) before and after export. With other settings, or other models, there's just going to be a chance your vertex counts will come out differently. That's not necessarily a bad thing — it avoids making the realtime viewers do this work later — but of course there may be cases where you want to bring the vertex count down. If you have general questions about this topic I would suggest starting a thread somewhere like https://blenderartists.org/.
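As an illustration, here is a Blender Python sketch of that kind of export; the operator keyword names below are from the 2.8x-era glTF add-on and may differ in newer Blender versions, so treat them as approximate:

    # Sketch: triangulate the active object, then export glTF with most per-vertex
    # attributes disabled. Keyword names may vary between exporter versions.
    import bpy

    bpy.ops.object.modifier_add(type='TRIANGULATE')
    bpy.ops.object.modifier_apply(modifier="Triangulate")

    bpy.ops.export_scene.gltf(
        filepath="/tmp/uv_sphere.glb",
        use_selection=True,
        export_normals=False,
        export_texcoords=False,
        export_colors=False,
    )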

Conversion of fine organic surface mesh to a few patches of NURBS

I have a very fine mesh (STL) of some organic shapes (e.g., a bone) and would like to convert it to a few patches of NURBS, which will be much smoother with reasonable simplification.
I can do this manually with Solidworks ScanTo3D function, but it is not scriptable. It's a pain when I need to do hundreds of them.
Would there be a way to automate it, e.g., with some open source libraries available? I am perfectly fine with quite some loss in accuracy. I use mainly Python, but I don't mind if it is in other languages and I can work my way around it.
Note that one thing I'd like to avoid is converting an STL of 10,000 triangles into a NURBS with 10,000 patches. I'd like to automatically (programmatically, possibly with some parameter tuning) divide the mesh into a few patches and then fit them. Again, I'm perfectly fine with quite some loss in accuracy.
Converting an arbitrary mesh to nurbs is not easy in general. What is a good nurbs surface for a given mesh depends on the use case. Do you want to manually edit the nurbs surface afterwards? Should symmetric structures or other features be recognized and represented correctly in the nurbs body? Is it important to keep the volume of the body? Are there boundary lines that should not be simplified as they change the appearance or angles that must be kept?
If you just want to smooth the mesh or reduce the number of vertices, there are easier ways, like mesh reduction and mesh smoothing.
If you require your output to be nurbs, there are different methods leading to different topologies and approximations, as indicated above. A commonly used method for object simplification is to register the mesh to some handmade prototype and then perform some smaller changes to shape the specific instance. If there are, for example, several classes of shapes like bones, hearts, livers, etc., it might be possible to model a prototype nurbs body for each class once, which defines the average appearance and topology of that organ. Each instance of a class can then be converted to a nurbs by fitting the prototype to that instance. As the topology is fixed, the optimization problem is reduced to finding the control points that approximate the mesh with the smallest error. The disadvantage of this method is that you have to create a prototype for each class; the advantage is that the topology will be nice and easily editable.
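As a rough illustration of that fixed-topology fitting step in Python, here is a sketch with the geomdl (NURBS-Python) library. It assumes you can sample the registered mesh on a regular u-v grid of points, which is exactly the part the prototype/registration has to provide; the synthetic grid below only stands in for such a sampling:

    # Sketch: least-squares B-spline surface approximation with geomdl.
    # The synthetic grid stands in for points sampled row by row on a regular
    # (size_u x size_v) grid over the registered mesh.
    import math
    from geomdl import fitting

    size_u, size_v = 20, 20      # resolution of the sampled point grid
    degree_u, degree_v = 3, 3    # cubic surface in both directions

    grid_points = [(u, v, math.sin(0.5 * u) * math.cos(0.5 * v))
                   for u in range(size_u) for v in range(size_v)]

    # Fit a surface with an 8x8 control-point net (fewer control points means a
    # smoother, less accurate approximation).
    surf = fitting.approximate_surface(grid_points, size_u, size_v,
                                       degree_u, degree_v,
                                       ctrlpts_size_u=8, ctrlpts_size_v=8)
    print(len(surf.ctrlpts))     # -> 64 control points for this patch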
Another approach would be to first smooth the mesh and reduce the polygon count (there are libraries available for mesh reduction) and then simply convert each triangle/quad into a nurbs patch (like the Rhino MeshToNurb command). This method should be easier to implement, but the resulting nurbs body could have an ugly topology.
Whether one of these methods is applicable really depends on what you want to do with your transformed data.

Vertex Color to UV Map -> Remesh -> Texture mapping

I have a polygon mesh of a room in high resolution, and I want to extract vertices color information and map them as a UV map, so I can generate a texture atlas of the room.
After that, I want to remesh the model in order to reduce the number of polygons and map the hi-res texture onto the new mesh in lower resolution.
So far I've found this link to do it in Blender, but I would like to do it programmatically. Do you know of any library/code that could help me with my task?
I guess first of all I have to segment the model (a normals criterion could be helpful) and then cut each mesh segment; only then will I be able to parameterize it. As for parameterization, LSCM seems to provide good results for simple models. Once the texture atlas is available, I think the problem becomes a simple texture-mapping task.
My main problem is segmentation and mesh cutting. I'm using the CGAL library for that purpose, but the algorithm is too simple to cut complex shapes. Any hint about a better segmentation/cutting algorithm that performs well for room-sized models?
EDIT:
The mesh consists of a room reconstructed with an RGB-D camera, with 2.5 million vertices and 4.7 million faces. The point is to extract a high-resolution texture, remesh the model to reduce the number of polygons, and then remap the texture onto it. It's not a closed mesh, and there are holes due to the reconstruction, so I'm wondering whether my task can be accomplished at all.
I attach a capture of the mesh.
I would suggest using the following 4-steps procedure:
Step 1: remesh
For this type of mesh that comes from computer vision, you need a remesher that is robust to holes, overlaps, skinny triangles etc... You can use my GEOGRAM software [1]. Use the following command:
vorpalite my_input.obj my_output.obj pre=false post=false pts=30000
where 30000 is the number of desired points (adapt it to the complexity of your input). Note: I am deactivating pre- and post-processing (pre=false post=false), which may otherwise remove too many parts of the mesh for this type of input.
Step 2: segment the remesh
My favourite method is "Variational Shape Approximation" [3]. I like it because it is simple to implement and gives reasonable results in most cases.
Step 3: parameterize
Besides my LSCM method, you may use ABF++, which we developed later [4] and which gives much better results in most cases. You may also try ARAP [5].
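If you need a scriptable, open-source route for this step from Python, the xatlas bindings are one option; note this is a chart-based atlas generator, not the LSCM/ABF++ implementations mentioned above, and the file names are placeholders:

    # Sketch: generate a UV atlas for the simplified mesh with xatlas
    # (pip install xatlas). Placeholder file names.
    import trimesh
    import xatlas

    mesh = trimesh.load_mesh("simplified.obj")
    vmapping, indices, uvs = xatlas.parametrize(mesh.vertices, mesh.faces)

    # vmapping maps each new vertex back to an original vertex; indices/uvs
    # describe the re-indexed, UV-unwrapped mesh.
    xatlas.export("simplified_uv.obj", mesh.vertices[vmapping], indices, uvs)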
Step 4: bake the texture
Once the simplified mesh is parameterized, you need to copy the colors from the original mesh onto the new one. This means determining for each pixel of the texture where it goes in 3D, and finding the nearest point in the original 3D mesh.
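A rough sketch of that nearest-point color lookup, assuming the original mesh carries per-vertex colors and that you have already rasterized the parameterized simplified mesh into one 3D position per texel (that rasterization is not shown; a dummy array stands in for it here):

    # Sketch: bake the original mesh's vertex colors into a texture for the
    # simplified mesh via a nearest-vertex lookup.
    import numpy as np
    import trimesh
    from scipy.spatial import cKDTree

    H = W = 1024
    original = trimesh.load_mesh("room_highres.ply")             # placeholder name
    colors = np.asarray(original.visual.vertex_colors)[:, :3]    # per-vertex RGB

    # Stand-in for the (H*W, 3) array of 3D positions of the texels on the
    # simplified surface, which you get by rasterizing its UV triangles.
    texel_positions = np.repeat(np.asarray(original.vertices)[:1], H * W, axis=0)

    tree = cKDTree(original.vertices)
    _, nearest = tree.query(texel_positions)        # nearest original vertex per texel
    texture = colors[nearest].reshape(H, W, 3).astype(np.uint8)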
Segmentation, parameterization and baking are implemented in my Graphite software [2] (use the old version 2.x, the newer version 3.x does not have all the texturing functionalities).
[1] geogram: http://alice.loria.fr/software/geogram/doc/html/index.html
[2] graphite: http://alice.loria.fr/software/graphite/doc/html/
[3] Variational Shape Approximation (Cohen-Steiner, Alliez, Desbrun, SIGGRAPH 2004): http://www.geometry.caltech.edu/pubs/CAD04.pdf
[4] ABF++: http://alice.loria.fr/index.php/publications.html?redirect=1&Paper=ABF_plus_plus#2004
[5] ARAP: cs.harvard.edu/~sjg/papers/arap.pdf
For reducing the number of polygons, I prefer using mesh decimation. My recommended workflow (input: a high-resolution mesh (mesh0) with vertex colors):
Compute UV coordinates for mesh0.
Generate a texture image (textureImage) from the vertex colors. You now have a textured mesh (mesh0 with UV coordinates plus textureImage).
Apply mesh decimation to mesh0; the decimation should take the UV coordinates into consideration (see the sketch below).
I have an example of this workflow on my site; see the example image: Decimation of texture mesh.
Or you can refer to my site for details.
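A minimal Blender Python sketch of the decimation step; the object name "mesh0" and the ratio are placeholders, and note that Blender's Decimate modifier keeps the existing UV layer but does not explicitly optimize for UV distortion:

    # Sketch: decimate a UV-mapped mesh with Blender's Decimate modifier.
    import bpy

    obj = bpy.data.objects["mesh0"]                  # assumed object name
    bpy.context.view_layer.objects.active = obj

    mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
    mod.ratio = 0.1                                  # keep roughly 10% of the faces
    bpy.ops.object.modifier_apply(modifier=mod.name)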

Tetrahedralization from surface mesh of thin-walled object

I need to generate a tetrahedral (volume) mesh of a thin-walled object. Think of objects like a bottle or a plastic bowl, etc., which are mostly hollow. The volumetric mesh is needed for an FEM simulation. A surface mesh of the outside surface of the object is available from measurement, using e.g. OctoMap or KinectFusion, so the vertex spacing is relatively regular. The inner surface of the object can be calculated from the outside surface by moving all points inwards, since the wall thickness is known.
So far, I have considered the following approaches:
Create a 3D Delaunay triangulation (which would destroy the existing surface meshes) and then remove all tetrahedra which are not between the two original surfaces. For this check, it might be required to create an implicit surface representation of the 2 surfaces.
Create a 3D Delaunay triangulation and remove tetrahedra which are "inside" (in the hollow space) or "outside" (of the outer surface) with Alphashapes.
Close the outside and inside meshes and load them into tetgen as the outside hull and as a hole respectively.
These approaches seem a bit inelegant to me, and they still have some pitfalls. I would probably need several libraries/tools for them. For approaches 1 and 2, tetgen or another FEM meshing tool would probably still be required to create well-conditioned tetrahedra. Does anyone have a more straightforward solution? I guess this should also be a common problem in 3D printing.
Concerning tools/libraries, I have looked into PCL, MeshLab and tetgen so far. They all seem to do only part of the job. Ideally, I would like to use only open-source libraries and avoid tools that require manual intervention.
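For approach 3, here is a rough sketch of what I have in mind with the tetgen Python wrapper (pip install tetgen); it only shows the basic call on a closed outer shell, the file name is a placeholder, and marking the hollow interior as a hole would still need TetGen's hole/region machinery, which is not covered here:

    # Sketch: basic volume meshing of a closed surface with the tetgen wrapper.
    import pyvista as pv
    import tetgen

    shell = pv.read("bottle_outer_closed.ply")       # closed, manifold outer surface
    tet = tetgen.TetGen(shell)
    tet.tetrahedralize(order=1, mindihedral=20, minratio=1.5)  # quality constraints

    grid = tet.grid                                  # pyvista UnstructuredGrid of tets
    print(grid.n_cells, "tetrahedra")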
One way is to:
create a triangular mesh of the surface points,
extrude (move) that surface inward by the given wall thickness, which produces a volume (triangular prism) mesh of the wall,
split each prism into three tetrahedra (see the sketch below).
The problem I see is aspect ratio.
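A rough numpy/trimesh sketch of that idea (wall thickness and file name are assumptions); note it splits each prism independently, so the diagonals on shared quad faces are not guaranteed to be consistent between neighbouring prisms, which a real implementation would have to enforce:

    # Sketch: extrude a closed surface inward along vertex normals and split each
    # triangular prism into three tetrahedra.
    import numpy as np
    import trimesh

    wall = 0.002                                     # assumed wall thickness
    outer = trimesh.load_mesh("bottle_outer.ply")    # placeholder file name

    v_out = np.asarray(outer.vertices)
    v_in = v_out - wall * np.asarray(outer.vertex_normals)   # inner surface
    verts = np.vstack([v_out, v_in])
    n = len(v_out)

    tets = []
    for a, b, c in outer.faces:
        a2, b2, c2 = a + n, b + n, c + n             # matching inner vertices
        # standard 3-tetrahedron decomposition of prism (a, b, c, a2, b2, c2)
        tets += [(a, b, c, a2), (b, c, a2, b2), (c, a2, b2, c2)]

    tets = np.array(tets)
    print(len(verts), "vertices,", len(tets), "tetrahedra")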
A single layer of tetrahedra will not reproduce shell or bending behavior very well. A single element through the thickness will already require a large mesh. Putting more than one will likely break the bank in order to keep aspect ratios and angles acceptable.
I'd prefer brick or thick shell elements to tetrahedra in this case. I think the modeling will be easier and the behavior will be more faithful to the physics.