I want to create a structured mesh with boundary layers over a rather complex geometry in Gmsh. For this reason I need to separate my geometry into smaller portions, one of which is pictured below:
Since structured meshes can only be created by extrusion, I would like to know whether it is possible to extrude a rectangle with a dilation, or if there is any other known workaround to generate a similar shape with a structured mesh.
In Gmsh, structured meshes can also be generated with the 'transfinite' commands.
See examples/simple_geo/hex.geo in the gmsh installation directory for an example.
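For illustration, here is a minimal sketch using the Gmsh C++ API that meshes a trapezoid (a rectangle "extruded with a dilation" in 2D) as a structured, all-quad transfinite surface. The coordinates and node counts are placeholders; adapt them to your geometry:

```cpp
#include <gmsh.h>

int main(int argc, char **argv)
{
  gmsh::initialize(argc, argv);
  gmsh::model::add("transfinite_trapezoid");

  // Trapezoid: a "dilated rectangle" (placeholder coordinates).
  int p1 = gmsh::model::geo::addPoint( 0, 0, 0);
  int p2 = gmsh::model::geo::addPoint( 2, 0, 0);
  int p3 = gmsh::model::geo::addPoint( 3, 1, 0);  // top edge is wider
  int p4 = gmsh::model::geo::addPoint(-1, 1, 0);

  int l1 = gmsh::model::geo::addLine(p1, p2);
  int l2 = gmsh::model::geo::addLine(p2, p3);
  int l3 = gmsh::model::geo::addLine(p3, p4);
  int l4 = gmsh::model::geo::addLine(p4, p1);

  int cl = gmsh::model::geo::addCurveLoop({l1, l2, l3, l4});
  int s  = gmsh::model::geo::addPlaneSurface({cl});

  // Prescribe matching node counts on opposite sides, declare the
  // surface transfinite, and recombine the triangles into quads.
  gmsh::model::geo::mesh::setTransfiniteCurve(l1, 20);
  gmsh::model::geo::mesh::setTransfiniteCurve(l3, 20);
  gmsh::model::geo::mesh::setTransfiniteCurve(l2, 10);
  gmsh::model::geo::mesh::setTransfiniteCurve(l4, 10);
  gmsh::model::geo::mesh::setTransfiniteSurface(s);
  gmsh::model::geo::mesh::setRecombine(2, s);

  gmsh::model::geo::synchronize();
  gmsh::model::mesh::generate(2);
  gmsh::write("trapezoid.msh");
  gmsh::finalize();
  return 0;
}
```

The same commands exist in .geo syntax (Transfinite Curve/Surface, Recombine Surface), so the sketch translates directly to a script file.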
I'm trying to use Polygonal Surface Reconstruction (PSR) with a building point cloud to create simplified building models.
I did some first tests with this CGAL code example and got promising results.
As an example, I used this point cloud with vertex normals correctly oriented and got the following result from PSR. Some faces are clearly inverted (dark faces with normals pointing inside the watertight mesh and therefore not visible).
I was wondering if there is a way to fix this face orientation error. I've noticed the orientation methods in the Polygon Mesh Processing package, but I don't really know how to apply them to the resulting PSR surface mesh. Conceptually, making the normals point outwards should not be too complicated, I guess.
Thanks in advance for any help
You can use the function reverse_face_orientations in the Polygon Mesh Processing package.
Note that this package has several functions that can help you to correct/modify your mesh.
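A minimal sketch of how this could look with a Surface_mesh, assuming the PSR output is a closed mesh stored as an OFF file (file names are placeholders):

```cpp
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/boost/graph/helpers.h>
#include <CGAL/Polygon_mesh_processing/orientation.h>
#include <fstream>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3>                      Mesh;
namespace PMP = CGAL::Polygon_mesh_processing;

int main()
{
  Mesh mesh;
  std::ifstream in("psr_output.off");   // placeholder file name
  in >> mesh;

  // For a closed (watertight) mesh: if the faces consistently point
  // inwards, flip them all so the normals point outwards.
  if (CGAL::is_closed(mesh) && !PMP::is_outward_oriented(mesh))
    PMP::reverse_face_orientations(mesh);

  std::ofstream out("psr_oriented.off");
  out << mesh;
  return 0;
}
```

Note this flips the whole mesh at once; if only individual faces are inconsistently oriented, you may instead need to repair the face/vertex arrays before building the mesh, e.g. with orient_polygon_soup from the same package.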
For my thesis I'm using a finite element flow solver to simulate the flow through a flume. The flow solver is capable of solving the flow on a 3D unstructured mesh constructed of tetrahedra. However, the meshes I generate with Gmsh somehow seem to be too unstructured, which leads to unsolvable and very slow runs.
At the moment I've tried to do simulations with both unstructured and structured meshes. Simulations with very coarse unstructured meshes go very well; however, once I make the element size smaller, the flow solver only produces NaN values and doesn't run at all.
For the simulations with structured meshes I've used the transfinite technique to produce a very fine structured mesh. This mesh contains way more elements than the unstructured one and the results are fine. However, in future runs I need to refine the mesh in certain areas, which doesn't seem possible with the transfinite volume technique in 3D.
Does anyone have any idea what could possibly go wrong in this case? And is there a way to improve the quality of a 3D Gmsh mesh? Can the structure of the mesh be improved somehow?
Thanks in advance!
Bart
I think the middle ground between a transfinite structured grid and a completely unstructured one would be one made of 8-node hexahedra. If your 3D case can be built from an extruded 2D case, you could try setting Mesh.Algorithm = 8 to get right triangles (rather than equilateral ones) and then use the Recombine Surface option to turn them into quads.
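A sketch of that workflow with the Gmsh C++ API, assuming a 2D section that can be extruded to 3D (the unit-square section, layer count, and extrusion height are placeholders):

```cpp
#include <gmsh.h>
#include <vector>

int main(int argc, char **argv)
{
  gmsh::initialize(argc, argv);
  gmsh::model::add("extruded_hex");

  // Placeholder 2D section: a unit square with target size 0.1.
  int p1 = gmsh::model::geo::addPoint(0, 0, 0, 0.1);
  int p2 = gmsh::model::geo::addPoint(1, 0, 0, 0.1);
  int p3 = gmsh::model::geo::addPoint(1, 1, 0, 0.1);
  int p4 = gmsh::model::geo::addPoint(0, 1, 0, 0.1);
  int cl = gmsh::model::geo::addCurveLoop(
      {gmsh::model::geo::addLine(p1, p2), gmsh::model::geo::addLine(p2, p3),
       gmsh::model::geo::addLine(p3, p4), gmsh::model::geo::addLine(p4, p1)});
  int s = gmsh::model::geo::addPlaneSurface({cl});

  // Mesh.Algorithm = 8: Frontal-Delaunay for quads (right triangles).
  gmsh::option::setNumber("Mesh.Algorithm", 8);
  // Recombine the section's triangles into quads...
  gmsh::model::geo::mesh::setRecombine(2, s);

  // ...and extrude with recombine = true to get hexahedra
  // (10 layers over a height of 1.0; both placeholders).
  std::vector<std::pair<int, int>> out;
  gmsh::model::geo::extrude({{2, s}}, 0, 0, 1.0, out, {10}, {}, true);

  gmsh::model::geo::synchronize();
  gmsh::model::mesh::generate(3);
  gmsh::write("extruded_hex.msh");
  gmsh::finalize();
  return 0;
}
```

Because the 2D section is unstructured, you keep the freedom to refine it locally, while the extrusion still gives you regular layers through the depth.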
I have a very fine mesh (STL) of some organic shapes (e.g., a bone) and would like to convert it to a few patches of NURBS, which will be much smoother with reasonable simplification.
I can do this manually with the SolidWorks ScanTo3D function, but it is not scriptable. It's a pain when I need to do hundreds of them.
Would there be a way to automate it, e.g., with some open source libraries available? I am perfectly fine with quite some loss in accuracy. I use mainly Python, but I don't mind if it is in other languages and I can work my way around it.
Note that one thing I'd like to avoid is converting an STL of 10,000 triangles to a NURBS with 10,000 patches. I'd like to automatically (programmatically, possibly with some parameter tuning) divide the mesh into a few patches and then fit it. Again, I'm perfectly fine with quite some loss in accuracy.
Converting an arbitrary mesh to NURBS is not easy in general. What a good NURBS surface for a given mesh is depends on the use case. Do you want to manually edit the NURBS surface afterwards? Should symmetric structures or other features be recognized and represented correctly in the NURBS body? Is it important to preserve the volume of the body? Are there boundary lines that should not be simplified because they change the appearance, or angles that must be kept?
If you just want to smooth the mesh or reduce the number of vertices, there are easier ways, like mesh reduction and mesh smoothing.
If you require your output to be NURBS, there are different methods leading to different topologies and approximations, as indicated above. A commonly used method for object simplification is to register the mesh to some handmade prototype and then perform smaller changes to shape the specific instance. If there are, for example, several classes of shapes like bones, hearts, livers, etc., it might be possible to model a prototype NURBS body for each class once, which defines the average appearance and topology of that organ. Each instance of a class can then be converted to NURBS by fitting the prototype to that instance. As the topology is fixed, the optimization problem reduces to finding the control points that approximate the mesh with the smallest error. A disadvantage of this method is that you have to create a prototype for each class. The advantage is that the topology will be nice and easily editable.
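To make that last step concrete: with the topology and parameterization fixed, each sample point gets known basis weights, and the control points come from a linear least-squares solve. A minimal sketch with Eigen, assuming the basis matrix has already been evaluated by whatever NURBS library you use (building it is library-specific):

```cpp
#include <Eigen/Dense>

// Fit NURBS control points to mesh sample points in the least-squares sense.
// A(i, j): value of the j-th basis function at the parameter location (u_i, v_i)
//          assigned to sample i (assumed precomputed by your NURBS library).
// P(i, :): i-th mesh sample point (x, y, z).
// Returns C, one control point per row, minimizing ||A * C - P||^2.
Eigen::MatrixXd fitControlPoints(const Eigen::MatrixXd &A,
                                 const Eigen::MatrixXd &P)
{
  // Rank-revealing QR handles over-determined systems; for ill-conditioned
  // bases, add a smoothing/regularization term to A before solving.
  return A.colPivHouseholderQr().solve(P);
}
```

The hard part in practice is the registration that assigns a (u, v) parameter location to each sample; the solve itself is routine.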
Another approach would be to first smooth the mesh and reduce the polygon count (there are libraries available for mesh reduction) and then simply convert each triangle/quad to a NURBS patch (like the Rhino MeshToNurb command). This method should be easier to implement, but the resulting NURBS body could have an ugly topology.
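For the reduction step, a minimal sketch with libigl's edge-collapse decimation (file names and the target face count are placeholders):

```cpp
#include <igl/read_triangle_mesh.h>
#include <igl/decimate.h>
#include <igl/write_triangle_mesh.h>
#include <Eigen/Core>

int main()
{
  Eigen::MatrixXd V, U;  // input / decimated vertices
  Eigen::MatrixXi F, G;  // input / decimated faces
  Eigen::VectorXi J;     // birth face of each output face

  // Placeholder file; libigl picks the reader by extension.
  igl::read_triangle_mesh("bone.stl", V, F);

  // Collapse edges until at most 1000 faces remain (placeholder target).
  igl::decimate(V, F, 1000, U, G, J);

  igl::write_triangle_mesh("bone_decimated.obj", U, G);
  return 0;
}
```

One caveat: STL files typically store duplicated vertices, so a vertex-welding pass (e.g. igl::remove_duplicate_vertices) may be needed before decimating.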
Whether one of these methods is applicable really depends on what you want to do with your transformed data.
I have three point clouds that represent one surface. I want to use these point clouds to construct a triangular mesh, and then use the mesh to represent the surface. Each point cloud is collected in a different way, so their representations of the surface differ; for example, some sets can represent the surface with a smaller "error". My questions are:
(1) What's the best way to evaluate such mesh-to-surface "error"?
(2) Is there a mature/reliable way to convert a point cloud to a triangular mesh? I found some software that does this, but most of it requires extensive manual adjustment.
(3) After the conversion I get three meshes. I want to use a fourth mesh, namely Mesh4, to "fit" the three meshes and get an "average" mesh of the three. Then I can use this Mesh4 as a representation of the underlying surface. What is this "mesh-to-mesh" fitting called, and how can I do it? Is it a mature technique?
Thank you very much for your time!
Please find below my answers for points 1 and 2:
As a metric for the mesh-to-surface error you can use the Hausdorff distance. For example, you could use libigl to compare two meshes (see the sketch after this list).
To obtain a mesh from a point cloud, have a look at PCL.
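For point 1, a minimal sketch with libigl (file names are placeholders):

```cpp
#include <igl/read_triangle_mesh.h>
#include <igl/hausdorff.h>
#include <Eigen/Core>
#include <iostream>

int main()
{
  Eigen::MatrixXd VA, VB;
  Eigen::MatrixXi FA, FB;
  igl::read_triangle_mesh("meshA.obj", VA, FA);  // placeholder files
  igl::read_triangle_mesh("meshB.obj", VB, FB);

  // Approximate symmetric Hausdorff distance, measured from the vertices
  // of each mesh to the surface of the other; the finer the meshes, the
  // tighter the approximation.
  double d = 0;
  igl::hausdorff(VA, FA, VB, FB, d);
  std::cout << "Hausdorff distance: " << d << "\n";
  return 0;
}
```

If your reference surface is a point cloud rather than a mesh, you can use the same idea with point-to-mesh distances instead.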
I need to generate a tetrahedral (volume) mesh of a thin-walled object. Think of objects like a bottle or a plastic bowl, etc., which are mostly hollow. The volumetric mesh is needed for an FEM simulation. A surface mesh of the outside surface of the object is available from measurement, using e.g. octomap or KinectFusion, so the vertex spacing is relatively regular. The inner surface of the object can be calculated from the outside surface by moving all points inwards, since the wall thickness is known.
So far, I have considered the following approaches:
Create a 3D Delaunay triangulation (which would destroy the existing surface meshes) and then remove all tetrahedra which are not between the two original surfaces. For this check, it might be required to create an implicit surface representation of the two surfaces.
Create a 3D Delaunay triangulation and remove tetrahedra which are "inside" (in the hollow space) or "outside" (of the outer surface) with alpha shapes.
Close the outside and inside meshes and load them into tetgen as the outside hull and as a hole respectively.
These approaches seem a bit inelegant to me, and they still have some pitfalls. I would probably need several libraries/tools for them. For 1 and 2, tetgen or another FEM meshing tool would probably still be required to create well-conditioned tetrahedra. Does anyone have a more straightforward solution? I guess this should also be a common problem in 3D printing.
Concerning tools/libraries, I have looked into PCL, MeshLab and tetgen so far. They all seem to do only part of the job. Ideally, I would like to use only open source libraries and avoid tools which require manual intervention.
One way is to:
create a triangular mesh of the surface points,
extrude (move) that surface inwards by the given wall thickness, which produces a volume (triangular prism) mesh of the wall,
split each prism into three tetrahedra (see the sketch below).
The problem I see is aspect ratio.
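A sketch of the prism-to-tetrahedra split from step 3, as pure index bookkeeping with no library assumed:

```cpp
#include <array>
#include <vector>

// Split one triangular prism into three tetrahedra.
// Bottom triangle: (a, b, c); top triangle: (d, e, f), with d above a,
// e above b, f above c. The fixed "staircase" pattern
//   (a, b, c, d), (b, c, d, e), (c, d, e, f)
// fills the prism exactly.
// Caveat: neighboring prisms must choose matching diagonals on shared
// quad faces for the mesh to be conforming; a common trick is to order
// each prism's vertices by global index before applying the pattern.
std::vector<std::array<int, 4>> splitPrism(int a, int b, int c,
                                           int d, int e, int f)
{
  return {{{a, b, c, d}}, {{b, c, d, e}}, {{c, d, e, f}}};
}
```

For a wall of thickness t over triangles of edge length h, these tetrahedra inherit an aspect ratio of roughly h/t, which is exactly the problem noted above.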
A single layer of tetrahedra will not reproduce shell or bending behavior very well. A single element through the thickness will already require a large mesh. Putting more than one will likely break the bank in order to keep aspect ratios and angles acceptable.
I'd prefer brick or thick shell elements to tetrahedra in this case. I think the modeling will be easier and the behavior will be more faithful to the physics.