OpenCascade: How to subdivide elongated triangles? - mesh

I am using OpenCascade to import STEP/IGES as meshes in my software. Works nicely.
But I need small triangles, and the ones I get are sometimes very large (in flat areas) or very elongated (e.g. when meshing a cylinder). The best would be to split every triangle edge longer than some absolute value, avoiding T-vertices too.
I wasn't able to google anything about that... So, currently, I pass the mesh to OpenMesh, apply the OpenMesh::Subdivider::Uniform::LongestEdgeT operator, then pass it back to my software. Tedious and costly when I handle several million triangles...
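For reference, the OpenMesh detour currently looks roughly like this (a minimal sketch; the exact subdivider API may differ between OpenMesh versions):

    #include <OpenMesh/Core/Mesh/TriMesh_ArrayKernelT.hh>
    #include <OpenMesh/Tools/Subdivider/Uniform/LongestEdgeT.hh>

    typedef OpenMesh::TriMesh_ArrayKernelT<> Mesh;

    // mesh: filled from the triangulation imported via OpenCascade
    void splitLongEdges(Mesh& mesh, double maxEdgeLength)
    {
      OpenMesh::Subdivider::Uniform::LongestEdgeT<Mesh> subdivider;
      subdivider.attach(mesh);
      subdivider.set_max_edge_length(maxEdgeLength); // split edges longer than this
      subdivider(1);                                 // one subdivision pass
      subdivider.detach();
    }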
Questions:
Is there an equivalent in OpenCascade?
Or a simple code snippet to implement my own loop to do so?
Thanks!

The meshing algorithm BRepMesh_IncrementalMesh coming with Open CASCADE Technology is mainly focused on two usage scenarios:
Visualization in 3D Viewer. Large and elongated triangles do no harm to the presentation, as vertex normals ensure proper smooth shading. Deflection parameters allow managing presentation quality.
Computational algorithms using triangulation as an approximation (to speed up calculations compared to the same algorithm working on exact geometry). In this case, the deflection parameters determine the target precision of the algorithm. Large and elongated triangles should not cause problems here, as the deviation from exact geometry is controlled by the meshing parameters (a small usage sketch follows).
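For completeness, a minimal sketch of how those deflection parameters are passed to BRepMesh_IncrementalMesh (the values are arbitrary and must be tuned to your model units; newer OCCT versions also accept an IMeshTools_Parameters struct):

    #include <BRepMesh_IncrementalMesh.hxx>
    #include <TopoDS_Shape.hxx>

    void meshShape(const TopoDS_Shape& shape)
    {
      const Standard_Real    linDeflection = 0.1;            // max chordal deviation from exact geometry
      const Standard_Boolean isRelative    = Standard_False; // treat deflection as an absolute value
      const Standard_Real    angDeflection = 0.3;            // max angular deviation, in radians
      const Standard_Boolean inParallel    = Standard_True;

      // The resulting Poly_Triangulation is stored back into the shape's faces.
      BRepMesh_IncrementalMesh mesher(shape, linDeflection, isRelative, angDeflection, inParallel);
    }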
There are, however, some categories of algorithms where the shape of the mesh elements is very important. Solvers (numerical simulation) are one such category, where unfortunate mesh elements may cause numerical instability or other issues. What exactly matters / causes issues depends on the specific algorithm; this may include element skewness, element aspect ratio, element size and the element grid. Some solvers work much better on quads than on triangles.
If you take a look at the meshing result of the BRepMesh_IncrementalMesh algorithm, you may notice that not only are there large elongated triangles, but the entire mesh structure is somewhat suboptimal for solver algorithms.
There are several options you may consider:
Triangulation refinement algorithm. Such an algorithm processes an existing triangulation and tries to heal properties like skewness. This is what OpenMesh from your question does, I suppose. Such a postprocessing algorithm might give satisfactory results at good performance, but the final result will depend dramatically on the properties of the original meshing algorithm. For the moment, OCCT doesn't have any refinement tool, although it is possible to write such an algorithm on your own (I cannot give you a small code snippet, because such an algorithm is not as small and trivial as it may look at first glance).
Consider an alternative meshing algorithm. A probably incomplete list:
Express Mesh by Open Cascade (hence, working directly on OCCT shapes). This tool generates a triangulation with a nice grid-like structure (for smooth surfaces), a configurable element size and a quad-dominant generation option. It is a commercial product, though.
Netgen mesher. This open source tool provides bindings to OCCT, and although it is focused on 3D tetrahedral mesh generation, it may also be used for generating a plain triangular surface mesh. I cannot say anything good about this tool - it was rather slow and unstable when I saw it at work many years ago.
MeshGems surface meshing. Another commercial tool providing an interface to OCCT. I have never worked with this product, so I cannot share any opinion on it.
Consider improving BRepMesh_IncrementalMesh. As OCCT is an open source framework, you may consider extending its meshing algorithm with more parameters and contributing the result to the project.

Related

Parameterization of Triangulated Surface Meshes

I’d like to perform a surface parametrization of a triangle mesh (for the purpose of texture mapping).
I tried using some of CGAL’s algorithms, e.g. ARAP, Discrete Conformal Map etc.
The problem is that the surface parameterization methods proposed by CGAL only deal with meshes which are homeomorphic (topologically equivalent) to discs.
Meshes with arbitrary topology can be parameterized, provided that the user specifies a cut graph (a set of edges), which defines the border of a topological disc.
So the problem now becomes – how to cut the graph properly (using CGAL’s interface).
I found a similar question from 3 years ago that went unanswered.
P.S.
If someone can point me to a different library that can do the job, that’ll be just as great.
Thanks.

Conversion of fine organic surface mesh to a few patches of NURBS

I have a very fine mesh (STL) of some organic shapes (e.g., a bone) and would like to convert it to a few patches of NURBS, which will be much smoother with reasonable simplification.
I can do this manually with the Solidworks ScanTo3D function, but it is not scriptable. It's a pain when I need to do hundreds of them.
Would there be a way to automate it, e.g., with some open source libraries available? I am perfectly fine with quite some loss in accuracy. I use mainly Python, but I don't mind if it is in other languages and I can work my way around it.
Note that one thing I'd like to avoid is converting an STL of 10,000 triangles to a NURBS with 10,000 patches. I'd like to automatically (programmatically, possibly with some parameter tuning) divide the mesh into a few patches and then fit them. Again, I'm perfectly fine with quite some loss in accuracy.
Converting an arbitrary mesh to NURBS is not easy in general. What makes a good NURBS surface for a given mesh depends on the use case. Do you want to manually edit the NURBS surface afterwards? Should symmetric structures or other features be recognized and represented correctly in the NURBS body? Is it important to preserve the volume of the body? Are there boundary lines that should not be simplified because they change the appearance, or angles that must be kept?
If you just want to smooth the mesh or reduce the number of vertices, there are easier ways, such as mesh reduction and mesh smoothing.
If you require your output to be NURBS, there are different methods leading to different topologies and approximations, as indicated above. A commonly used method for object simplification is to register the mesh to some handmade prototype and then perform some smaller changes to shape the specific instance. If there are, for example, several classes of shapes like bones, hearts, livers etc., it might be possible to model a prototype NURBS body for each class once, which defines the average appearance and topology of that organ. Each instance of a class can then be converted to NURBS by fitting the prototype to that instance. As the topology is fixed, the optimization problem is reduced to finding the control points that approximate the mesh with the smallest error. The disadvantage of this method is that you have to create a prototype for each class. The advantage is that the topology will be nice and easily editable.
Another approach would be to first smooth the mesh and reduce the polygon count (there are libraries available for mesh reduction) and then simply convert each triangle/quad to a NURBS patch (like the Rhino MeshToNurb command). This method should be easier to implement, but the resulting NURBS body could have an ugly topology.
Whether one of these methods is applicable really depends on what you want to do with the converted data.
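For the patch-fitting step itself, Open CASCADE has a surface-fitting helper that may be useful; a minimal sketch, assuming one mesh region has already been resampled into a regular grid of points (the patch layout and the resampling are the hard part and are not shown):

    #include <GeomAPI_PointsToBSplineSurface.hxx>
    #include <Geom_BSplineSurface.hxx>
    #include <GeomAbs_Shape.hxx>
    #include <TColgp_Array2OfPnt.hxx>

    // points: a regular (u, v) grid resampled from one region of the mesh
    Handle(Geom_BSplineSurface) fitPatch(const TColgp_Array2OfPnt& points)
    {
      // Degree range 3..8, C2 continuity, 3D tolerance: all tunable assumptions.
      GeomAPI_PointsToBSplineSurface fitter(points, 3, 8, GeomAbs_C2, 1.0e-3);
      return fitter.Surface();
    }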

Effort Required to make 3D Game Engine?

For the sake of theory (and general understanding),
I would like a moderately exhaustive list of all the things that must be done in order to create a "modern" 3D game engine (from a coder's perspective).
I seem to have a hard time finding this information anywhere else, so I think that you guys at Stack Overflow will have the knowledge I seek.
By "moderately exhaustive", I mean such things as a general explanation of the design stages of such an engine, such as Binary Space Partitioning, then the actual implementation of such an engine, and the uses of the software (it would be helpful if means of rendering other than BSP could be explained).
I don't want to make a 3D engine, but simply to understand the sheer amount of effort required to make one.
Focusing on 3D rendering alone:
Binary space partitioning, like many elements of 3d rendering, is optional. In this case, it is an optimization, allowing the computer to do less work to render a scene, by cutting out invisible sections.
At its core, rendering is a five-stage process. First, a list of triangles is generated. Next, the triangles are converted from 3-space to 2-space using matrix multiplication. Then the triangles are filled in with pixels and meta-information. Next, the pixels are shaded individually using that meta-information. Finally, the pixels are drawn to the screen.
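To make the second stage concrete, here is a minimal sketch of transforming one vertex by a combined model-view-projection matrix and mapping it to pixel coordinates (Vec4/Mat4 are hypothetical helper types; real engines run this on the GPU):

    #include <cstddef>

    struct Vec4 { float v[4]; };
    struct Mat4 { float m[4][4]; };

    // Multiply the model-view-projection matrix with a vertex position.
    Vec4 mul(const Mat4& mvp, const Vec4& p)
    {
      Vec4 out{};
      for (std::size_t r = 0; r < 4; ++r)
        for (std::size_t c = 0; c < 4; ++c)
          out.v[r] += mvp.m[r][c] * p.v[c];
      return out;
    }

    // Perspective divide plus viewport transform: clip space to pixel coordinates.
    void toScreen(const Vec4& clip, int width, int height, float& px, float& py)
    {
      const float ndcX = clip.v[0] / clip.v[3];    // normalized device coords in [-1, 1]
      const float ndcY = clip.v[1] / clip.v[3];
      px = (ndcX * 0.5f + 0.5f) * width;
      py = (1.0f - (ndcY * 0.5f + 0.5f)) * height; // flip Y: screen origin is top-left
    }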
Most of those steps are partially or wholly done by a graphics card, meaning the programmer's job is to tell the card which step to perform and provide the input data.
This bare bones engine is not even close to a modern engine, however. Modern engines will be filled with optimizations like binary space partitioning, mesh simplification, background loading and texture compression. They will also be filled with special features like shadows, mirrors, mist and particle effects.
Modern engines have to be able to load and interpret textures and meshes, and in some cases, deform and modify both at runtime. The most common example would be interpolating between keyframes.
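For instance, a minimal keyframe-interpolation sketch (a hypothetical Vec3 helper, blending two keyed positions by the normalized time t between them):

    struct Vec3 { float x, y, z; };

    // Linear interpolation between two keyframe positions, t in [0, 1].
    Vec3 lerp(const Vec3& a, const Vec3& b, float t)
    {
      return { a.x + (b.x - a.x) * t,
               a.y + (b.y - a.y) * t,
               a.z + (b.z - a.z) * t };
    }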
Engines may need to interact with game logic modules in order to reuse data for collision detection. Collision detection is what determines whether bullets hit something, and also what makes walls and floors solid.

Computational complexity and shape nesting

I have arbitrary SVG paths which I need to pack as efficiently as possible within a given rectangle (with as little wasted space as possible). After some research I found bin packing algorithms, which seem to deal with boxes and not with curved random shapes (my SVG shapes are quite complex and include Béziers etc.).
AFAIK, there is no deterministic algorithm for actually packing abstract shapes.
I wish to be proven wrong here, which would be ideal (having a mathematical, deterministic method for packing them). In case I am right, however, and there is not, what would be the best approach to this problem?
The subject name is Shape Nesting, Nesting Problem or Nesting Process.
In Shape Nesting there is no single/uniform algorithm or mathematical method for nesting shapes and getting the least space waste possible.
The 1st method is the packing algorithm (it creates an imaginary bounding box for each shape and uses a rectangular 2D algorithm to pack the bounding boxes). This method is fast, but the least efficient in terms of wasted space.
The 2nd method is some kind of incremental rotation. The algorithm rotates the shape in incremental steps and checks if it fits in the space. This is better than the packing method in terms of wasted space, but it is painstakingly slow.
What are some other classroom examples for achieving a solution to this problem?
[Edit1] new answer
As mentioned before, bin packing is NP-complete (hard), so forget about an algebraic solution.
Known approaches are:
generate and test
Either you test every possibility of the problem and remember the best solution, or you incrementally add items (not all at once), one by one, in the same way. This is basically what you are doing now; without a proper heuristic it is unusably slow, but it has the best space efficiency, O(N!) (the first variant is much better but much slower).
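A brute-force sketch of the exhaustive variant (the waste callback stands for whatever placement and scoring routine you already use; only usable for a handful of items):

    #include <algorithm>
    #include <functional>
    #include <vector>

    // Try every ordering of n items and keep the one with the least waste. O(N!).
    std::vector<int> bestOrder(int n, const std::function<double(const std::vector<int>&)>& waste)
    {
      std::vector<int> order(n), best;
      for (int i = 0; i < n; ++i) order[i] = i;
      double bestWaste = 0.0;
      do {
        const double w = waste(order);             // place items in this order, measure waste
        if (best.empty() || w < bestWaste) { bestWaste = w; best = order; }
      } while (std::next_permutation(order.begin(), order.end()));
      return best;
    }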
take advantage of sorting items by size
Something like this is much faster, almost O(N·log(N)) (depending on the sorting algorithm used). Space efficiency strongly depends on the items' size range and count. For rectangular shapes this is the best approach (fastest and usable even for N > 1000). For complex shapes it is not a good way, but look at it anyway; maybe you will get some ideas...
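One common sort-by-size heuristic for rectangles is shelf packing: sort by decreasing height, then fill rows left to right (a rough sketch, not taken from the linked answer):

    #include <algorithm>
    #include <vector>

    struct Rect { double w, h, x = 0, y = 0; };

    // Place rectangles into a bin of the given width using simple "shelves".
    void shelfPack(std::vector<Rect>& items, double binWidth)
    {
      // Sort by decreasing height so each shelf is opened by its tallest item.
      std::sort(items.begin(), items.end(),
                [](const Rect& a, const Rect& b) { return a.h > b.h; });

      double shelfY = 0, shelfHeight = 0, cursorX = 0;
      for (Rect& r : items)
      {
        if (cursorX + r.w > binWidth)   // current shelf is full: open a new one
        {
          shelfY += shelfHeight;
          cursorX = 0;
          shelfHeight = 0;
        }
        r.x = cursorX;
        r.y = shelfY;
        cursorX += r.w;
        shelfHeight = std::max(shelfHeight, r.h);
      }
    }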
use of Neural network
This is an extremely vague approach without any guarantee of a solution, but it possibly has the best space-efficiency/runtime ratio.
I think there could also be some field approach out there
I saw a few for generating graph layouts. All items create fields (both attractive and repulsive) so they move toward a semi-stable state.
At first, all items are at random locations.
When the movement stops, remember the best solution, then shake all items a little or randomize their positions again.
Cycle this a few times.
This approach is much faster than generate-and-test and can come very close to it, but it can get stuck in a local min/max or oscillate if the fields are not chosen well. For example, all items can have a constant attractive force toward each other and a repulsive force that gets stronger only when the items are very close. You have to prevent overlapping of items (either by stronger repulsion or by collision tests). You also have to create some rotation moment, for example with that repulsive force: it differs at each vertex, so it creates a rotation moment (which can automatically align similar sides closer together). You can also allow a semi-stable state with big distances between items and, after finding the best solution, just turn off the repulsion fields so the items stick together. Sometimes this gives better results, sometimes not... A rough code sketch of this field idea follows the links below. Here is a nice example of graph layout computation:
Logic to strategically place items in a container with minimum overlapping connections
Demo from the same QA
And here is a solver for placing sliders in 2D:
How to implement a constraint solver for 2-D geometry?
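A very rough sketch of the field idea (constant attraction, short-range repulsion, iterate until the movement settles). All constants are arbitrary assumptions, shapes are approximated by bounding circles, and the rotation moments and polygon-level collision tests mentioned above are omitted:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Item { double x, y, radius; };

    void relax(std::vector<Item>& items, int iterations)
    {
      const double kAttract = 0.001, kRepulse = 0.05, step = 1.0;
      for (int it = 0; it < iterations; ++it)
        for (std::size_t i = 0; i < items.size(); ++i)
        {
          double fx = 0, fy = 0;
          for (std::size_t j = 0; j < items.size(); ++j)
          {
            if (i == j) continue;
            const double dx = items[j].x - items[i].x;
            const double dy = items[j].y - items[i].y;
            const double d  = std::sqrt(dx * dx + dy * dy) + 1e-9;
            fx += kAttract * dx;                     // constant pull toward other items
            fy += kAttract * dy;
            const double overlap = items[i].radius + items[j].radius - d;
            if (overlap > 0)                         // push apart only when nearly touching
            {
              fx -= kRepulse * overlap * dx / d;
              fy -= kRepulse * overlap * dy / d;
            }
          }
          items[i].x += step * fx;
          items[i].y += step * fy;
        }
    }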
[Edit0] old answer before the question was reformulated
It is not clear to me what you want to achieve:
1. You have an SVG picture and want to separate its parts into rectangular regions (as filled as possible, with the least empty space in them, and without changing any shape in the picture).
2. You have an SVG picture and want to change its shapes for some purpose.
If the second is the case, some additional info is needed.
[solution for 1]
Create a list of points for the whole SVG in global SVG space (all points transformed):
for a line, add 2 points,
for a rectangle, 4 points,
for a circle/ellipse/Bézier/elliptic arc, 8 points.
Find local centres of mass:
use the classical approach,
or speed things up by computing the average density of points along the x and y axes separately, and then just check all combinations of the found local density maxima to see whether they really are sub-cluster centres or not.
Each sub-cluster centre is the centre of one of your regions.
Now find the farthest points that are still part of the cluster (they are close enough to neighbouring points).
Create a rectangular area that covers all points from the sub-cluster.
You can also remove all used points from the list.
Repeat for all valid sub-clusters,
until all points are used.
Another, less precise but simpler, approach is (a rough code sketch follows after this list):
Find the SVG size.
Create a planar map of the SVG with some precision, for example int map[256][256].
The size of the map can be constant or have the same aspect ratio as the SVG.
Clear the map with 0.
For every point of the SVG, set the related map cell to 1 (or increment it, or whatever).
Now just segment the map and you will find your objects.
After segmentation you have the position and size of all objects,
so finding the bounding boxes should be easy.
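A rough sketch of that occupancy-map idea (rasterize sampled SVG points into a fixed 256x256 grid, flood-fill connected cells, report a bounding box per connected component; 4-neighbourhood, all sizes are assumptions):

    #include <algorithm>
    #include <array>
    #include <utility>
    #include <vector>

    struct Box { int minX, minY, maxX, maxY; };

    std::vector<Box> segment(const std::vector<std::pair<double, double>>& points,
                             double svgWidth, double svgHeight)
    {
      constexpr int N = 256;
      std::array<std::array<int, N>, N> map{};             // 0 = empty cell

      for (const auto& p : points)                          // mark occupied cells
      {
        const int x = static_cast<int>(p.first  / svgWidth  * (N - 1));
        const int y = static_cast<int>(p.second / svgHeight * (N - 1));
        map[y][x] = 1;
      }

      std::vector<Box> boxes;
      for (int y = 0; y < N; ++y)
        for (int x = 0; x < N; ++x)
          if (map[y][x] == 1)                               // unvisited occupied cell
          {
            Box b{x, y, x, y};
            std::vector<std::pair<int, int>> stack{{x, y}};
            while (!stack.empty())                          // flood fill one component
            {
              const auto [cx, cy] = stack.back(); stack.pop_back();
              if (cx < 0 || cy < 0 || cx >= N || cy >= N || map[cy][cx] != 1) continue;
              map[cy][cx] = 2;                              // mark visited
              b.minX = std::min(b.minX, cx); b.maxX = std::max(b.maxX, cx);
              b.minY = std::min(b.minY, cy); b.maxY = std::max(b.maxY, cy);
              stack.push_back({cx + 1, cy}); stack.push_back({cx - 1, cy});
              stack.push_back({cx, cy + 1}); stack.push_back({cx, cy - 1});
            }
            boxes.push_back(b);
          }
      return boxes;
    }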
You can start with a variant of the rectangle bin-packing algorithm and add rotation. There is a method called the "guillotine bin packer", and you can download a paper and a library on GitHub.

OpenSceneGraph non-uniform terrain support

I would like to add terrain to my project, which uses OSG.
I've read the osgTerrain documentation. As I understand from its interface, it treats data as a uniform height field (a grid of heights).
I want the terrain to be non-uniform. It would be represented as a triangulation with heights specified at the vertices.
Does osgTerrain support this out of the box? Or should I implement it myself, deriving from Layer? Where can I find extensive docs? Where should I start?
osgTerrain at one point, through the VPB tool, supported irregular triangulated terrain models. There's nothing in OSG itself that prevents you from doing this still. However, I must question your reasons for doing so. Are you looking for performance? The reason OSG uses regular heightfields now is that, with modern GPUs, they're just as fast as the old indexed triangles. Are you planning on doing some modifications to the terrain at runtime that require an irregular mesh?
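If you do go the irregular route, a minimal sketch of building such a terrain directly as OSG geometry (the vertex/index preparation is up to you):

    #include <osg/Geode>
    #include <osg/Geometry>
    #include <vector>

    // vertices: arbitrary (x, y, height) points; indices: the triangle list.
    osg::ref_ptr<osg::Geode> makeTerrain(const std::vector<osg::Vec3>& vertices,
                                         const std::vector<unsigned int>& indices)
    {
      osg::ref_ptr<osg::Vec3Array> va =
          new osg::Vec3Array(vertices.begin(), vertices.end());
      osg::ref_ptr<osg::DrawElementsUInt> tris =
          new osg::DrawElementsUInt(osg::PrimitiveSet::TRIANGLES,
                                    indices.begin(), indices.end());

      osg::ref_ptr<osg::Geometry> geom = new osg::Geometry;
      geom->setVertexArray(va.get());
      geom->addPrimitiveSet(tris.get());

      osg::ref_ptr<osg::Geode> geode = new osg::Geode;
      geode->addDrawable(geom.get());
      return geode;
    }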
Also, you might consider osgEarth. It is sort of the replacement terrain subsystem for OSG. It is much more feature-filled than osgTerrain. It uses quadtrees of regular grids too though.