I am currently trying to do a magnetostatic FEM simulation and I want to mesh my geometry using GMSH. The geometry is shown below:
I create the geometry in FreeCAD and import it into GMSH as a .STEP file. In GMSH I define 3 physical groups, resulting in the following script:
Merge "yoke_simulation.step";
Physical Volume("iron") = {1, 7, 9, 6, 3, 2, 4};
//+
Physical Volume("current") = {5};
//+
Physical Volume("air") = {8};
When I create the Mesh, I get the following result:
The problem is that GMSH seems to be creating a separate mesh for each body without connecting these meshes to one another. If one looks at the region between the two cones, for example, it is evident that the mesh of the two cones is not connected to the mesh of the air:
How can I get GMSH to create a single, connected mesh for all bodies?
It seems that right now the air volume 8 is just the overall bounding box, without the necessary subtraction of the iron and current volumes. Thus, GMSH creates a tetrahedral mesh for the entire bounding box without taking the other bodies into account.
I am not a FreeCAD expert, so I don't really know how to set it up properly there. One option is to define the air volume in FreeCAD and make sure it does not contain your part.
Another approach involves a slight modification at the GMSH level: create the proper air volume before making it physical. You have volumes 1, 7, 9, 6, 3, 2, 4, 5, which you want to subtract from volume 8. That can be achieved with
BooleanDifference(100) = { Volume{8}; Delete; }{ Volume{1,7,9,6,3,2,4,5}; };
Physical Volume("air") = {100};
Note that the previous code will only work if the OpenCASCADE kernel is used inside GMSH.
See the following GMSH sample code for reference:
SetFactory("OpenCASCADE");
Box(1) = {0,0,0, 1,1,1};              // outer box (the air region)
Box(2) = {0.1,0.1,0.1, 0.2,0.2,0.2};  // first inner box
Box(3) = {0.5,0.5,0.5, 0.2,0.2,0.2};  // second inner box
// Subtract the inner boxes from the outer one; the resulting volume gets tag 100
BooleanDifference(100) = { Volume{1}; Delete; }{ Volume{2,3}; };
Physical Volume("air") = {100};
Physical Volume("iron") = {2,3};
Adding the command Coherence; after the Merge line will force GMSH to form a coherent mesh without overlapping volumes, as in the sketch below.
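A minimal sketch of how that could look for the script above (assuming the OpenCASCADE kernel; the volume tags may change after the coherence step, so the physical group definitions may need to be re-checked in the GUI):
SetFactory("OpenCASCADE");
Merge "yoke_simulation.step";
Coherence;  // fragments all volumes so shared interfaces are meshed only once
Physical Volume("iron") = {1, 7, 9, 6, 3, 2, 4};
Physical Volume("current") = {5};
Physical Volume("air") = {8};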
I have created a set of free-and-open-source tools to generate partitioned meshes for multi-material FEM.
They are available here: github.com/NH89/SOFA_mesh_partitioning_tools
They are based on the CGAL geometry library and generate a partitioned tetrahedral mesh from arbitrary intersecting triangulated surface meshes.
They are conceived for use with the SOFA real-time soft matter FEM framework, but could be used for any partitioned FEM application.
Related
I'm trying to automate SoftBody creation in Godot. Everything is working perfectly except for the part where I'm supposed to supply pinned points for the mesh.
var node = SoftBody.new()
node.pinned_points = [] #This is the array where I'm supposed to supply vertex id like: [1, 45, 1142]
Those are the vertices that keep the mesh holding its position. Since I'll use the script for multiple models, I can't use the Godot editor for that purpose. So I thought I'd make an additional vertex group in Blender and access those vertices in Godot, but I don't know the HOW part. Is there any alternative way? I'm open to ideas. Thanks!
What is a clean way to create a procedural mesh in blender with a script? Ideally I'd like to create an empty mesh and start adding vertices and triangles, like this:
mesh = create_mesh()
a = mesh.add_vertex([0, 0, 0])
b = mesh.add_vertex([1, 0, 0])
c = mesh.add_vertex([1, 1, 0])
t = mesh.add_triangle(a, b, c)
Blender's bmesh module is designed for creating and editing mesh data, although you can also turn Python arrays into mesh data directly.
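As a rough sketch of the kind of API you describe (the names "MyMesh" and "MyObject" are just placeholders, and the object-linking line assumes Blender 2.8+):
import bpy
import bmesh

# Build the triangle in a bmesh, then write it into a new mesh datablock
bm = bmesh.new()
a = bm.verts.new((0.0, 0.0, 0.0))
b = bm.verts.new((1.0, 0.0, 0.0))
c = bm.verts.new((1.0, 1.0, 0.0))
bm.faces.new((a, b, c))

mesh = bpy.data.meshes.new("MyMesh")
bm.to_mesh(mesh)
bm.free()

# Link the mesh to an object so it appears in the scene
obj = bpy.data.objects.new("MyObject", mesh)
bpy.context.collection.objects.link(obj)  # Blender 2.8+; older versions use bpy.context.scene.objects.link(obj)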
You can find some examples included with Blender: all add-ons are written in Python, and one helpful add-on is add_mesh_extra_objects, which has several examples of creating different types of objects.
There is a Blender-specific Stack Exchange site where you will get more help with Blender-specific scripting. You should also find some existing examples there.
I've been struggling for some time to find a way in MeshLab to include or transfer UVs onto a Poisson model from source meshes. I will try to explain more of what I'm trying to accomplish below.
My source meshes have UVs along with texture data. I need to build a fused model and include the texture data. This is for facial expression scan data reconstruction in a production pipeline which ultimately builds a facial rig for animation. Our source scan data includes marker information which we use to register and build a fused scan model, which is then used to generate a retopologized mesh for blendshapes.
Previously, we were using David3D. http://www.david-3d.com/en/support/downloads
David3D used Poisson surface reconstruction to create a fused model. The fused model it created brought along the UVs and optimized the source textures into one UV tile. I'll post a picture below of the result I'm looking to recreate in MeshLab.
My reason for finding this solution in MeshLab is to build tools to help automate this process; David3D version 5 does not have a development kit to program against.
Is it possible in MeshLab to apply the UVs from the regions of the source meshes onto the Poisson model? Could I use a filter to transfer them? Reproject them?
Or is there another reconstruction method/process within MeshLab that will keep the UVs?
Here is an image of what the resulting UV parametrization looks like from David. The UVs are white on the left half of the image.
Thank you,
Dan
No, in MeshLab there is no direct way to transfer UV mapping between two layers.
This is because UV transfer is not, in the general case, a trivial task. It is not simply a matter of assigning to the new surface the "closest" UV of the original mesh: this would not work at UV discontinuities, which are present in the example you linked. Additionally, the two meshes should be almost coincident, otherwise you would also have problems defining the "closest" UV.
There are a couple of ways to do it, but they require manual work and a re-sampling of the texture:
create a UV mapping of the re-meshed model using whatever tool you may have, then resample the existing texture onto the new parametrization with "Transfer: Vertex Attributes to Texture (1 or 2 meshes)", using texture color as the source
load the original mesh and, using the screenshot function, create "virtual" photos of the model (turn off illumination and do NOT use ortho views), adding them as raster layers, until the model surface has been fully covered. Load the new model, which should be in the same space, and texture-map it with the "parametrization + texturing" filter using those registered images
In MeshLab it is also possible to create a new texture from the original images, if you have a way to import the registered cameras...
TL;DR: UV coords to color channels → Vertex Attribute Transfer → Color channels back to UV coords
I have had very good results kludging it through the color channels, like this (say you are transferring from layer A to layer B):
Make sure A and B are roughly aligned with each other (you can use the ICP filter if needed).
Select layer A, then:
Texture → Convert Per Wedge UV to Per Vertex UV (if you've got wedge coords)
Color Creation → Per Vertex Color Function, and transfer the tex coords to the color channels (assuming UV range 0-1, you'll want to tweak these if your range is larger):
func r = 255.0 * vtu
func g = 255.0 * vtv
func b = 0
Sampling → Vertex Attribute Transfer, and use this to transfer the vertex colors (which now hold texture coordinates) from layer A to layer B.
source mesh = layer A
target mesh = layer B
check Transfer Color
set distance large enough to not miss any spots
Now select layer B, which contains the mapped vertex colors, and do the opposite of what you did for A:
Texture → Per Vertex Texture Function
func u = r / 255.0
func v = g / 255.0
Texture → Convert Per Vertex UV to Per Wedge UV
And that's it.
The results aren't going to be perfect, but in practice I often find them sufficient. In particular:
If the texture is not continuously mapped to layer A (e.g. maybe you've got patches of image mapped to certain areas, etc.), it's very possible for the attribute transfer to B (especially when upsampling) to interpolate some vertices across patch boundaries, which will probably lead to visual artifacts there.
UV coords may be quantized by conversion to a color channel and back. (You could maybe eliminate this by stretching U out over all three color channels, then transferring U, then repeating for V -- never tried it though.)
That said, there's a lot of cases it works in.
I may or may not add images / video to this post another day.
PS: MeshLab is pretty straightforward to build from source; it might be possible to add a UV coordinate option to the Vertex Attribute Transfer filter. But to make it more useful, you'd want to make sure that you didn't interpolate across boundary edges in the mapped UV projection. Definitely a project I'd like to work on some day... in theory. If that ever happens I'll post a link here.
I am new to VTK and am trying to compute the Dice Similarity Coefficient (DSC), starting from 2 meshes.
DSC can be computed as 2 Vab / (Va + Vb), where Vab is the overlapping volume between mesh A and mesh B.
To read a mesh (i.e. an organ contour exported in .vtk format using 3D Slicer, https://www.slicer.org) I use the following snippet:
#include <string>
#include <vtkSmartPointer.h>
#include <vtkGenericDataObjectReader.h>
#include <vtkPolyData.h>

std::string inputFilename1 = "organ1.vtk";
// Get all data from the file
vtkSmartPointer<vtkGenericDataObjectReader> reader1 = vtkSmartPointer<vtkGenericDataObjectReader>::New();
reader1->SetFileName(inputFilename1.c_str());
reader1->Update();
vtkSmartPointer<vtkPolyData> struct1 = reader1->GetPolyDataOutput();
I can compute the volume of the two meshes using vtkMassProperties (although I observed some differences between the ones computed with VTK and the ones computed with 3D Slicer).
To then intersect the 2 meshes, I am trying to use vtkIntersectionPolyDataFilter. The output of this filter, however, is a set of lines that marks the intersection of the input vtkPolyData objects, NOT a closed surface. I therefore need to somehow generate a mesh from these lines and compute its volume.
Do you know a good, accurate way to generate such a mesh, and how to do it?
Alternatively, I tried to use ITK as well. I found a package that is supposed to handle this problem (http://www.insight-journal.org/browse/publication/762, dated 2010) but I am not able to compile it against the latest version of ITK. It says that ITK must be compiled with the (now deprecated) ITK_USE_REVIEW flag ON. Needless to say, I compiled it with the new Module_ITKReview set to ON and also with backward compatibility but had no luck.
Finally, if you have any other alternative (scriptable) software/library to solve this problem, please let me know. I need to perform these computations automatically.
You could try vtkBooleanOperationPolyDataFilter
http://www.vtk.org/doc/nightly/html/classvtkBooleanOperationPolyDataFilter.html
filter->SetOperationToIntersection();
If your data is smooth and well-behaved, this filter works pretty well. However, sharp structures, e.g. those originating from a binary-image marching cubes algorithm, can be a problem for it. That said, vtkPolyDataToImageStencil doesn't necessarily perform any better in this regard.
My impression is that the boolean operation on polygons is not really ideal for "organs" of 100k polygons and more, but it depends.
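For what it's worth, a rough Python sketch of this boolean approach (assuming struct1 and struct2 are the closed surfaces read as in your snippet; vtkMassProperties expects triangle meshes, so you may need a vtkTriangleFilter pass first):
import vtk

def closed_surface_volume(polydata):
    # Volume enclosed by a closed, triangulated surface
    mass = vtk.vtkMassProperties()
    mass.SetInputData(polydata)
    mass.Update()
    return mass.GetVolume()

# Intersect the two surfaces, then take the volume of the intersection
booleanOp = vtk.vtkBooleanOperationPolyDataFilter()
booleanOp.SetOperationToIntersection()
booleanOp.SetInputData(0, struct1)
booleanOp.SetInputData(1, struct2)
booleanOp.Update()

v_ab = closed_surface_volume(booleanOp.GetOutput())
dsc = 2.0 * v_ab / (closed_surface_volume(struct1) + closed_surface_volume(struct2))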
If you want to compute a Dice Similarity Coefficient, I suggest you first generate volumes (rasterize) from the meshes using vtkPolyDataToImageStencil.
Then it's easy to compute the DSC.
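A hedged sketch of that rasterization route in Python (the voxel spacing here is an arbitrary choice that trades accuracy against memory, and struct1/struct2 are again the closed surfaces from the reader snippet):
import numpy as np
import vtk
from vtk.util import numpy_support

def rasterize(polydata, origin, spacing, dims):
    # Blank image filled with ones covering the region of interest
    image = vtk.vtkImageData()
    image.SetOrigin(origin)
    image.SetSpacing(spacing)
    image.SetDimensions(dims)
    image.AllocateScalars(vtk.VTK_UNSIGNED_CHAR, 1)
    image.GetPointData().GetScalars().FillComponent(0, 1)

    # Turn the closed surface into an image stencil on the same grid
    pol2stenc = vtk.vtkPolyDataToImageStencil()
    pol2stenc.SetInputData(polydata)
    pol2stenc.SetOutputOrigin(origin)
    pol2stenc.SetOutputSpacing(spacing)
    pol2stenc.SetOutputWholeExtent(image.GetExtent())
    pol2stenc.Update()

    # Zero out all voxels outside the surface
    imgstenc = vtk.vtkImageStencil()
    imgstenc.SetInputData(image)
    imgstenc.SetStencilConnection(pol2stenc.GetOutputPort())
    imgstenc.ReverseStencilOff()
    imgstenc.SetBackgroundValue(0)
    imgstenc.Update()
    scalars = imgstenc.GetOutput().GetPointData().GetScalars()
    return numpy_support.vtk_to_numpy(scalars).astype(bool)

# Common grid covering both meshes
spacing = (1.0, 1.0, 1.0)  # voxel size: smaller = more accurate, more memory
b1, b2 = struct1.GetBounds(), struct2.GetBounds()
bounds = [min(b1[0], b2[0]), max(b1[1], b2[1]),
          min(b1[2], b2[2]), max(b1[3], b2[3]),
          min(b1[4], b2[4]), max(b1[5], b2[5])]
origin = (bounds[0], bounds[2], bounds[4])
dims = tuple(int((bounds[2 * i + 1] - bounds[2 * i]) / spacing[i]) + 1 for i in range(3))

a = rasterize(struct1, origin, spacing, dims)
b = rasterize(struct2, origin, spacing, dims)
dsc = 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())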
Good luck :)
Supposing, for example, that I had a 10x10 "cloth" mesh, with each square being two triangles. Now, if I wanted to animate this, I could do the spring calculations on the CPU. Each vertex would have its own "spring" data and would, hopefully, bounce like whatever type of "cloth" it was supposed to represent.
However, that would involve a minimum of about 380? spring calculations per frame. Happily, the per-vertex calculations are "embarrassingly parallel": had I one processor per vertex, each vertex could be run on its own processor. GPUs, therefore, are theoretically an excellent choice for running such calculations.
Except (and this is using DirectX/SlimDX) - I have no idea/am not sure how I would/should:
1) Send all this vertex data to the graphics card (yes, I know how to render stuff and have even written my own per-pixel and texture-blending global lighting effect file; however, it is necessary for each vertex to be able to access the position data of at least three other vertices). I suppose I could stick the relevant vertex positions and number of vertex positions in TextureCoords, but there may be a different, standard solution.
2) Read all the vertex data back afterwards, so I can update the mesh in memory. Otherwise, each update will act on the exact same data with the exact same result, rather like computing 2 + 3 = 5, 2 + 3 = 5, 2 + 3 = 5 when what you want is 2 + 3 = 5 - 2 = 3 + 1.5 = 4.5.
And it may be that I'm looking in the wrong direction to do this.
Thanks.
You could use the approach you have described to pack the data into textures and write special HLSL shaders to compute spring forces and then update vertex positions. That approach is totally valid but can be troublesome when you try to debug problems because you are using texture pixels in an unconventional way (you can draw the texture and maybe write some code to watch the values in a given pixel). It is probably easier in the long run to use something like CUDA, DirectCompute, or OpenCL. CUDA allows you to "bind" the DirectX vertex-buffer for access in CUDA. Then in a CUDA kernel you can use the positions to calculate forces and then write new positions to the vertex-buffer (in parallel on the GPU) before you render the updated positions.
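For illustration only, a minimal CUDA-style kernel for the per-vertex spring update described above could look like the sketch below (all names and the fixed-size neighbour layout are assumptions, not part of any SDK; positions are double-buffered so no thread reads data another thread is writing):
// One thread per vertex: sum spring forces from neighbours, integrate, write to the output buffer.
__global__ void updateCloth(const float3* posIn, float3* posOut, float3* vel,
                            const int* neighbourIdx, int neighboursPerVertex,
                            float restLength, float stiffness, float damping,
                            float dt, int vertexCount)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= vertexCount) return;

    float3 p = posIn[i];
    float3 force = make_float3(0.0f, -9.81f, 0.0f);   // gravity

    for (int n = 0; n < neighboursPerVertex; ++n) {
        int j = neighbourIdx[i * neighboursPerVertex + n];
        if (j < 0) continue;                          // padded (missing) neighbour slot
        float3 d = make_float3(posIn[j].x - p.x, posIn[j].y - p.y, posIn[j].z - p.z);
        float len = sqrtf(d.x * d.x + d.y * d.y + d.z * d.z);
        if (len > 1e-6f) {
            float mag = stiffness * (len - restLength); // Hooke's law
            force.x += mag * d.x / len;
            force.y += mag * d.y / len;
            force.z += mag * d.z / len;
        }
    }

    // Semi-implicit Euler step with simple velocity damping
    float3 v = vel[i];
    v.x = (v.x + force.x * dt) * damping;
    v.y = (v.y + force.y * dt) * damping;
    v.z = (v.z + force.z * dt) * damping;
    vel[i] = v;
    posOut[i] = make_float3(p.x + v.x * dt, p.y + v.y * dt, p.z + v.z * dt);
}
The host side would swap posIn/posOut each frame, and with CUDA/Direct3D interop the output buffer could be the mapped vertex buffer that then gets rendered, as described above.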
There is a cloth demo that uses DirectCompute in the DirectX 10/11 DirectX SDK.