Remeshing depending on physical quantity values

I am performing Code Saturne simulations of fluid flows. I mesh the computational domain with gmsh. I would like to adapt my mesh by refining regions of the domain based on the results of a previous simulation. Here is the process:
Step 1 : Mesh -> first_mesh.msh
Step 2 : Simulation with first_mesh.msh
Step 3 : Mesh adaptation -> if a quantity extracted from the simulation field is above a given value in a particular area, then I would like to refine this area and produce a new mesh, called adapted_mesh.msh for instance
Step 4 : Simulation with adapted_mesh.msh
Is that possible?
If so, could you give me a simple example to see how it works?
Thanks a lot
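For what it's worth, gmsh can drive exactly this kind of solution-based refinement through its background-mesh mechanism: you write a post-processing view containing target element sizes computed from the simulation field, then use it as a size field when re-meshing. Below is a minimal sketch with the gmsh Python API; domain.geo and target_sizes.pos are placeholder names, and the view index 0 is an assumption.

import gmsh

gmsh.initialize()
gmsh.open("domain.geo")         # original geometry (placeholder name)
gmsh.merge("target_sizes.pos")  # view holding target sizes (placeholder name)

# Use the post-processing view as a background mesh size field
field = gmsh.model.mesh.field.add("PostView")
gmsh.model.mesh.field.setNumber(field, "ViewIndex", 0)
gmsh.model.mesh.field.setAsBackgroundMesh(field)

# Reduce the influence of other size sources so the background field dominates
gmsh.option.setNumber("Mesh.MeshSizeExtendFromBoundary", 0)
gmsh.option.setNumber("Mesh.MeshSizeFromPoints", 0)

gmsh.model.mesh.generate(3)
gmsh.write("adapted_mesh.msh")
gmsh.finalize()

Computing the target sizes from your Code Saturne field (e.g. smaller sizes where your quantity exceeds the threshold) and writing them into the .pos view is up to your own post-processing script.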


For a buckling analysis, must all forces be multiplied by the resulting eigenvalue, or only the compressive load?

I am trying to do a linear buckling analysis (SOL 105) with Nastran on a cylindrical shell structure. My understanding is that the compressive load I apply to the structure must be multiplied by the resulting eigenvalue to get the buckling load. This gives me the results I expect.
However, I now apply a single perturbation load (SPL): a small transverse force acting midway along the cylinder on a single grid point. My understanding is that the magnitude of the SPL stays as it is (unlike the compressive load, which I multiply by the eigenvalue to obtain the buckling load). The results I obtain are not what I expect: according to the theory on this topic, the buckling load should not reduce so much as the SPL increases.
I am wondering if anyone knows what I am doing wrong. I feel like my mistake is probably very easy, but I haven't been able to solve it yet. Here is some more information on my implementation:
Axial compressive force spread over the top grid points of the cylinder.
Both the SPL (the transverse point load) and the axial load are added to the static analysis subcase. The buckling subcase then uses the static subcase for its analysis. This is how I understand it should be done.
boundary conditions:
SPC1 restraining 123 (xyz) directions at bottom grid points.
SPC1 restraining 12 (xy) directions at top grid points.
I'm not a Nastran user, but I've done a lot of buckling analyses with the Cast3M software.
The linear buckling analysis does not need a perturbation loading, only your main axial loading F^0.
To recap:
Solve the linear static problem for the axial loading:
solve for u^0 : [K] u^0 = F^0
Get the linear stresses from Hooke's law: \sigma^0 = D B u^0
Solve the eigenvalue buckling problem:
[K + \lambda K_geo(\sigma^0)] X = 0
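To make the last step concrete, here is a toy numerical sketch (not a Nastran run): rewriting [K + \lambda K_geo] X = 0 as K X = -\lambda K_geo X and solving the generalized eigenproblem with SciPy. The 2x2 matrices are made-up illustrative values standing in for the assembled stiffness matrices.

import numpy as np
from scipy.linalg import eigh

K = np.array([[4.0, -1.0],
              [-1.0, 3.0]])      # linear stiffness (illustrative values)
Kgeo = np.array([[-1.0, 0.2],
                 [0.2, -0.5]])   # geometric stiffness under F^0 (illustrative)

# Solve K X = -lambda * Kgeo X; eigh returns real eigenvalues in ascending order
lam = eigh(K, -Kgeo, eigvals_only=True)
buckling_factor = lam[lam > 0].min()     # smallest positive lambda
print(buckling_factor)                   # critical load = buckling_factor * F^0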
Then, if you want to perform a non-linear (large-displacement) post-buckling analysis, it is recommended to introduce a small perturbation which "excites" the buckling mode.
If you introduce the perturbation loading before the linear buckling analysis, maybe Nastran is adding it to F^0, and it is then logical that the buckling result changes.
Hope this can help you.
There is a way to scale some loads and hold others constant. Create two static subcases with two different sets of loads:
Constant loads (that are not scaled, like a preload or internal pressure)
Those that will be scaled by the eigenvalue
Order does not matter
Use the Nastran STATSUB case control command to define which is which. It looks like this:
SUBCASE 100
  LOAD = 1 $ Static pre-load
SUBCASE 200
  LOAD = 2 $ Varying buckling load
$ -------------
SUBCASE 1000
  STATSUB(PRELOAD) = 100
  STATSUB(BUCKLING) = 200
  METHOD = 10
The eigensolution is then modified to include the influence of both the constant and the scaled loads.

How to refine the GraphCut cmex code based on specific energy functions?

I downloaded the following graph-cut code:
https://github.com/shaibagon/GCMex
I compiled the mex files and ran it on the predefined image in the code (an RGB image).
I want to improve the image segmentation results.
I have a probability map of the image, whose dimensions are (width, height, 5): five probability maps over the image are stacked together, each relating to one of the classes.
My problem is knowing which parts of the code should be changed according to the probability image.
I want to define the Data and Smoothing terms based on my application.
My questions are:
1) Has someone adapted the code to a different energy function? (I want to change the unary and pairwise formulations.)
2) I have a stack of images forming a 3D volume. I want to define a 6-neighborhood system: 4 neighbors in the current slice and the other two in the two adjacent slices. In which function and which part of the code can I make these refinements?
Thanks
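For orientation: GCMex represents the data term as a labels-by-pixels cost matrix and the smoothness structure as a sparse pixels-by-pixels adjacency matrix, so those are the two inputs to rebuild for a custom energy. Here is a conceptual Python sketch (GCMex itself is driven from MATLAB; the shapes and variable names below are illustrative assumptions) of a negative-log-probability data term and a 6-connected neighborhood over a volume:

import numpy as np
from scipy.sparse import coo_matrix

D, H, W, C = 4, 8, 8, 5
rng = np.random.default_rng(0)
prob = rng.dirichlet(np.ones(C), size=(D, H, W))   # dummy per-voxel class probabilities

# Data (unary) term: negative log-likelihood, one row per class,
# one column per voxel -- the usual convention for graph-cut solvers.
unary = -np.log(prob.reshape(-1, C).T + 1e-12)     # shape (C, D*H*W)

# 6-neighborhood: +/-1 in x and y within a slice, +/-1 in z across slices.
idx = np.arange(D * H * W).reshape(D, H, W)
rows, cols = [], []
for dz, dy, dx in [(0, 0, 1), (0, 1, 0), (1, 0, 0)]:  # each undirected pair once
    a = idx[: D - dz, : H - dy, : W - dx].ravel()
    b = idx[dz:, dy:, dx:].ravel()
    rows.append(a)
    cols.append(b)
rows = np.concatenate(rows)
cols = np.concatenate(cols)
weights = np.ones_like(rows, dtype=float)          # smoothing weight per edge
pairwise = coo_matrix((weights, (rows, cols)), shape=(idx.size, idx.size))
print(unary.shape, pairwise.nnz)

The same construction carries over to the MATLAB wrapper: the unary costs replace the example's color-based data term, and the sparse matrix defines which voxel pairs the pairwise (smoothing) term connects.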

Is multiple regression the best approach for optimization?

I am being asked to look at a scenario where a company has many projects they wish to complete, but, as with any company, budget comes into play. There is a Y value (a predefined score) with multiple X inputs. There are also three main constraints: capital cost, expense cost, and time to completion in months.
The ask is whether an algorithmic approach could be used to optimize which projects should be done for the year, given the three constraints. The approach should also give different results if the constraint values change. The suggested method is multiple regression, though I have looked into different approaches in detail. I would like to ask the wider community: has anyone dealt with a similar problem, and what approaches have you used?
The first thing we should understand is that a conclusion about something is not based on a single argument.
This comes from communication theory: every human builds a frame of knowledge (an understood conclusion), where the frame is constructed from many pieces of knowledge/information.
The consequence is that we cannot use simple linear regression to create an ML/DL system.
We should use at least two different variables to make a sub-conclusion. If we insist on using a single variable with linear regression (y = mx + c), it is like pushing the computer to predict something with low accuracy. Whatever optimization method you pick, the accuracy stays low, because linear regression used on real-life data amounts to predicting a 'habit' from the data, not calculating the real condition.
That means we should use multiple linear regression (y = m1*x1 + m2*x2 + ... + c) so that the computer can form a conclusion / build a regression model. But it is not quite that simple: because the computer tries to draw a conclusion from data with multiple characteristics/variances, you must classify the data and the conclusions.
For example, try to make the computer understand Pythagoras.
We know the Pythagorean formula is c = (a^2 + b^2)^(1/2), and we want the computer to predict the hypotenuse c from two input values (a and b). To do that, we should build a multiple linear regression model of the Pythagorean relation.
Step 1: of course, we should build a multi-column data set for Pythagoras.
Here is an example:
a b c
3 4 5
8 6 10
3 14 etc. (try to put in 10 to 20 rows of data)
Then try to fit a multiple regression formula to predict c from the a and b values.
You will find that the regression is quite accurate (above 98%) for some values and not so accurate (under 90%) for others; for example, a = 3 with b = 14 or b = 15 gives a low-accuracy result (under 90%).
So you must optimize. But how?
I know many optimization methods, but I found a manual one: if I exclude the data that give low-accuracy results, put them in a different group, and then recalculate a regression on the excluded group, I get a significantly better result. Repeat this until you reach the accuracy target you want.
Each data group that gets its own regression becomes a new class.
This means I end up with several multiple regressions, one per group/class of input data, and the accuracy is very high: 99% to 99.99%.
With several classes, each regression functions as the 'label' of its class. This is what happens in the background of automated computation: with many libraries, the user appears to attach a 'string' object as the label, but in truth that string is bound to a regression constructed as the label.
With some conditional parameters you can get a good ML model with a minimal amount of training data.
Try it in Excel / LibreOffice before going any further.
Try to follow the tutorial from this video
and implement it on simple data that is easy to construct in Excel, like Pythagoras.
So the answer is yes: multiple regression is the best approach for optimization.
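As a quick way to reproduce the Pythagoras experiment described above, here is a small numpy sketch (the data rows are illustrative) that fits c ~ m1*a + m2*b + intercept and prints the per-row accuracy:

import numpy as np

ab = np.array([[3, 4], [8, 6], [5, 12], [9, 12], [6, 8],
               [7, 24], [20, 21], [3, 14], [10, 10], [2, 3]], dtype=float)
c = np.hypot(ab[:, 0], ab[:, 1])                 # true hypotenuse

X = np.column_stack([ab, np.ones(len(ab))])      # add intercept column
coef, *_ = np.linalg.lstsq(X, c, rcond=None)     # least-squares fit
pred = X @ coef

accuracy = 100 * (1 - np.abs(pred - c) / c)      # per-row accuracy in %
for (a, b), t, p, acc in zip(ab, c, pred, accuracy):
    print(f"a={a:4.1f} b={b:4.1f} true={t:6.2f} pred={p:6.2f} acc={acc:5.1f}%")
# Rows where a and b are very unequal (e.g. a=3, b=14) tend to fit worst,
# which is the motivation given above for splitting them into their own
# group/class and refitting.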

Cutting an open Polyhedron3 with another open Polyhedron3

Given 2 open Polyhedron3 made of triangles in CGAL, I want to cut the first one with the second one. That is, all intersecting triangle facets from poly2 should cut facets from poly1 and create new edges (and faces) in poly1 following the path of intersection. In the end, I need the list of edges/half-edges which are part of the intersection path.
I'm using a typedef of CGAL::Simple_cartesian as my Kernel.
While this looks like a boolean operation, it's not, because there's no 'inside' or 'outside' for open meshes. The way I tried to implement it is:
build an AABB tree for mesh 1 (the one to be cut)
find the faces of mesh 1 cut by the first triangle of mesh 2
compute the intersection information using CGAL: this returns the intersection description, but with some problems:
CGAL will sometimes return 'wrong' intersections (for example, segments of zero length)
the first mesh is not cut by the operation. Given the intersection description, I have to cut the triangles myself (unless there's a function I've overlooked in CGAL to cut a face/triangle given an intersection). This is not a trivial problem, as there are lots of corner cases
repeat with the next face of poly2
My algorithm sort of works, but I sometimes have problems: the intersection path is not always closed, there are small numerical accuracy issues, and it's not very fast.
So here's (at last) my question: what would be the recommended way to implement such an operation in a robust way? Which kernel would you recommend?
Following sloriot's comment, I've spent some time playing with the code of the CGAL Polyhedron demo. Indeed, it contains a corefinement plugin which does what I want. I've not finished integrating all the changes in my own code base, but I believe I can mark this question as answered.
All thanks to SLoriot for his suggestion!
Pascal

Kinect normalize depth

I have some Kinect data of somebody standing (reasonably) still and performing sets of punches. It is given to me in the format of an x, y, z co-ordinate for each joint, of which there are 20, so I have 60 data points per frame.
I'm trying to perform a classification task on the punches, but I'm having some problems normalising my data. As you can see from the graph, there are sections with much higher 'amplitude' than the others; my belief is that this is due to how close the person was to the Kinect sensor when the readings were taken. (The graph is actually the first principal coefficient obtained by PCA for each frame; multiple sequences of the same punch are strung together in this graph.)
Looking back at the data files, it looks like those that are 'out' have a z co-ordinate (depth from sensor) of ~2.7, whereas the others tend to hover around 3.3-3.6.
How can I perform a normalization with the depth values to make the sequences closer to each other? I've already tried differentiation to get the velocity; although it helps to normalise, the output ends up too similar and makes it very hard to classify.
Edit: I should mention I am already using a normalization method of subtracting the hip position from each joint in an attempt to make the co-ordinates relative.
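For reference, the hip-relative normalization mentioned in the edit can be written in a few lines; a sketch assuming the frames are stored as an (n_frames, 20, 3) array and that the hip joint's index is known (the index 0 here is a placeholder):

import numpy as np

def make_hip_relative(frames, hip_index=0):
    # frames: (n_frames, 20, 3) array of joint positions.
    # Subtract the hip joint's (x, y, z) from every joint in each frame.
    frames = np.asarray(frames, dtype=float)
    return frames - frames[:, hip_index:hip_index + 1, :]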
The Kinect can output some strange values when the tracked person is standing near the edges of the Kinect's field of view. I would either completely ignore these data or replace them with an average of the previous 2 and next 2 values.
For example:
1,2,1,12,1,2,3
Replace 12 with (2 + 1 + 1 + 2) / 4 = 1.5
You can basically do this with the whole array of values; this way you get a more normalised line/graph.
You can also use the clippedEdges value to determine if one or more joints are outside the view.
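A small Python sketch of that replacement rule, assuming a simple distance-from-local-mean test to decide which samples count as glitches (the threshold value is an illustrative assumption, not part of the answer):

import numpy as np

def smooth_spikes(values, threshold=5.0):
    # Replace outliers with the mean of the two previous and two next values.
    out = np.asarray(values, dtype=float).copy()
    for i in range(2, len(out) - 2):
        neighbors = np.concatenate([out[i - 2:i], out[i + 1:i + 3]])
        local_mean = neighbors.mean()
        if abs(out[i] - local_mean) > threshold:  # treat as a tracking glitch
            out[i] = local_mean
    return out

print(smooth_spikes([1, 2, 1, 12, 1, 2, 3]))  # 12 -> (2+1+1+2)/4 = 1.5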