How to make Mean curvature flow skeleton deterministic after every run? - cgal

When I run mean curvature flow skeletonization from CGAL, the resulting skeleton is always a bit different. I was wondering if there is a way to make the skeleton identical after each run (with fixed skeleton parameters, of course). Can someone point me to the piece of code where this happens, so that I can fix this issue?
I saw someone post a similar question earlier, but the answer did not say where exactly in CGAL this problem originates. Here's a link to that discussion.

Related

Information about CGAL and alternatives

I'm working on a problem that will eventually run on an embedded microcontroller (ESP8266). I need to perform some fairly simple operations on linear equations. I don't need much, but I do need to be able to work with points and linear equations to:
Define an equation for a line either from two known points, or from one point and a gradient
Calculate a new (x, y) point on a line that is a specific distance from another point on that line
Drop a perpendicular from a point onto a line
Perform variations of cosine-rule calculations on points and triangle sides defined as equations
I roughed up some code for this a while ago based on high-school "y = mx + c" concepts, but it's flawed (it fails with infinities when lines are vertical), and it's currently in Scala. Since I suspect I'm reinventing a wheel, and that's not my primary goal, I'd like to use someone else's work for this!
I've come across CGAL, and it seems very likely it's capable of all this and more, but I have two questions about it (given that it seems to take ages to understand a huge library like this well enough to answer even simple questions!):
It seems to assert some kind of mathematical perfection in its calculations, but that's not important to me, and my system will be severely memory constrained. Does it use or offer memory-efficient approximations?
Is it possible (and hopefully easy) to separate out just a limited subset of features, or am I going to find the entire library (or even a very large subset) heading into my memory-limited machine?
And, I suppose the inevitable follow up: are there more suitable libraries I'm unaware of?
TIA!
The problems you are mentioning sound fairly simple indeed, so I'm wondering if you really need any library at all. Maybe if you post your original code we could help you fix it; your problem sounds like you need to redo a calculation to avoid a division by zero.
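For instance, here is a minimal hand-rolled sketch (no library; the `Pt`/`Line` names are made up for illustration) that stores lines in the general form a*x + b*y + c = 0 instead of y = mx + c, so vertical lines need no special case:

```cpp
#include <cmath>

struct Pt { double x, y; };

// A line stored as a*x + b*y + c = 0 with (a, b) normalized to unit
// length; a vertical line is simply (a, b) = (1, 0), no infinities.
struct Line {
    double a, b, c;

    // Line through two distinct points p and q.
    static Line through(Pt p, Pt q) {
        double a = p.y - q.y;                 // normal (a, b) is perpendicular
        double b = q.x - p.x;                 // to the direction q - p
        double n = std::sqrt(a * a + b * b);  // nonzero since p != q
        return { a / n, b / n, -(a * p.x + b * p.y) / n };
    }

    // Line through p with the gradient given as a direction vector
    // (dx, dy); a vertical line is just (0, 1) rather than slope infinity.
    static Line withDirection(Pt p, double dx, double dy) {
        return through(p, { p.x + dx, p.y + dy });
    }

    // Signed distance from p to the line (magnitude = usual distance).
    double dist(Pt p) const { return a * p.x + b * p.y + c; }

    // Foot of the perpendicular dropped from p onto the line.
    Pt foot(Pt p) const {
        double d = dist(p);
        return { p.x - d * a, p.y - d * b };
    }

    // Point at signed distance t from p along the line's unit direction
    // (b, -a); p is assumed to lie on the line.
    Pt at(Pt p, double t) const { return { p.x + t * b, p.y - t * a }; }
};
```

The cosine-rule style calculations then reduce to dot products of the unit direction vectors (b, -a) of the lines involved, again with no special cases.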
As for your point (2) about separating a limited number of features from CGAL: given the size and the coding style of that project, in my experience that will be significantly more complicated (if possible at all) than fixing your own code.
In case you want to try a simpler library than CGAL, maybe you could try Boost.Geometry.
Regards,

ANSYS Meshing Issue - How To Mesh Complicated Geometry (~80,000 Faces)?

I am attempting to mesh a complicated design (~80,000 faces) for a microchannel heat sink, as pictured, and I would appreciate some advice. I have tried a range of different mesh controls (especially face sizing and body sizing), mesh settings and element sizes, and all have failed to produce a working mesh. The most common errors are shown in the linked picture, in particular the one regarding "The following surfaces cannot be meshed with acceptable quality. Try using a different element size or virtual topology." However, I have already reduced the element size to 2x10^-6 m, which takes two days to resolve before failure.
Unfortunately I cannot alter the geometry significantly, as it is imported from generation in SolidWorks as either a STEP or an x.t file. As such, any advice on how I can successfully mesh the geometry for CFD analysis in FLUENT would be greatly appreciated.
I can provide more details or the geometry file itself if required.
Thanks in advance.
Meshing Attempt
Your CAD design is probably not clean, but it is impossible to tell from this image. If you don't have control over the geometry source, that is trouble, because you would have to ask somebody else to check and fix it. The first check you can do with your model is to reduce the number of elements to the minimum possible value; if the mesh then runs properly, you can rely on the surfaces of your CAD model. After that you can refine the mesh, but the refinement should follow some error criterion. If you are also the designer, why not simplify the geometry a bit if you consider it really hard to mesh? Meshing properly is a hard task; go step by step until you reach a solution. Also, don't let the preprocessor mesh automatically without giving it some criteria. Probably the first thing to answer, even before applying any mesh, is: what is your Reynolds number, and what is the most valuable result against which you can judge the quality of your discretization?
Thank you for your suggestions. In the end I solved the issue by importing the original mesh generated by COMSOL into SpaceClaim, then employing both the "Smooth" and "Reduce Faces" tools in tandem to simplify the geometry, before finally using SolidWorks to turn the smoothed mesh into a solid body. This body retained many of the same features as the original, but was much less complex, having two orders of magnitude fewer faces. In turn, this permitted both meshing and heat-transfer analysis in FLUENT.

Triangulated Surface Mesh Skeletonization Package - How to keep the skeleton inside the surface mesh?

I tried executing the example in http://doc.cgal.org/latest/Surface_mesh_skeletonization/index.html to get the skeleton of a surface mesh.
I tried using a mesh model of blood vessels with thin structures. However, no matter how refined my meshes are, parts of the skeleton always seem to lie outside the mesh model.
In the sample code there seem to be no parameters I can play around with, so I am asking if there is anything I can do to make sure the skeleton stays within the mesh model.
I have tried refining the meshes until the program crashes. Thanks for any help!
I guess you have used the free function with all parameters set to their defaults. If you want to tune the parameters, you need to use the class Mean_curvature_flow_skeletonization.
It has three parameters that need to be fine-tuned so that your skeleton lies within the mesh (a sketch using them follows the list):
quality_speed_tradeoff
medially_centered_speed_tradeoff
is_medially_centered
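Roughly, using the class directly looks like this (a sketch based on the package documentation; the input file name and the parameter values are placeholders you will need to tune for your vessels):

```cpp
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Polyhedron_3.h>
#include <CGAL/Mean_curvature_flow_skeletonization.h>
#include <fstream>

typedef CGAL::Exact_predicates_inexact_constructions_kernel   Kernel;
typedef CGAL::Polyhedron_3<Kernel>                            Polyhedron;
typedef CGAL::Mean_curvature_flow_skeletonization<Polyhedron> Skeletonization;
typedef Skeletonization::Skeleton                             Skeleton;

int main() {
  Polyhedron mesh;
  std::ifstream input("vessels.off");  // placeholder file name (OFF format)
  input >> mesh;

  Skeletonization mcs(mesh);
  // The three knobs named above; these values are only a starting point.
  mcs.set_is_medially_centered(true);
  mcs.set_medially_centered_speed_tradeoff(0.2);
  mcs.set_quality_speed_tradeoff(0.1);

  mcs.contract_until_convergence();  // run the mean curvature flow
  Skeleton skeleton;                 // a boost graph of skeleton curves
  mcs.convert_to_skeleton(skeleton);
  return 0;
}
```

Increasing medially_centered_speed_tradeoff should pull the skeleton closer to the medial axis (at some cost in speed), which is usually what keeps it inside thin structures.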
Note that the polyhedron demo includes a plugin where you can try the effect of the different parameters.
If you can share the mesh with me, I can also have a look.

Synopsys: Repeated compiles produce different results. How to automate iterated compile?

I'm new to using Design Compiler. In the past, I've done mostly FPGA work. Right now, I'm using Synopsys to determine the minimum area necessary to represent some circuits (using the Nangate 45nm library). I'm not doing P&R right now; I'm just trying to determine transistor area.
My only optimization constraint is to minimize area. I've noticed that if I tell DC to compile more than one time in a row, it produces different (and usually smaller) results each time.
I've looked and looked but failed to find this mentioned in a manual or in any discussion. Is it meant to work this way?
This suggests that optimization is stopping earlier than it could, so it's not REALLY minimizing area. Any idea why?
Is there a way I can tell it to increase the effort and/or tell it to automatically iterate compiles so that it will converge on the smallest design?
I'm guessing that DC is expecting to meet timing constraints, but I've given it a purely combinational block and no timing constraint. Did they never consider the usage scenario where all you want is the minimum gate area for a combinational circuit?
On a purely combinational circuit you can use a set_max_delay constraint and DC will attempt to meet it.
For reduced area you can use -map_effort high or -map_effort ultra to get it to work harder.
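For what it's worth, a minimal dc_shell fragment along those lines (the delay value is an arbitrary placeholder; set_max_area 0 is the usual way to ask for minimum area):

```tcl
set_max_area 0                        ;# optimize for the smallest area
set_max_delay 5.0 -to [all_outputs]   ;# loose timing target for a combinational block
compile -map_effort high
report_area
```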
DC is a funny beast, and the algorithms it uses change as processes advance and make certain activities more or less useful. A lot of pre-layout optimization is less useful since the whole situation can change once the gates are actually placed and routed.
I filed a support ticket with Synopsys. I was using a 2010 version of Design Compiler. Apparently, area optimization has been improved since then, and the 2014 version will minimize area in one compile pass.

Machine code alignment

I am trying to understand the principles of machine code alignment. I have an assembler implementation which can generate machine code at run time. I use 16-byte alignment on every branch destination, but it looks like this is not the optimal choice, since I've noticed that if I remove the alignment, the same code sometimes runs faster. I think it has something to do with the cache line width: some instructions get split across a cache line boundary, and the CPU stalls because of that. So if a few alignment bytes are inserted in one place, they push instructions further along, past the cache line boundary...
I was hoping to implement an automatic alignment procedure which can process the code as a whole and insert alignment according to the CPU's specification (cache line width, 32/64-bit mode and so on)...
Can someone give some hints about this procedure? As an example, the target could be a 64-bit Intel Core i7.
Thank you.
I'm not qualified to answer your question because this is such a vast and complicated topic. There are probably many more mechanisms in play here than just cache line size.
However, I would like to point you to Agner Fog's site and the optimization manuals for compiler makers that you can find there. They contain a plethora of information on these kinds of subjects: cache lines, branch prediction and data/code alignment.
Paragraph (16-byte) alignment is usually the best. However, it can force some "local" JMP instructions to no longer be local (due to code size bloat). It may also result in less code being cached. I would only align major segments of code; I would not align every tiny subroutine/JMP section.
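As a sketch of the padding step (assuming x86 and a hand-rolled run-time code buffer; the function name is made up):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Pad the buffer with single-byte NOPs (0x90 on x86) so that the next
// emitted instruction, e.g. a branch target or loop head, starts on the
// given boundary: 16 for paragraph alignment, 64 for a full cache line
// on Core i7.
void align_buffer(std::vector<std::uint8_t>& code, std::size_t boundary) {
    std::size_t pad = (boundary - code.size() % boundary) % boundary;
    code.insert(code.end(), pad, 0x90);
    // A real assembler would prefer the multi-byte NOP encodings
    // recommended in Intel's optimization manual over a run of
    // single-byte NOPs, since one long NOP decodes as one instruction.
}
```

In line with the advice above, you would call this only before major targets (function entries, loop heads), ideally only when the target's first instructions would otherwise straddle a cache line boundary.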
Not an expert, however... Branches to places that are not going to be in the instruction cache should benefit from alignment the most, because you'll read a whole cache line of instructions to fill the pipeline. Given that, forward branches will benefit on the first run of a function. Backward branches ("for" and "while" loops, for example) will probably not benefit, because the branch target and the following instructions have already been read into cache. Do follow the links in Martin's answer.
As mentioned previously, this is a very complex area. Agner Fog seems like a good place to visit. As to the complexities: I ran across Torbjörn Granlund's article "Improved Division by Invariant Integers", and in the code he uses to illustrate his new algorithm, the first instruction at (I guess) the main label is nop, a no-operation. According to the commentary it improves performance significantly. Go figure.