VTK subject of expertise - mesh

I have a simple question/problem. I have 20 points and I want to create a mesh from these points. Question 1) How should I keep these points in memory? Question 2) Which filter should I use to create the mesh?


In CGAL, can one convert a triangulation in more than three dimensions to a polytope?

If this question would be more appropriate on a related site, let me know, and I'd be happy to move it.
I have 165 vertices in ℤ¹¹, all of which are at a distance of √8 from the origin and are extreme points on their corresponding convex hull. CGAL is able to calculate their d-dimensional triangulation in only 133 minutes on my laptop using just under a gigabyte of RAM.
Magma manages a similar 66 vertex case quite quickly, and, crucially for my application, it returns an actual polytope instead of a triangulation. Thus, I can view each d-dimensional face as a single object which can be bounded by an arbitrary number of vertices.
Additionally, although less essential to my application, I can also use Graph : TorPol -> GrphUnd to calculate all the topological information regarding how those faces are connected, and then AutomorphismGroup : Grph -> GrpPerm, ... to find the corresponding automorphism group of that cell structure.
Unfortunately, when applied to the original polytope, Magma's AutomorphismGroup : TorPol -> GrpMat only returns subgroups of GL_d(ℤ), instead of the full automorphism group G, which is what I'm truly hoping to calculate. As a matrix group, G ∉ GL₁₁(ℤ), but is instead ∈ GL₁₁(𝔸), where 𝔸 represents the algebraic numbers. In general, I won't need the full algebraic closure of the rationals, ℚ̅, but just some field extension. However, I could make use of any non-trivially powerful representation of G.
With two days of calculation, Magma can manage the 165 vertex case, but is only able to provide information about the polytope's original 165 vertices, 10-facets, and volume. However, attempting to enumerate the d-faces, for any 2 ≤ d < 10, quickly consumes the 256 GB of RAM I have at my disposal.
CGAL's triangulation, on the other hand, only calculates collections of d-simplices, all of which have d + 1 vertices. It seems possible to derive the same facial information from such a triangulation, but I haven't thought of an easy way to code that up.
Am I missing something obvious in CGAL? Do you have any suggestions for alternative ways to calculate the polytope's face information, or to find the full automorphism group of my set of points?
You can use the Combinatorial maps package in CGAL, which is able to represent polytopes in nD. A combinatorial map describes all cells and all incidence and adjacency relations between the cells.
In this package, there is an undocumented method are_cc_isomorphic that allows you to test whether an isomorphism exists from two starting points. I think you can use this method for all possible pairs of starting points to find all automorphisms.
Unfortunately, there is no method to build a combinatorial map from a dD triangulation. Such a method exists in 3D (cf. this file); it could be extended to dD.

VTK / ITK Dice Similarity Coefficient on Meshes

I am new to VTK and am trying to compute the Dice Similarity Coefficient (DSC), starting from 2 meshes.
DSC can be computed as 2·Vab / (Va + Vb), where Vab is the overlapping volume between mesh A and mesh B, and Va and Vb are the volumes of the two meshes.
To read a mesh (i.e. an organ contour exported in .vtk format using 3D Slicer, https://www.slicer.org) I use the following snippet:
#include <string>
#include <vtkGenericDataObjectReader.h>
#include <vtkPolyData.h>
#include <vtkSmartPointer.h>

std::string inputFilename1 = "organ1.vtk";
// Get all data from the file
vtkSmartPointer<vtkGenericDataObjectReader> reader1 = vtkSmartPointer<vtkGenericDataObjectReader>::New();
reader1->SetFileName(inputFilename1.c_str());
reader1->Update();
vtkSmartPointer<vtkPolyData> struct1 = reader1->GetPolyDataOutput();
I can compute the volume of the two meshes using vtkMassProperties (although I observed some differences between the ones computed with VTK and the ones computed with 3D Slicer).
To then intersect the 2 meshes, I am trying to use vtkIntersectionPolyDataFilter. The output of this filter, however, is a set of lines that marks the intersection of the input vtkPolyData objects, and NOT a closed surface. I therefore need to somehow generate a mesh from these lines and compute its volume.
Do you know what a good, accurate way to generate such a mesh would be, and how to do it?
Alternatively, I tried to use ITK as well. I found a package that is supposed to handle this problem (http://www.insight-journal.org/browse/publication/762, dated 2010) but I am not able to compile it against the latest version of ITK. It says that ITK must be compiled with the (now deprecated) ITK_USE_REVIEW flag ON. Needless to say, I compiled it with the new Module_ITKReview set to ON and also with backward compatibility but had no luck.
Finally, if you have any other alternative (scriptable) software/library to solve this problem, please let me know. I need to perform these computations automatically.
You could try vtkBooleanOperationPolyDataFilter
http://www.vtk.org/doc/nightly/html/classvtkBooleanOperationPolyDataFilter.html
filter->SetOperationToIntersection();
If your data is smooth and well-behaved, this filter works pretty well. However, sharp structures, e.g. the ones originating from a binary-image marching cubes algorithm, can cause problems for it. That said, vtkPolyDataToImageStencil doesn't necessarily perform any better in this regard.
I once had the impression that boolean operations on polygons are not really ideal for "organs" of 100k polygons and more. It depends.
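As a rough sketch of that route, assuming both meshes are clean, closed surfaces, here is what it could look like with VTK's Python wrapping (the same classes exist in C++; the file names are placeholders), combining the boolean intersection with vtkMassProperties to get the volumes:

import vtk

def load_mesh(filename):
    reader = vtk.vtkGenericDataObjectReader()
    reader.SetFileName(filename)
    reader.Update()
    return reader.GetPolyDataOutput()

def closed_volume(output_port):
    # vtkMassProperties expects a closed, triangulated surface
    mass = vtk.vtkMassProperties()
    mass.SetInputConnection(output_port)
    mass.Update()
    return mass.GetVolume()

a = load_mesh("organ1.vtk")   # placeholder file names
b = load_mesh("organ2.vtk")

# The boolean filter wants clean, triangulated input
tri_a = vtk.vtkTriangleFilter()
tri_a.SetInputData(a)
tri_b = vtk.vtkTriangleFilter()
tri_b.SetInputData(b)

intersection = vtk.vtkBooleanOperationPolyDataFilter()
intersection.SetOperationToIntersection()
intersection.SetInputConnection(0, tri_a.GetOutputPort())
intersection.SetInputConnection(1, tri_b.GetOutputPort())
intersection.Update()

v_ab = closed_volume(intersection.GetOutputPort())
v_a = closed_volume(tri_a.GetOutputPort())
v_b = closed_volume(tri_b.GetOutputPort())
print("DSC =", 2.0 * v_ab / (v_a + v_b))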
If you want to compute a Dice Similarity Coefficient, I suggest you first generate volumes (rasterize) from the meshes by use of vtkPolyDataToImageStencil.
Then it's easy to compute the DSC.
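A minimal sketch of that rasterization approach, again with VTK's Python wrapping (the voxel spacing and file names are placeholder values, and both meshes must be rasterized onto the same grid so that voxels correspond):

import vtk
from vtk.util import numpy_support

def load_mesh(filename):
    reader = vtk.vtkGenericDataObjectReader()
    reader.SetFileName(filename)
    reader.Update()
    return reader.GetPolyDataOutput()

def rasterize(polydata, bounds, spacing=(0.5, 0.5, 0.5)):
    # Blank image covering 'bounds', filled with 1s
    dims = [int((bounds[2 * i + 1] - bounds[2 * i]) / spacing[i]) + 1 for i in range(3)]
    image = vtk.vtkImageData()
    image.SetOrigin(bounds[0], bounds[2], bounds[4])
    image.SetSpacing(spacing)
    image.SetDimensions(dims)
    image.AllocateScalars(vtk.VTK_UNSIGNED_CHAR, 1)
    numpy_support.vtk_to_numpy(image.GetPointData().GetScalars())[:] = 1

    # Turn the mesh into a stencil and keep the 1s only inside it
    stencil = vtk.vtkPolyDataToImageStencil()
    stencil.SetInputData(polydata)
    stencil.SetOutputOrigin(image.GetOrigin())
    stencil.SetOutputSpacing(image.GetSpacing())
    stencil.SetOutputWholeExtent(image.GetExtent())
    stencil.Update()

    fill = vtk.vtkImageStencil()
    fill.SetInputData(image)
    fill.SetStencilConnection(stencil.GetOutputPort())
    fill.ReverseStencilOff()
    fill.SetBackgroundValue(0)
    fill.Update()
    return numpy_support.vtk_to_numpy(fill.GetOutput().GetPointData().GetScalars())

a = load_mesh("organ1.vtk")   # placeholder file names
b = load_mesh("organ2.vtk")

# One common grid for both meshes so voxel i means the same position in space
ba, bb = a.GetBounds(), b.GetBounds()
bounds = [min(ba[0], bb[0]), max(ba[1], bb[1]),
          min(ba[2], bb[2]), max(ba[3], bb[3]),
          min(ba[4], bb[4]), max(ba[5], bb[5])]

va = rasterize(a, bounds)
vb = rasterize(b, bounds)
overlap = int((va & vb).sum())             # voxels inside both meshes
dsc = 2.0 * overlap / (int(va.sum()) + int(vb.sum()))
print("DSC =", dsc)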
Good luck :)

Solidworks Feature Recognition on a fill pattern/linear pattern

I am currently creating a feature and patterning it across a flat plane to fit the maximum number of features on the plane. I do this frequently enough to warrant building some sort of macro for it if possible. The issue I run into is that I still have to manually set the spacing between the parts. I want to be able to create a feature and have the macro determine the "best" fit spacing for a given area while avoiding overlaps. I have had very little luck finding any resources describing this. Any information or links to potentially helpful resources would be much appreciated!
Thank you.
Before you start the linear pattern bit:
Select the Face2 of that feature and get the outermost Loop2 of edges. You can test for that using Loop2.IsOuter.
Now:
if the loop has one edge: that means it's a circle, and the spacing must be greater than the circle's radius
if the loop has more than one edge: you need to calculate all the distances between the vertices and assume that the largest distance is the safest spacing (see the sketch below)
NOTE: If one of the edges is a spline, then you need a different strategy:
You would need to convert the face into a sketch and find the coordinates of that spline to calculate the largest distances.
Example: the distance between the edges is smaller than the distance between the summits of the splines. If the linear pattern runs in the vertical direction, then the spacing has to be greater than the distance between the summits.
When I say distance, I mean the distance projected onto the linear pattern direction.
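To illustrate the distance rule only (not the SolidWorks API), here is a small Python sketch; the vertex coordinates and the pattern direction are made-up values, and the same arithmetic can be done in a VBA macro with the points returned by the loop's vertices:

import math

# Hypothetical outer-loop vertices of the feature's face (x, y, z)
vertices = [(0.0, 0.0, 0.0), (25.0, 0.0, 0.0), (25.0, 10.0, 0.0), (0.0, 10.0, 0.0)]

# Linear pattern direction (here the x axis), normalised
direction = (1.0, 0.0, 0.0)
norm = math.sqrt(sum(c * c for c in direction))
direction = tuple(c / norm for c in direction)

def projected_distance(a, b, d):
    # Distance between two vertices projected onto the pattern direction
    return abs(sum((bi - ai) * di for ai, bi, di in zip(a, b, d)))

# Safest spacing: the largest projected distance between any two vertices
spacing = max(projected_distance(a, b, direction)
              for i, a in enumerate(vertices)
              for b in vertices[i + 1:])
print(spacing)  # 25.0 for these made-up vertices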

How to create a synthetic dataset

I want to run some Machine Learning clustering algorithms on some big data.
The problem is that I'm having trouble finding interesting data for this purpose on the web. Also, such data is often inconvenient to use because its format doesn't fit my needs.
I need a txt file in which each line represents a mathematical vector, with elements separated by spaces, for example:
1 2.2 3.1
1.12 0.13 4.46
1 2 54.44
Therefore, I decided to first run those algorithms on some synthetic data which I'll create myself. How can I do this in a smart way with numpy?
By a smart way, I mean that it shouldn't be generated uniformly, because that's a little bit boring. How can I generate some interesting clusters?
I want to have 5GB / 10GB of data at the moment.
You need to define what you mean by "clusters", but I think what you are asking for is several random-parameter normal distributions combined together, for each of your coordinate values.
From http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.random.randn.html#numpy.random.randn:
For random samples from N(μ, σ²), use:
sigma * np.random.randn(...) + mu
And use <range> * np.random.rand(<howmany>) for each of sigma and mu.
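For example, a small numpy sketch along these lines (the cluster count, dimensionality, number of points, and file name are arbitrary choices; scale points_per_cluster up, or write in chunks, to reach several GB):

import numpy as np

n_clusters = 5            # arbitrary choices
dim = 3
points_per_cluster = 100000

chunks = []
for _ in range(n_clusters):
    mu = 100 * np.random.rand(dim)        # random cluster centre in [0, 100)
    sigma = 1 + 4 * np.random.rand(dim)   # random per-axis spread in [1, 5)
    chunks.append(sigma * np.random.randn(points_per_cluster, dim) + mu)

data = np.vstack(chunks)
np.random.shuffle(data)                   # mix the clusters together
np.savetxt("clusters.txt", data, fmt="%.4f", delimiter=" ")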
There is no one good answer to such a question. What is interesting? For clustering, unfortunately, there is no such thing as an interesting or even well-posed problem. Clustering as such has no well-defined evaluation; consequently, each method is equally good/bad, as long as it has a well-defined internal objective. So k-means will always be a good one to minimize within-cluster Euclidean distance and will struggle with sparse data and non-convex, imbalanced clusters. DBSCAN will always be the best in a greedy, density-based sense and will struggle with clusters of varying density. GMM will always be great at fitting Gaussian mixtures, and will struggle with clusters that are not Gaussian (for example lines, squares, etc.).
From the question one could deduce that you are at the very beginning of your work with clustering and so need "just anything more complex than uniform", so I suggest you take a look at dataset generators, in particular those accessible in scikit-learn (Python) http://scikit-learn.org/stable/datasets/ or in clusterSim (R) http://www.inside-r.org/packages/cran/clusterSim/docs/cluster.Gen or clusterGeneration (R) https://cran.r-project.org/web/packages/clusterGeneration/clusterGeneration.pdf
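As a concrete illustration of the scikit-learn route, a minimal sketch using its make_blobs generator (the sizes and standard deviations here are arbitrary; again, scale n_samples up or generate in chunks for multi-gigabyte files):

import numpy as np
from sklearn.datasets import make_blobs

# 5 clusters in 3 dimensions with different spreads; all values are arbitrary
X, _ = make_blobs(n_samples=500000, n_features=3, centers=5,
                  cluster_std=[0.5, 1.0, 2.0, 0.7, 1.5])
np.savetxt("blobs.txt", X, fmt="%.4f", delimiter=" ")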

Monte Carlo Integration

Does anyone have any ideas on how to implement a Monte Carlo integration simulator in VB.NET?
I have looked around the internet with no luck.
Any code, or ideas as to how to start it would be of help.
Well, I guess we are talking about a 2-dimensional problem. I assume you have a polygon whose area you want to calculate.
1) First you need a function to check if a point is inside the polygon.
2) Now you define an area with a known size around the polygon.
3) Now you need random points inside your known area, some of them will be in your polygon, some will be outside, count them!
4) Now you have two ratios: first, the ratio of all points to the points inside your polygon; second, the ratio of the area around your polygon, which you know, to the area of the polygon, which you don't know.
5) These two ratios are the same --> you can calculate the area of your polygon! (Area of polygon ≈ points inside your polygon / all your points * size of the known area)
Example: 3 points hit the polygon out of 20 points "shot"; with a known area of 4 m², the estimated area of the polygon is 3/20 * 4 m² = 0.6 m².
NOTE: This area is only an approximation! The more points you use, the better the approximation gets.
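A minimal sketch of this hit-or-miss estimate (in Python rather than VB.NET for brevity; the "polygon" here is just a unit disc so the inside test stays trivial, so replace it with a real point-in-polygon test):

import random

def inside(x, y):
    # Hypothetical "polygon": a disc of radius 1 centred at the origin,
    # so the inside test is trivial; replace with a point-in-polygon test.
    return x * x + y * y <= 1.0

# Known bounding area: the square [-1, 1] x [-1, 1], area 4
n_shots = 100000
hits = 0
for _ in range(n_shots):
    x = random.uniform(-1.0, 1.0)
    y = random.uniform(-1.0, 1.0)
    if inside(x, y):
        hits += 1

area = hits / n_shots * 4.0   # hits / all shots * known area
print(area)                    # approaches pi ~ 3.14159 as n_shots grows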
You can of course implement a fancy method to display this in your VB program. Was this what you needed? Is my assumption about the polygon correct? Do you need help with the "point inside polygon" algorithm?
There is nothing specific to VB.net with this problem, except maybe for the choice of a random number generator from the library.
Numerically solving integrals of a function f(x_1,...,x_n) by sampling on a regular grid can become infeasible (in acceptable time) for high dimensions n, because the number of sample points needed for a given sampling distance grows exponentially with the dimension of the problem. The fundamental idea of Monte Carlo integration is to replace the uniform sampling of the variables x_1,...,x_n with random sampling, taking n random numbers per sample. From these samples you estimate the integral; the more samples, the better the estimate. And the major benefit of MC integration is that you can use standard statistical methods to estimate the error of your result.
So, how to start: Implement integration by uniform sampling of the integration space, then go to random sampling and add error estimation.
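As a starting point for the general case, here is a small Python sketch (the integrand, the rectangular integration box, and the sample count are arbitrary examples) that estimates the integral by random sampling and reports a standard-error estimate:

import math
import random

def mc_integrate(f, lower, upper, n_samples=100000):
    # Estimate the integral of f over the box [lower_i, upper_i] by averaging
    # f at uniformly random points; the standard error comes from the sample
    # variance (a sketch, assuming a rectangular integration domain).
    volume = 1.0
    for lo, hi in zip(lower, upper):
        volume *= (hi - lo)

    values = []
    for _ in range(n_samples):
        point = [random.uniform(lo, hi) for lo, hi in zip(lower, upper)]
        values.append(f(point))

    mean = sum(values) / n_samples
    variance = sum((v - mean) ** 2 for v in values) / (n_samples - 1)
    estimate = volume * mean
    std_error = volume * math.sqrt(variance / n_samples)
    return estimate, std_error

# Example: integrate f(x, y) = x*y over [0, 1] x [0, 2]; the exact value is 1
est, err = mc_integrate(lambda p: p[0] * p[1], [0.0, 0.0], [1.0, 2.0])
print(est, "+/-", err)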