How can I input a valid triangulation in the right format through the function file_input()? - cgal

I want to use Triangulation_3 with my own data, including vertices and cells. So I have to initialize a Triangulation_3 through the function file_input().
My question is: how can I use this function in the right way?
A short introduction to the function can be found here: https://doc.cgal.org/latest/Triangulation_3/group__PkgIOTriangulation3.html#gabb84b5cde2cbb8c580790c10f3f0ddbb
As described in the user manual:
"A triangulation is a collection of vertices and cells that are linked together through incidence and adjacency relations. Each cell gives access to its four incident vertices and to its four adjacent cells. Each vertex gives access to one of its incident cells."
I think the hard part is the input of the four adjacent cells of each cell.
In brief, I would appreciate a demo showing how to input a Triangulation_3 the right way.
Thank you!
The description of the function file_input():
The information in the iostream is: the dimension, the number of finite vertices, the non-combinatorial information about vertices (point, etc; note that the infinite vertex is numbered 0), the number of cells, the indices of the vertices of each cell, plus the non-combinatorial information about each cell, then the indices of the neighbors of each cell, where the index corresponds to the preceding list of cells.
When dimension < 3, the same information is stored for faces of maximal dimension instead of cells.
istream & CGAL::Triangulation_3< Traits, TDS, SLDS >::operator>> (istream &is, Triangulation_3 &t)
Reads the underlying combinatorial triangulation from is by calling the corresponding input operator of the triangulation data structure class (note that the infinite vertex is numbered 0), and the non-combinatorial information by calling the corresponding input operators of the vertex and the cell classes (such as point coordinates), which are provided by overloading the stream operators of the vertex and cell types.
ostream & CGAL::Triangulation_3< Traits, TDS, SLDS >::operator<< (ostream &os, const Triangulation_3 &t)
Writes the triangulation t into os.
template<typename Tr_src , typename ConvertVertex , typename ConvertCell >
std::istream & CGAL::Triangulation_3< Traits, TDS, SLDS >::file_input (std::istream &is, ConvertVertex convert_vertex=ConvertVertex(), ConvertCell convert_cell=ConvertCell())
The triangulation streamed in is, of original type Tr_src, is written into the triangulation.
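One practical way to see the exact stream format is to build a small triangulation in code, write it out with operator<<, and inspect the resulting file; the same stream can then be read back with operator>>. Here is a minimal sketch, assuming the Exact_predicates_inexact_constructions_kernel and using tri.cgal as a placeholder file name:
// Build a tiny Triangulation_3, stream it out, and read it back in.
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Triangulation_3.h>
#include <fstream>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Triangulation_3<K>                            Triangulation;
typedef Triangulation::Point                                Point;

int main() {
    Triangulation T;
    T.insert(Point(0, 0, 0));
    T.insert(Point(1, 0, 0));
    T.insert(Point(0, 1, 0));
    T.insert(Point(0, 0, 1));

    // operator<< writes the format described above: the dimension, the
    // number of finite vertices, the vertex data, the vertex indices of
    // each cell, and then the neighbor indices of each cell.
    std::ofstream out("tri.cgal");
    out << T;
    out.close();

    // operator>> reads the same format back (the infinite vertex is 0).
    Triangulation T2;
    std::ifstream in("tri.cgal");
    in >> T2;

    return T2.is_valid() ? 0 : 1;
}
Writing a file this way and diffing it against your own hand-built file is a practical way to debug the adjacency section.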

Related

Show 2D celldata fields in 3D domains with Paraview

I have a .vtu file composed of tetrahedral elements and triangular elements (the latter located on an outer surface). I also have a celldata field (for example, nrc1) defined on the triangular elements and zero on the tetrahedral ones. When I select this field for plotting in Paraview, I only see a zero field, corresponding to the 3D elements, but no trace of the field on the 2D elements.
Is there a way to show that 2D field in Paraview?
P.S.: I cannot interpolate the 2D celldata field into a pointdata one, since part of the information (discontinuities, ...) would be lost.
There is indeed a conflict between the information on the 3D cells (zeroes) and the information on the 2D cells (the actual data) where the 2D cells and the 3D cells overlap.
Even though your dataset is valid, mixed-dimension datasets are not easy to manage correctly, hence your issue.
In any case, you should extract your 2D cells to be able to visualize your data correctly. Here is how I would do it:
Create a new view and click on Spreadsheet View
Show your dataset in the spreadsheet view
Order by CellType
Manually select all 2D cell types, as they will be located together
Add an Extract Selection filter and Apply
You can now visualize your data on this 2D-cells-only dataset
You could also use Edit -> Find Data and select by ID, since your cells seem to be ordered.
Finally, you could write a small Python Programmable Filter to do all of that for you completely automatically, but it is not very easy to implement; a standalone equivalent is sketched below.
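As a rough standalone illustration of the same extraction (outside the ParaView GUI), here is a C++ sketch using VTK, the library ParaView is built on; the file names mesh.vtu and surface_cells.vtu are placeholders:
// Read a .vtu file, collect the ids of all 2D cells (triangles, quads,
// ...), and extract only those cells into a new dataset.
#include <vtkCell.h>
#include <vtkExtractCells.h>
#include <vtkIdList.h>
#include <vtkSmartPointer.h>
#include <vtkUnstructuredGrid.h>
#include <vtkXMLUnstructuredGridReader.h>
#include <vtkXMLUnstructuredGridWriter.h>

int main() {
    auto reader = vtkSmartPointer<vtkXMLUnstructuredGridReader>::New();
    reader->SetFileName("mesh.vtu");
    reader->Update();
    vtkUnstructuredGrid* grid = reader->GetOutput();

    // Keep only cells whose topological dimension is 2.
    auto ids = vtkSmartPointer<vtkIdList>::New();
    for (vtkIdType i = 0; i < grid->GetNumberOfCells(); ++i)
        if (grid->GetCell(i)->GetCellDimension() == 2)
            ids->InsertNextId(i);

    auto extract = vtkSmartPointer<vtkExtractCells>::New();
    extract->SetInputData(grid);
    extract->SetCellList(ids);
    extract->Update();

    // The extracted dataset keeps the original celldata, so the 2D
    // field can now be visualized on its own.
    auto writer = vtkSmartPointer<vtkXMLUnstructuredGridWriter>::New();
    writer->SetFileName("surface_cells.vtu");
    writer->SetInputData(extract->GetOutput());
    writer->Write();
    return 0;
}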

Isosurface tracking in high dimensions

How can I trace an isosurface in a higher-dimensional space efficiently?
You have a scalar cost function in N dimensions,
f(y1, y2, .., yN) ∊ ℝ, with each yk ∊ ℝ,
but sampled only on a regular rectangular grid,
yk = Ψk + ψk xk, with constants Ψk ∊ ℝ and ψk ∊ ℝ, and grid coordinates xk ∊ ℕ,
and the problem is to locate the isosurface(s) i,
f(y1, y2, .., yN) = Ci.
The direct approach would be to just loop over each cell in the grid, and check if the current isosurface intersects the current cell, and if so, describe the part of the isosurface within the current cell. (Marching Cubes is one approach to describing how the isosurface intersects each grid cell.)
The restriction here is to use a neighborhood based search instead of examining every single cell.
OP had a previous question specifically for the 3D case, to which I posted a link to example code, grid.h and grid.c (at Pastebin.com, because they were too long to include inline).
That implementation is completely different from OP's slicing method. Mine is a direct, simple walk over the grid cells intersecting the current isosurface. It caches the grid samples, and uses a separate map (one char per grid cell) to keep track of which grid cells have been cached, walked, and/or pushed to a stack to be walked later. This approach is easily extended to more than three dimensions. Although the code is written for exactly three dimensions, the approach itself is not specific to three dimensions at all; all you need to do is adjust the data structures to accommodate any (sensible) number of dimensions.
The isosurface walk itself is trivial. You start from any grid cell the isosurface intersects, then examine all 2N nearest-neighbor cells to see if the isosurface intersects those too. In practice, you use a stack of grid cell locations to be examined, and a map of grid cell flags to avoid re-examining already-examined grid cells; a sketch follows.
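For illustration, here is a minimal C++ sketch of that walk (not the grid.c code mentioned above), written for 3D for concreteness; cellIntersects and emitCell are hypothetical placeholders for the per-cell intersection test and whatever per-cell output you produce:
#include <array>
#include <functional>
#include <stack>
#include <vector>

using Cell = std::array<int, 3>;  // integer grid coordinates of a cell

void walkIsosurface(Cell start, const Cell& gridSize,
                    const std::function<bool(const Cell&)>& cellIntersects,
                    const std::function<void(const Cell&)>& emitCell) {
    // One flag per cell: 0 = unseen, 1 = pushed or already examined.
    std::vector<char> seen((size_t)gridSize[0] * gridSize[1] * gridSize[2], 0);
    auto index = [&](const Cell& c) {
        return ((size_t)c[2] * gridSize[1] + c[1]) * gridSize[0] + c[0];
    };

    std::stack<Cell> todo;
    todo.push(start);
    seen[index(start)] = 1;

    while (!todo.empty()) {
        Cell c = todo.top();
        todo.pop();
        if (!cellIntersects(c))
            continue;                      // isosurface does not cross this cell
        emitCell(c);
        // Push the 2N (= 6 in 3D) face neighbors not yet seen.
        for (int axis = 0; axis < 3; ++axis) {
            for (int dir = -1; dir <= 1; dir += 2) {
                Cell n = c;
                n[axis] += dir;
                if (n[axis] < 0 || n[axis] >= gridSize[axis])
                    continue;              // outside the grid
                if (!seen[index(n)]) {
                    seen[index(n)] = 1;
                    todo.push(n);
                }
            }
        }
    }
}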
Because the number of grid point samples per grid cell is 2^N, my example code is not optimal: a lot of nearby grid points end up being evaluated to see if the neighboring grid cells intersect the isosurface. (Instead of examining only the grid points delimiting the isosurface, grid points belonging to any grid cell surrounding the isosurface are examined.) This extra work grows exponentially as N increases.
A better approach would be to consider each of the 2N possible (N-1)-faces separately, to avoid examining cells the isosurface does not intersect at all.
In an N-dimensional regular rectangular grid, each cell is an N-dimensional cuboid, defined by the 2^N grid points at its vertices (corners). The N-cuboid cells have N(N-1)·2^(N-3) two-dimensional faces, and 2N (N-1)-dimensional faces.
To examine each (N-1)-face, you need to examine the cost function at the 2^(N-1) grid points defining that (N-1)-face. If the cost function at those points spans the isosurface value, then the isosurface intersects the (N-1)-face, and it also intersects the next grid cell in that direction.
There are two (N-1)-faces perpendicular to each axis. If the isosurface intersects the (N-1)-face closer to negative infinity, then it also intersects the next grid cell along that axis towards negative infinity; similarly for the face closer to positive infinity. Thus, the (N-1)-faces are perfect for deciding which neighboring cells should be examined, because each (N-1)-face is exactly the set of grid points the two cells share.
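A sketch of that per-face test in 3D, where sample is a hypothetical (presumably cached) lookup of the cost function at a grid point:
#include <algorithm>
#include <array>
#include <limits>

extern double sample(int x, int y, int z);  // hypothetical cached grid lookup

// Test whether the isosurface f = iso crosses the (N-1)-face of the cell
// at c perpendicular to `axis`; dir = 0 selects the face towards negative
// infinity, dir = 1 the face towards positive infinity. In 3D the face is
// defined by 2^(N-1) = 4 grid points.
bool faceIntersects(const std::array<int, 3>& c, int axis, int dir, double iso) {
    double lo = std::numeric_limits<double>::infinity();
    double hi = -lo;
    for (int a = 0; a < 2; ++a) {
        for (int b = 0; b < 2; ++b) {
            std::array<int, 3> p = c;
            p[axis] += dir;                // pick the lower or the upper face
            p[(axis + 1) % 3] += a;        // offsets along the two in-face axes
            p[(axis + 2) % 3] += b;
            double v = sample(p[0], p[1], p[2]);
            lo = std::min(lo, v);
            hi = std::max(hi, v);
        }
    }
    // The face is crossed iff the sampled values span the isosurface value.
    return lo <= iso && iso <= hi;
}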
I'm very hesitant to provide example C code, because the example code of the same approach for the 3D case does not seem to have helped anyone thus far. I fear a longer explanation with 2- and 3-dimensional example images for illustration would be needed to describe the approach in easily understandable terms; and without a firm grasp of the logic, any example code would just look like gobbledygook.
For two dimensions you are better off using a library; you can try the CONREC algorithm from Prof. Paul Bourke. It is similar to Marching Cubes.

Best way to store different large matrices in Fortran

I need to store panel information of different bodies in matrices. Each matrix will contain all the info for one body, so N bodies will lead to N matrices. However, the total number of bodies is decided by user input.
I am looking for a way to create the matrices separately, with the loop index i as part of the matrix name, so that each matrix's size can vary depending on the specific body. The idea is like:
for i = 1:N
for j = 1: ROW
for k = 1: COL
Mat_i (j,k) = panel(j,k)
end
end
end
Is it feasible in Fortran? Is there any other way to achieve a similar effect?
The index can't be part of the variable name. But you can accomplish this with a user-defined type:
type body_type
   ! One allocatable panel matrix per body; each can have its own shape.
   real, dimension (:,:), allocatable :: panel
end type body_type

type (body_type), dimension (:), allocatable :: bodies
Then when the user tells you N, allocate the array of bodies:
allocate (bodies (N))
Then, when you know the dimensions of the arrays, allocate them in a loop over i:
allocate (bodies(i) % panel (ROW_i, COL_i))
If the bodies have additional properties (e.g., mass, color, ...) you can include them as additional items inside the type. Grouping related quantities in this manner is good programming practice.

Reconstruct surface from 3D triangular meshes

I have a 3D model which consists of 3D triangular meshes. I want to partition the meshes into different groups, where each group represents a surface, such as a planar face or a cylindrical surface. This is something like surface recognition/reconstruction.
The input is a set of 3D triangular meshes. The output is the mesh segmentation per surface.
Is there any library that meets my requirement?
If you want to go into lots of mesh processing, then the Point Cloud Library is a good idea, but I'd also suggest CGAL: http://www.cgal.org for more algorithms and loads of structures aimed at meshes.
Lastly, the problem you describe is most easily solved on your own:
enumerate all vertices
enumerate all polygons
create an array of ints with the size of the number of vertices in your "big" mesh, initialized to 0
create an array of ints with the size of the number of polygons in your "big" mesh, initialized to 0
initialize a counter to 0
for each polygon in your mesh, look at its vertices and the value each of them has in the vertex array
if the values for all of its vertices are zero, increase the counter and assign it to the corresponding entries in the vertex array and the polygon array
if not, relabel all vertices and polygons carrying a higher number to the smallest non-zero number
The relabeling can be done quickly with a look-up table; a sketch follows.
This might save you lots of issues interfacing your code to some library you're not really interested in.
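For illustration, here is a C++ sketch of that labeling, using a union-find structure in place of the explicit look-up-table relabeling; triangles are assumed to be given as index triples into a shared vertex array:
#include <array>
#include <numeric>
#include <vector>

struct UnionFind {
    std::vector<int> parent;
    explicit UnionFind(int n) : parent(n) {
        std::iota(parent.begin(), parent.end(), 0);  // each vertex is its own label
    }
    int find(int a) { return parent[a] == a ? a : parent[a] = find(parent[a]); }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};

// Returns one connected-component label per triangle; triangles sharing
// any vertex end up with the same label.
std::vector<int> labelComponents(int vertexCount,
                                 const std::vector<std::array<int, 3>>& triangles) {
    UnionFind uf(vertexCount);
    for (const auto& t : triangles) {       // merge labels across each triangle
        uf.unite(t[0], t[1]);
        uf.unite(t[1], t[2]);
    }
    std::vector<int> labels(triangles.size());
    for (size_t i = 0; i < triangles.size(); ++i)
        labels[i] = uf.find(triangles[i][0]);  // representative vertex label
    return labels;
}
Note that this groups triangles into connected components; splitting a connected component further into planar or cylindrical patches would additionally need a normal- or curvature-based criterion.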
You should have a look at the PCL library; it has all these features and much more: http://pointclouds.org/

Search optimization problem

Suppose you have a list of 2D points with an orientation assigned to each of them. Let the set S be defined as:
S = { (x, y, a) | (x, y) is a 2D point, a is an orientation (an angle) }.
Given an element s of S, we indicate with s_p the point part and with s_a the angle part. I would like to know if there exists an efficient data structure that, given a query element q, is able to return all the elements s in S such that
(dist(q_p, s_p) < threshold_1) AND (angle_diff(q_a, s_a) < threshold_2)    (1)
where dist(p1, p2), with p1, p2 2D points, is the euclidean distance, and angle_diff(a1, a2), with a1, a2 angles, is the difference between the angles (taken to be the smallest one). The data structure should be efficient w.r.t. insertion/deletion of elements and the search defined above. The number of elements can grow to 10,000 and more, but take this with a grain of salt.
Now suppose we change the above requirement: instead of using condition (1), given a distance function d, we want all elements of S such that d(q, s) < threshold. If I remember correctly, this last setup is called range search. I don't know if the first case can be transformed into the second.
For the distance search, I believe the accepted best method is a Binary Space Partition (BSP) tree. This can be stored as a series of bits: each two bits (for a 2D tree) or three bits (for a 3D tree) subdivides the space one more level, increasing resolution.
Using a BSP, locating a set of objects to compare distances with is pretty easy: just find the smallest set of squares or cubes which contain the edges of your distance box.
For the angle, I don't know of anything comparable. I suppose you could store each object in a second list or tree sorted by its angle. Then you would find every object at the proper distance using the BSP, every object at the proper angle using the angle tree, and then take the set intersection.
You have effectively described a "three-dimensional cylindrical space", i.e. a space that is locally three-dimensional but where one dimension is topologically cyclic. In other words, it is locally flat and may be modeled as the boundary of a four-dimensional object C4 in (x, y, z, w) defined by
z^2 + w^2 = 1
where
a = atan2(w, z).
With this model, the space defined by your constraints is a 2-dimensional cylinder wrapped "lengthwise" around a cross-section wedge, where the wedge wraps around the 4D cylindrical space with an angle of 2 * threshold_2. This can be modeled using a "modified k-d tree" approach (a modified 3-d tree), where the data structure is not a tree but actually a graph (it has cycles). You can still partition this space into cells with hyperplane separation, but traveling along the curve defined by (z, w) in the positive direction may reach a point also reachable in the negative direction. The tree should be modified to actually lead to these nodes from both directions, so that the edges are bidirectional (in the z-w curve direction; the others are obviously still unidirectional).
These cycles do not change the effectiveness of the data structure in locating nearby points or supporting your constraint search. In fact, for the most part, the algorithms are only slightly modified (the simplest approach being to hold a visited-node data structure to prevent cycles in the search: you test the next neighbors about to be searched).
This will work especially well for your criteria, since the region you define is effectively bounded by the axis-aligned, hyperplane-bounded cells of a k-d tree, and so the search termination will leave a region populated, on average, around pi / 4 percent of the area.
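To make the two formulations concrete, here is a short C++ sketch of condition (1) from the question and of the (z, w) embedding used in this answer; the names Element, angleDiff, matches, and embed are illustrative only:
#include <cmath>

const double PI = std::acos(-1.0);

struct Element { double x, y, a; };  // (x, y) point part, a angle part (radians)

// Smallest absolute difference between two angles, in [0, PI].
double angleDiff(double a1, double a2) {
    double d = std::fmod(std::fabs(a1 - a2), 2.0 * PI);
    return d > PI ? 2.0 * PI - d : d;
}

// Condition (1): close in position AND close in orientation.
bool matches(const Element& q, const Element& s,
             double threshold_1, double threshold_2) {
    double dx = q.x - s.x, dy = q.y - s.y;
    return std::sqrt(dx * dx + dy * dy) < threshold_1
        && angleDiff(q.a, s.a) < threshold_2;
}

// The 4D embedding (x, y, z, w) with z = cos a, w = sin a: the cyclic
// angle becomes a point on the unit circle z^2 + w^2 = 1, and the angle
// is recovered with atan2(w, z), so angular proximity becomes ordinary
// euclidean proximity in the embedded space.
struct Embedded { double x, y, z, w; };
Embedded embed(const Element& e) {
    return { e.x, e.y, std::cos(e.a), std::sin(e.a) };
}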