isosurface tracking in high dimensions

how to trace an isosurface in a higher-dimensional space efficiently

You have a scalar cost function in N dimensions,
f(y1, y2, .., yN) ∊ ℝ, with each yk ∊ ℝ
but sampled only in a regular rectangular grid,
yk = Ψk + ψk xk, constants Ψk ∊ ℝ and ψk ∊ ℝ, and grid coordinates xk ∊ ℕ
and the problem is to locate the isosurface(s) i,
f(y1, y2, .., yN) = Ci
The direct approach would be to just loop over each cell in the grid, and check if the current isosurface intersects the current cell, and if so, describe the part of the isosurface within the current cell. (Marching Cubes is one approach to describing how the isosurface intersects each grid cell.)
The restriction here is to use a neighborhood based search instead of examining every single cell.
OP had a previous question specifically for the 3D case, to which I posted a link to example code, grid.h and grid.c (at Pastebin.com, because they were too long to include inline).
That implementation is completely different from OP's slicing method. Mine is a direct, simple walk over the grid cells intersecting the current isosurface. It caches the grid samples, and uses a separate map (one char per grid cell) to keep track of which grid cells have been cached, walked, and/or pushed to a stack to be walked later. This approach is easily extended to more than three dimensions. Although the code is written for exactly three dimensions, the approach itself is not specific to three dimensions at all; all you need to do is to adjust the data structures to accommodate any (sensible) number of dimensions.
The isosurface walk itself is trivial. You start from any grid cell the isosurface intersects, then examine all 2N nearest neighbor cells to see if the isosurface intersects those too. In practice, you use a stack of grid cell locations to be examined, and a map of grid cell flags to avoid re-examining already examined grid cells.
Because the number of grid point samples per grid cell is 2^N, my example code is not optimal: a lot of nearby grid points end up being evaluated just to see whether the neighboring grid cells intersect the isosurface. (Instead of examining only the grid points delimiting the isosurface, grid points belonging to any grid cell surrounding the isosurface are examined.) This extra work grows exponentially as N increases.
A better approach would be to consider each of the 2N possible (N-1)-faces separately, to avoid examining cells the isosurface does not intersect at all.
In an N-dimensional regular rectangular grid, each cell is an N-dimensional cuboid, defined by the 2^N grid points at its vertices (corners). The N-cuboid cells have N(N-1)·2^(N-3) two-dimensional faces, and 2N (N-1)-dimensional faces.
To examine each (N-1)-face, you need to examine the cost function at the 2^(N-1) grid points defining that (N-1)-face. If the cost function at those points spans the isosurface value, then the isosurface intersects the (N-1)-face, and the isosurface also intersects the next grid cell in that direction.
There are two (N-1)-faces perpendicular to each axis. If the isosurface intersects the (N-1)-face closer to negative infinity, then the isosurface intersects the next grid cell along that axis towards negative infinity too. Similarly, if the isosurface intersects the (N-1)-face closer to positive infinity, then it also intersects the next grid cell along that axis towards positive infinity too. Thus, the (N-1)-faces are perfect for deciding which neighboring cells should be examined or not. This is true because the (N-1)-face is exactly the set of grid points the two cells share.
I'm very hesitant to provide example C code, because the example code of the same approach for the 3D case does not seem to have helped anyone thus far. I fear a longer explanation with 2- and 3-dimensional example images for illustration would be needed to describe the approach in easily understandable terms; and without a firm grasp of the logic, any example code would just look like gobbledygook.
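That said, a minimal Python sketch of the facet-based walk may still be useful (my own illustration, not the grid.h/grid.c code mentioned above; f is a hypothetical callable returning the cost at integer grid coordinates, shape gives the number of grid points per axis, C is the isovalue, and no sample caching is done, which a real implementation should add):
from itertools import product

def walk_isosurface(f, shape, C, start_cell):
    # Cells are identified by their lowest-coordinate corner; a cell along
    # axis k spans grid points x[k] .. x[k]+1, so valid cell coordinates are
    # 0 .. shape[k]-2.
    N = len(shape)
    corners = list(product((0, 1), repeat=N))

    def corner_values(cell):
        # cost function at the 2^N corners of the cell
        return [f(tuple(c + d for c, d in zip(cell, delta))) for delta in corners]

    visited = {tuple(start_cell)}
    stack = [tuple(start_cell)]
    intersected = []
    while stack:
        cell = stack.pop()
        values = corner_values(cell)
        if min(values) > C or max(values) < C:
            continue                    # isosurface does not cross this cell
        intersected.append(cell)
        for axis in range(N):
            for side in (0, 1):         # the two (N-1)-faces perpendicular to this axis
                face = [v for v, d in zip(values, corners) if d[axis] == side]
                # if the face spans C, the neighbor behind that face is intersected too
                if min(face) <= C <= max(face):
                    nxt = list(cell)
                    nxt[axis] += 1 if side else -1
                    if 0 <= nxt[axis] <= shape[axis] - 2 and tuple(nxt) not in visited:
                        visited.add(tuple(nxt))
                        stack.append(tuple(nxt))
    return intersected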

For two dimensions you are better off using a library; you can try the CONREC algorithm from Prof. Paul Bourke. It's similar in spirit to marching cubes, in two dimensions.
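If Python is an option, here is a small aside showing the 2-D library route with scikit-image's marching-squares routine (an assumption on my part, not Prof. Bourke's CONREC code):
import numpy as np
from skimage import measure

# sample a cost function on a regular 2-D grid
y, x = np.mgrid[-2:2:200j, -2:2:200j]
f = x**2 + y**2

# each returned contour is an (M, 2) array of (row, column) points along f = 1.0
contours = measure.find_contours(f, 1.0)
print(len(contours), contours[0].shape)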

Fast check if polygon contains point between dataframes

I have two dataframes. One contains a column of Polygons, taken from an image of polygon shapes. Each polygon has a set of coordinates. This dataframe also has a "segment-id" column. I have another dataframe, containing a column of Points, also with coordinates. These Points represent pixels from the same image of Polygon shapes, and therefore have the same coordinate system. I want to give every Point the "segment-id" of the Polygon which contains it. Every Polygon contains at least one Point.
Currently, I achieve this by using a nested for loop:
for i, row in enumerate(pixel_df.itertuples(), 0):
    point = pixel_df.at[i, 'geometry']
    for j in range(len(polygon_df)):
        polygon = polygon_df.iat[j, 0]
        if polygon.contains(point):
            pixel_df.at[i, 'segment_id'] = polygon_df.at[j, 'segment_id']
This is extremely slow. For 100 Points, it takes around 10 seconds. I need a faster way of doing this. I have tried using apply but it is still super slow.
Hope someone can help me out, thanks very much.
For fast "is point inside polygon":
Preparation: in the code that obtains the data describing the polygons, use all the vertices to find the minimum and maximum y-coord and the minimum and maximum x-coord, and store them with the polygon's data.
1) Using the point's coords and the polygon's "minimum and maximum x and y" (pre-determined during preparation), do a "bounding box" test. This is just a fast way to find out if the point is definitely not inside the polygon (so you can skip the more expensive steps most of the time).
2) Set a "yes/no" flag to "no"
3) For each edge in the polygon, determine if a horizontal line passing through the point would intersect the edge, and if it does, determine the x-coord of the intersection. If the x-coord of the intersection is less than the point's x-coord, toggle (with NOT) the "yes/no" flag. Ignore "horizontal line passes through a vertex" during this step.
4) For each vertex, compare its y-coord with the point's y-coord. If they're the same, you need to look at both edges meeting at that vertex to determine whether they leave the vertex in the same y direction. If they do (the edges form a 'V' shape or upside-down 'V' shape), ignore the vertex. Otherwise (the edges form a '<' or '>' shape), if the vertex's x-coord is less than the point's x-coord, toggle the "yes/no" flag.
After all this is done; that "yes/no" flag will tell you if the point was in the polygon.
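A minimal Python sketch of these steps (my own illustration; polygon is assumed to be an ordered list of (x, y) vertex tuples, and degenerate cases such as horizontal edges lying exactly at the point's y-coordinate are not handled):
def point_in_polygon(px, py, polygon):
    xs = [x for x, _ in polygon]
    ys = [y for _, y in polygon]
    # step 1: bounding-box rejection test
    if px < min(xs) or px > max(xs) or py < min(ys) or py > max(ys):
        return False
    inside = False                      # step 2: the "yes/no" flag
    n = len(polygon)
    # step 3: edges strictly crossed by the horizontal line through the point
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if y1 != py and y2 != py and (y1 < py) != (y2 < py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross < px:
                inside = not inside
    # step 4: vertices lying exactly at the point's y-coordinate
    for i in range(n):
        x1, y1 = polygon[i]
        if y1 == py and x1 < px:
            y_prev = polygon[(i - 1) % n][1]
            y_next = polygon[(i + 1) % n][1]
            # toggle only if the two edges leave the vertex on opposite sides ('<' or '>')
            if (y_prev < py) != (y_next < py):
                inside = not inside
    return inside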

How to draw an outline of a group of multiple rectangles?

I need to draw an enclosing polygon of a group of rectangles that are placed next to each other.
Let's think of text fields that share at least one edge (or part of it) with at least one of the other rectangles.
I can get the rectangles' corner coordinates, and so I basically have any data I need about them.
Can you think of a simple algorithm / procedure to draw a polygon (connected straight paths) around these objects?
Here's a demonstration of different potential cases (A, B, C, etc...). In example A I also drew a blue polygon which is the path that I need to draw, outlining the group of rectangles.
I've read here about convex hull and stuff like that but really, this looks like a far simpler problem.
One (beginning of) solution I thought of was that the points I actually need to draw through are only ones that are not shared by any pair of rectangles, meaning points that are vertices of more than one rectangle are redundant. What I couldn't find out was the order by which I need to draw lines from one to the next.
I currently work in Objective-C, but any other language or algorithm would be appreciated, including pseudocode.
Thanks!
IMHO it should be like this: make a list of edges and see if some are overlapping. This should be simple if the rectangles are aligned with the x and y axes: you just find the edges whose endpoints share the same x (or the same y) coordinate and whose ranges in the other coordinate overlap. After this the remaining edges should form the outline.
Another method to find common edges is to break all rectangle edges at each x and y coordinate where you have vertices. This should look as if you were growing all lines to infinity. After this all common edges will have common vertices and can be eliminated.
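A rough Python sketch of this edge-splitting idea (my own illustration; rectangles are assumed axis-aligned, given as (x0, y0, x1, y1) tuples with x0 < x1 and y0 < y1, touching but not overlapping):
from collections import Counter

def outline_segments(rects):
    xs = sorted({x for x0, y0, x1, y1 in rects for x in (x0, x1)})
    ys = sorted({y for x0, y0, x1, y1 in rects for y in (y0, y1)})
    segments = Counter()
    for x0, y0, x1, y1 in rects:
        # split horizontal edges at every global x coordinate they span
        for xa, xb in zip(xs, xs[1:]):
            if x0 <= xa and xb <= x1:
                segments[((xa, y0), (xb, y0))] += 1
                segments[((xa, y1), (xb, y1))] += 1
        # split vertical edges at every global y coordinate they span
        for ya, yb in zip(ys, ys[1:]):
            if y0 <= ya and yb <= y1:
                segments[((x0, ya), (x0, yb))] += 1
                segments[((x1, ya), (x1, yb))] += 1
    # a segment shared by two rectangles is interior; the rest form the outline
    return [seg for seg, count in segments.items() if count % 2 == 1]

# example: two side-by-side unit squares; their shared edge disappears
print(outline_segments([(0, 0, 1, 1), (1, 0, 2, 1)]))
Chaining the remaining segments end-to-end then gives the polygon path to draw.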
You have two rows, and three different y-values. Let's say y0 is the top of the thing, y2 is the bottom end, and y1 marks the middle between both rows.
Each row has a maximum and a minimum x-value, let's say the top-row goes from x0_min to x0_max, and the bottom row from x2_min to x2_max. Given those values you just draw around the thing:
(x0_min,y0)->
(x0_max,y0)->
(x0_max,y1)->
(x2_max,y1)->
(x2_max,y2)->
(x2_min,y2)->
(x2_min,y1)->
(x0_min,y1)->
(x0_min,y0)

Looking for an efficient structure for checking which circles enclose a point

I have a large set of overlapping circles each at a random location with a specific radius.
type Circle =
    struct
        val x: float
        val y: float
        val radius: float
    end
Given a new point with type
type Point =
    struct
        val x: float
        val y: float
    end
I would like to know which circles in my set enclose the new point. A linear search is trivial. I'm looking for a structure that can hold the circles and return the enclosing circles with better than O(N) for the presented point.
Ideally the structure should be fast for insertion of new circles and removal of circles as well.
I would like to implement this in F# but ideas in any language are fine.
For your information I'm looking to implement
http://takisword.wordpress.com/2009/08/13/bowyerwatson-algorithm/
but it would be an O(N^2) if I use the naive approach of scanning all circles for every new point.
If we assume that the circles are distributed over some rectangle with area 1 and the average area of a circle is a, then a quadtree with m levels leaves you with a cell of area 1/4^m. This leaves
O(Na/4^m)
as the expected number of circles left in the remaining area.
However, we have done O(m) comparisons to get to this point. This leaves the total number of comparisons as
O(m) + O(Na/4^m)
The second term will be constant if m is proportional to log(N).
This suggests that a quadtree can cut things down to O(log N)
A quadtree is a structure for efficient search in the plane. You can use it to hold a subdivision of the plane.
For example, you can create a quadtree with these properties:
1. Every cell of the quadtree contains the indices of the circles overlapping it.
2. Every cell contains no more than K circles (for example 10) // this bound may be broken at the maximum height
3. The height of the tree is bounded by M (usually O(log n))
You construct the quadtree by inserting each circle into the cells it overlaps; if the number of circles inside a cell exceeds K, subdivide that cell into four (unless the maximum height is reached). Cells lying entirely inside a circle need special treatment, because subdividing them is pointless.
To find the circles containing a point, locate the point's cell in the quadtree, then iterate through the circles overlapping that cell and keep those which contain the point.
If the circle distribution is sparse, the search will be very efficient.
In my bachelor thesis I adapted a quadtree for closest-segment location with expected time O(log n); I think a similar approach could be used here.
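A minimal Python sketch of such a quadtree (my own illustration, not the thesis code; circles are (cx, cy, r) tuples, the root cell is a square, and removal as well as the cell-entirely-inside-a-circle case are omitted):
MAX_PER_CELL = 10     # K
MAX_DEPTH = 8         # bounds the height M

class QuadNode:
    def __init__(self, x, y, size, depth=0):
        self.x, self.y, self.size, self.depth = x, y, size, depth
        self.circles = []      # (index, cx, cy, r) of circles overlapping this cell
        self.children = None   # four children once subdivided

    def _overlaps(self, cx, cy, r):
        # circle vs. axis-aligned square overlap test
        nx = min(max(cx, self.x), self.x + self.size)
        ny = min(max(cy, self.y), self.y + self.size)
        return (nx - cx) ** 2 + (ny - cy) ** 2 <= r * r

    def insert(self, idx, cx, cy, r):
        if not self._overlaps(cx, cy, r):
            return
        if self.children is not None:
            for child in self.children:
                child.insert(idx, cx, cy, r)
            return
        self.circles.append((idx, cx, cy, r))
        if len(self.circles) > MAX_PER_CELL and self.depth < MAX_DEPTH:
            h = self.size / 2
            self.children = [QuadNode(self.x,     self.y,     h, self.depth + 1),
                             QuadNode(self.x + h, self.y,     h, self.depth + 1),
                             QuadNode(self.x,     self.y + h, h, self.depth + 1),
                             QuadNode(self.x + h, self.y + h, h, self.depth + 1)]
            for item in self.circles:
                for child in self.children:
                    child.insert(*item)
            self.circles = []

    def query(self, px, py):
        # descend to the leaf containing the point, then test its candidate circles
        if self.children is not None:
            h = self.size / 2
            i = (1 if px >= self.x + h else 0) + (2 if py >= self.y + h else 0)
            return self.children[i].query(px, py)
        return [idx for idx, cx, cy, r in self.circles
                if (px - cx) ** 2 + (py - cy) ** 2 <= r * r]

# usage: root cell covering [0, 100] x [0, 100]
root = QuadNode(0.0, 0.0, 100.0)
for k, (cx, cy, r) in enumerate([(10.0, 10.0, 5.0), (12.0, 9.0, 4.0), (80.0, 80.0, 3.0)]):
    root.insert(k, cx, cy, r)
print(root.query(11.0, 10.0))   # -> [0, 1]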
Actually you search for triangles whose circumcircles include the new point p. Thus your Delaunay triangulation is already the data structure you need: First search for the triangle t which includes p (google for 'delaunay walk'). The circumcircle of t certainly includes p. Then start from t and grow the (connected) area of triangles whose circumcircles include p.
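For prototyping, that triangle-growing step can be sketched with SciPy's Delaunay triangulation (my own illustration; point location uses find_simplex instead of a hand-rolled Delaunay walk):
import numpy as np
from scipy.spatial import Delaunay

def circumcircle_contains(pts, p):
    # True if p lies strictly inside the circumcircle of the triangle pts
    (ax, ay), (bx, by), (cx, cy) = pts
    px, py = p
    m = np.array([[ax - px, ay - py, (ax - px) ** 2 + (ay - py) ** 2],
                  [bx - px, by - py, (bx - px) ** 2 + (by - py) ** 2],
                  [cx - px, cy - py, (cx - px) ** 2 + (cy - py) ** 2]])
    orient = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)   # sign fixes orientation
    return np.linalg.det(m) * np.sign(orient) > 0

def triangles_with_p_in_circumcircle(tri, p):
    start = int(tri.find_simplex(np.asarray(p)))
    if start == -1:                      # p lies outside the triangulation
        return []
    found, stack, seen = [], [start], {start}
    while stack:
        t = stack.pop()
        if not circumcircle_contains(tri.points[tri.simplices[t]], p):
            continue
        found.append(t)
        for nb in tri.neighbors[t]:      # grow into neighboring triangles
            if nb != -1 and int(nb) not in seen:
                seen.add(int(nb))
                stack.append(int(nb))
    return found

# usage
pts = np.random.rand(30, 2)
tri = Delaunay(pts)
print(triangles_with_p_in_circumcircle(tri, (0.5, 0.5)))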
Implementing it in a fast and reliable way is a lot of work. Unless you want to create a new library you may want to use an existing one. My approach for C++ is Fade2D [1], but there are also many others; it depends on your specific needs.
[1] http://www.geom.at/fade2d/html/

Calculating total coverage area of a union of polygons

I have a number of 2D (possibly intersecting) polygons which I rendered using OpenGL ES on the screen. All the polygons are completely contained within the screen. What is the most timely way to find the percentage area of the union of these polygons to the total screen area? Timeliness is required as I have a requirement for the coverage area to be immediately updated whenever a polygon is shifted.
Currently, I am representing each polygon as a 2D array of booleans. Using a point-in-polygon function (from a geometry package), I sample each point (x,y) on the screen to check if it belongs to the polygon, and set polygon[x][y] = true if so, false otherwise.
After doing that to all the polygons in the screen, I loop through all the screen pixels again, and check through each polygon array, counting that pixel as "covered" if any polygon has its polygon[x][y] value set to true.
This works, but the performance is not ideal as the number of polygons increases. Are there any better ways to do this, using open-source libraries if possible? I thought of:
(1) Unioning the polygons to get one or more non-overlapping polygons. Then compute the area of each polygon using the standard area-of-polygon formula. Then sum them up. Not sure how to get this to work?
(2) Using OpenGL somehow. Imagine that I am rendering all these polygons with a single color. Is it possible to count the number of pixels on the screen buffer with that certain color? This would really sound like a nice solution.
Any efficient means for doing this?
If you know the background color and all polygons have other colors, you can read all pixels from the framebuffer with glReadPixels() and simply count the pixels whose color differs from the background.
If the first condition is not met, you may consider creating a custom framebuffer and rendering all polygons with the same color (for example (0.0, 0.0, 0.0) for the background and (1.0, 0.0, 0.0) for polygons). Next, read back the resulting framebuffer and calculate the mean of the red channel across the whole screen.
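A small Python sketch of the counting step (my own illustration; it assumes the framebuffer has already been read back, e.g. with glReadPixels(..., GL_RGB, GL_UNSIGNED_BYTE), into a height x width x 3 array):
import numpy as np

def coverage_fraction(pixels, background=(0, 0, 0)):
    # fraction of screen pixels whose color differs from the background color
    covered = np.any(pixels != np.asarray(background, dtype=pixels.dtype), axis=-1)
    return covered.mean()

# toy example: a 4x4 "screen" with 5 red pixels on a black background
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[0, :3] = (255, 0, 0)
frame[1, :2] = (255, 0, 0)
print(coverage_fraction(frame))   # 5/16 = 0.3125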
If you want to get non-overlapping polygons, you can run a line intersection algorithm. A simple variant is the Bentley–Ottmann algorithm, but even faster algorithms of O(n log n + k) (with n vertices and k crossings) are possible.
Given a line intersection, you can unify two polygons by constructing a vertex connecting both polygons on the intersection point. Then you follow the vertices of one of the polygons inside of the other polygon (you can determine the direction you have to go in using your point-in-polygon function), and remove all vertices and edges until you reach the outside of the polygon. There you repair the polygon by creating a new vertex on the second intersection of the two polygons.
Unless I'm mistaken, this can run in O(n log n + k * p) time where p is the maximum overlap of the polygons.
After unification of the polygons you can use an ordinary area function to calculate the exact area of the polygons.
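If a library is acceptable, the union-then-exact-area route (approach (1) in the question) can also be prototyped with Shapely; this is an aside, not part of the answer above:
from shapely.geometry import Polygon
from shapely.ops import unary_union

def coverage_percentage(polygons, screen_width, screen_height):
    # polygons: lists of (x, y) vertex tuples; overlaps are merged by the union
    union = unary_union([Polygon(p) for p in polygons])
    return 100.0 * union.area / (screen_width * screen_height)

# two overlapping unit squares on a 10 x 10 screen: union area is 1.5
squares = [[(0, 0), (1, 0), (1, 1), (0, 1)],
           [(0.5, 0), (1.5, 0), (1.5, 1), (0.5, 1)]]
print(coverage_percentage(squares, 10, 10))   # 1.5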
I think that trying to calculate the area of polygons by counting pixels is too complicated and sometimes inaccurate. You can see something similar in the Stack Overflow answer about calculating the area covered by a polygon, and if you construct regular polygons, see the area of a regular polygon.

Search optimization problem

Suppose you have a list of 2D points with an orientation assigned to them. Let the set S be defined as:
S={ (x,y,a) | (x,y) is a 2D point, a is an orientation (an angle) }.
Given an element s of S, we will indicate with s_p the point part and with s_a the angle part. I would like to know if there exists an efficient data structure that, given a query element q, is able to return all the elements s in S such that
(dist(q_p, s_p) < threshold_1) AND (angle_diff(q_a, s_a) < threshold_2) (1)
where dist(p1,p2), with p1,p2 2D points, is the Euclidean distance, and angle_diff(a1,a2), with a1,a2 angles, is the difference between the angles (taken to be the smallest one). The data structure should be efficient w.r.t. insertion/deletion of elements and the search as defined above. The number of vectors can grow up to 10,000 and more, but take this with a grain of salt.
Now suppose we change the above requirement: instead of using condition (1), given a distance function d, we want all elements of S such that d(q,s) < threshold. If I remember correctly, this last setup is called range search. I don't know if the first case can be transformed into the second.
For the distance search I believe the accepted best method is a Binary Space Partition tree. This can be stored as a series of bits. Each two bits (for a 2D tree) or three bits (for a 3D tree) subdivides the space one more level, increasing resolution.
Using a BSP, locating a set of objects to compare distances with is pretty easy. Just find the smallest set of squares or cubes which contain the edges of your distance box.
For the angle, I don't know of anything. I suppose that you could store each object in a second list or tree sorted by its angle. Then you would find every object at the proper distance using the BSP, every object at the proper angles using the angle tree, then do a set intersection.
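A quick Python sketch of that "spatial structure plus angle filter" idea (my own illustration, using SciPy's cKDTree rather than a BSP; insertion/deletion performance is not addressed):
import numpy as np
from scipy.spatial import cKDTree

def angle_diff(a, b):
    # smallest absolute difference between two angles, in radians
    return abs((a - b + np.pi) % (2.0 * np.pi) - np.pi)

class OrientedPointIndex:
    def __init__(self, points, angles):
        self.points = np.asarray(points, dtype=float)
        self.angles = np.asarray(angles, dtype=float)
        self.tree = cKDTree(self.points)

    def query(self, q_point, q_angle, threshold_1, threshold_2):
        # 1) the spatial index narrows candidates to the distance ball
        candidates = self.tree.query_ball_point(q_point, threshold_1)
        # 2) exact angle filter on the survivors
        return [i for i in candidates
                if angle_diff(self.angles[i], q_angle) < threshold_2]

# toy usage
idx = OrientedPointIndex([(0, 0), (1, 0), (5, 5)], [0.1, 3.0, 0.1])
print(idx.query((0, 0), 0.0, threshold_1=2.0, threshold_2=0.5))   # -> [0]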
You have effectively described a "three dimensional cylindrical space", i.e. a space that is locally three dimensional but where one dimension is topologically cyclic. In other words, it is locally flat and may be modeled as the boundary of a four-dimensional object C4 in (x, y, z, w) defined by
z^2 + w^2 = 1
where
a = arctan(w/z)
With this model, the space defined by your constraints is a 2-dimensional cylinder wrapped "lengthwise" around a cross-section wedge, where the wedge wraps around the 4-d cylindrical space with an angle of 2 * threshold_2. This can be modeled using a "modified k-d tree" approach (a modified 3-d tree), where the data structure is not a tree but actually a graph (it has cycles). You can still partition this space into cells with hyperplane separation, but traveling along the curve defined by (z, w) in the positive direction may bring you back to a cell that can also be reached in the negative direction. The tree should be modified to actually lead to these nodes from both directions, so that the edges are bidirectional (in the z-w curve direction - the others are obviously still unidirectional).
These cycles do not change the effectiveness of the data structure in locating nearby points or allowing your constraint search. In fact, for the most part, those algorithms are only slightly modified (the simplest approach being to hold a visited node data structure to prevent cycles in the search - you test the next neighbors about to be searched).
This will work especially well for your criteria, since the region you define is effectively bounded by the axis-defined, hyperplane-bounded cells of a k-d tree, and so when the search terminates, the region examined will on average be populated over around pi / 4 of its area.
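A simplified Python sketch in the spirit of this embedding (my own approximation, not the modified k-d tree itself: the angle is embedded as (cos a, sin a), one conservative Euclidean ball query is done in 4-D, and the candidates are then filtered exactly):
import numpy as np
from scipy.spatial import cKDTree

def build_index(points, angles):
    pts = np.asarray(points, dtype=float)
    ang = np.asarray(angles, dtype=float)
    embedded = np.column_stack([pts, np.cos(ang), np.sin(ang)])
    return cKDTree(embedded), pts, ang

def query(index, q_point, q_angle, threshold_1, threshold_2):
    tree, pts, ang = index
    q = np.array([q_point[0], q_point[1], np.cos(q_angle), np.sin(q_angle)])
    # chord length on the unit circle corresponding to an angular difference threshold_2
    chord = 2.0 * np.sin(threshold_2 / 2.0)
    # any element satisfying both constraints lies inside this 4-D ball (a superset)
    radius = np.hypot(threshold_1, chord)
    candidates = tree.query_ball_point(q, radius)
    def angdiff(a, b):
        return abs((a - b + np.pi) % (2.0 * np.pi) - np.pi)
    return [i for i in candidates
            if np.hypot(*(pts[i] - q_point)) < threshold_1
            and angdiff(ang[i], q_angle) < threshold_2]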