I'm in a little over my head on this one. I have approximately 300 shapefiles containing about 7000 polygons, and I'm trying to clip a systematic grid of points to each of them. Each shapefile has a unique number of polygons (buffers around a point location), and I need the grid points assigned to each polygon so that they can be recognized as discrete sets later on.
For example, polygon 1 in shapefile 1 will have a set of grid points associated with it. Polygon 2 in shapefile 1 will have another set of grid points, including many that may be the same as those in polygon 1. I would need an attribute field that identifies those points as belonging to that polygon. If it helps, this is for a discrete choice model being applied to resource selection. Any help is greatly appreciated!
Image: Polygons with grid points.
Image: Single shapefile containing polygons
Using Intersect should connect the two layers in a new feature class with one attribute table.
You can also try Spatial Join, which will add the table of one layer to the table of the other according to location.
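For scripting this across all ~300 shapefiles, here is a minimal arcpy sketch (the paths and file names are placeholders, not from the original post). Intersect writes one output point per point/polygon combination, so a grid point falling inside two overlapping buffers appears twice, once with each polygon's FID and attributes:
import arcpy
import glob
import os

# Placeholder paths -- adjust to your data layout.
grid_points = r"C:\data\grid_points.shp"   # the systematic grid of points
buffer_dir = r"C:\data\buffers"            # folder with the ~300 buffer shapefiles
out_dir = r"C:\data\joined"

for shp in glob.glob(os.path.join(buffer_dir, "*.shp")):
    out_fc = os.path.join(out_dir, "pts_" + os.path.basename(shp))
    # One output point per point/polygon pair, carrying that polygon's attributes.
    arcpy.Intersect_analysis([grid_points, shp], out_fc)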
As in the title:
pl = plt.contour(X,Y,Z,levels=[0])
paths = pl.allsegs
I wonder how the data points in paths are ordered. Specifically, are they oriented clockwise or counterclockwise with respect to a guiding center?
The reason I am asking is that matplotlib.pyplot is unaware of the torus topology, where opposite edges are identified. Connected paths on a torus can therefore look disconnected on an open-ended 2D domain. I would like to use the path data to glue seemingly disconnected segments back together on the torus manifold.
I resolved this problem with the following steps:
Make use of the torus structure, i.e., Z[0,:] = Z[N,:] and Z[:,0] = Z[:,M], where N and M are the linear dimensions of the matrix.
Find allsegs from the contour plot for a given level z0:
pl = plt.contour(X,Y,Z,levels=[z0])
segs = pl.allsegs[0]
segs[i] contains the coordinates of a given contour, whose two end points either (1) are the same, in which case segs[i] is a closed contour within the domain set by X and Y, or (2) are different, in which case the contour must terminate at one of the four edges of the domain. In the second case there must exist at least one other open contour whose end points pair up with those of the current one. This pair identification is achieved by calculating their distance on the torus, defined as the smallest of |r1-r2|, |r1-r2 +/- period_along_x|, and |r1-r2 +/- period_along_y|.
Ultimately, the numerical algorithm boils down to identifying closed contours and identifying pairs of matching end points that satisfy the torus topology.
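A rough, self-contained sketch of that pairing step (the synthetic field, the periods Lx and Ly, and the greedy matching are my own illustration, not the original code):
import numpy as np
import matplotlib.pyplot as plt

# Synthetic doubly periodic field; Lx, Ly are the periods along x and y.
Lx, Ly = 2 * np.pi, 2 * np.pi
x = np.linspace(0.0, Lx, 129)   # last sample duplicates the first (torus structure)
y = np.linspace(0.0, Ly, 129)
X, Y = np.meshgrid(x, y)
Z = np.cos(X) + np.cos(Y)

z0 = 0.3
pl = plt.contour(X, Y, Z, levels=[z0])
segs = pl.allsegs[0]

def torus_distance(r1, r2):
    # Distance with x identified modulo Lx and y identified modulo Ly.
    dx, dy = np.abs(np.asarray(r1) - np.asarray(r2))
    return np.hypot(min(dx, Lx - dx), min(dy, Ly - dy))

# Closed contours return to their starting point; open contours hit a domain edge.
closed = [s for s in segs if np.allclose(s[0], s[-1])]
open_segs = [s for s in segs if not np.allclose(s[0], s[-1])]

# Greedily pair up the end points of the open contours by their toroidal distance.
endpoints = [(i, e) for i, s in enumerate(open_segs) for e in (s[0], s[-1])]
used, pairs = set(), []
for a in range(len(endpoints)):
    if a in used:
        continue
    candidates = [j for j in range(a + 1, len(endpoints)) if j not in used]
    if not candidates:
        break
    b = min(candidates, key=lambda j: torus_distance(endpoints[a][1], endpoints[j][1]))
    used.update({a, b})
    pairs.append((endpoints[a][0], endpoints[b][0]))

print(len(closed), "closed contours;", len(open_segs), "open contours glued as", pairs)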
Three example solutions are shown in the attached figure.
Image: Four open contours that form a single contour on the torus; end points are identified as pairs by color.
Image: Two closed contours, where the two end points of each contour are identical.
I have 20 000 polygons in a dataset. I need the Euclidean distance between all polygons, i.e. a 20 000 x 20 000 distance matrix in which, for each polygon, the distance to every other polygon is stored.
I have read in some other threads the recommendation to use the "Near" tool in Arcmap. However, this tool only calculates the distance to the NEAREST polygon, while I need the distance from ALL polygons to ALL polygons.
Is there any solution for this?
Near tool: Calculates distance and additional proximity information between the input features and the closest feature in another layer or feature class.
In order to calculate the distance between the centroids of each of your polygons, make sure your map is in a projected coordinate system.
Then, make sure the centroid points are calculated (detailed step-by-step here: https://support.esri.com/en/technical-article/000009381 )
Export your centroid point attribute table as a DBF (Click on Options > Export.)
Add the table to your map. Right-click on the new table, choose Display XY Data, select Longitude for X and Latitude for Y, and select the map's coordinate system to create an events layer.
Then, use the Point Distance tool (Details here: https://desktop.arcgis.com/en/arcmap/10.3/tools/analysis-toolbox/point-distance.htm ). The event points are both the input and near features. The output will be a table displaying distance between all polygon centroids on the map.
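When scripting, the same workflow compresses to two arcpy calls (paths below are placeholders; FeatureToPoint writes the centroid points directly, so the table export / Display XY Data steps are not needed). Be aware that for 20 000 points the output is on the order of 400 million rows:
import arcpy

# Placeholder paths; the polygon layer is assumed to be in a projected coordinate system.
polygons = r"C:\data\polygons.shp"
centroids = r"C:\data\centroids.shp"
distances = r"C:\data\distances.dbf"

# One centroid point per polygon ("INSIDE" forces the point to fall within the polygon).
arcpy.FeatureToPoint_management(polygons, centroids, "INSIDE")

# With the centroids as both input and near features and no search radius,
# Point Distance writes one row per point pair (INPUT_FID, NEAR_FID, DISTANCE),
# i.e. the full distance matrix in long form.
arcpy.PointDistance_analysis(centroids, centroids, distances)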
Basically what I'm looking for is an algorithm or an extension similar to least-cost analysis, but instead of using points on top of a DEM to create a path (line vector) between the points, I wish to create Thiessen (Voronoi) polygons (centered on the points) whose spatial limits would be defined by the DEM.
So for example, a border between 2 polygons would be determined by the least cost analysis between the center points of the 2 polygons. The aim would then be, instead of getting a set of Thiessen polygons with arrow-straight borders (like in the pic), to create a set of polygons whose limits would be determined by the DEM (relief). Sort of like a watershed centered on a single point.
Btw, it would be great if there was a solution applicable in QGIS.
Thanks!
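Just to make the idea concrete, here is a rough NumPy/scikit-image sketch of the cost-allocation logic being described (not a QGIS recipe; the DEM, the seed cells and the cost definition are all invented for illustration):
import numpy as np
from skimage.graph import MCP_Geometric

# Invented DEM and seed cells (row, col) standing in for the polygon center points.
rng = np.random.default_rng(0)
dem = rng.random((200, 200)) * 100.0
seeds = [(20, 30), (150, 60), (80, 170)]

# Use slope magnitude as the traversal cost: flat terrain is cheap to cross.
gy, gx = np.gradient(dem)
cost = 1.0 + np.hypot(gx, gy)

# Accumulated least-cost distance from each seed, one surface per seed.
acc = np.stack([MCP_Geometric(cost).find_costs([s])[0] for s in seeds])

# Each cell is assigned to the seed it can reach most cheaply; the borders of these
# zones are the relief-driven counterparts of the straight Thiessen boundaries.
allocation = np.argmin(acc, axis=0)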
I have an array of arrays containing points that define polygons. Together these polygons form another, final shape. What I want is to get only the outline points of that final shape, in the correct order, so I can draw them on screen.
I have tried removing duplicate points (where two shapes meet, they have exactly the same points), sorting the remainder around their centroid and then connecting them, but that gives an approximate outline with many deviations (since, of course, connecting the points in clockwise order is not necessarily correct).
So basically, what I want to do is turn this into this.
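One way to make this concrete, assuming a geometry library such as shapely is acceptable (the coordinates below are made up):
from shapely.geometry import Polygon
from shapely.ops import unary_union

# Made-up input: each inner list is one polygon's ring of (x, y) points.
shapes = [
    [(0, 0), (2, 0), (2, 2), (0, 2)],
    [(2, 0), (4, 0), (4, 2), (2, 2)],
]

merged = unary_union([Polygon(pts) for pts in shapes])

# If the pieces form a single connected shape, the ordered outline points are the
# exterior ring (a MultiPolygon result means the pieces do not all touch).
outline = list(merged.exterior.coords)
print(outline)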
I have a number of 2D (possibly intersecting) polygons which I rendered using OpenGL ES on the screen. All the polygons are completely contained within the screen. What is the most timely way to find the percentage area of the union of these polygons to the total screen area? Timeliness is required as I have a requirement for the coverage area to be immediately updated whenever a polygon is shifted.
Currently, I am representing each polygon as a 2D array of booleans. Using a point-in-polygon function (from a geometry package), I sample each point (x,y) on the screen to check if it belongs to the polygon, and set polygon[x][y] = true if so, false otherwise.
After doing that to all the polygons in the screen, I loop through all the screen pixels again, and check through each polygon array, counting that pixel as "covered" if any polygon has its polygon[x][y] value set to true.
This works, but the performance is not ideal as the number of polygons increases. Are there any better ways to do this, using open-source libraries if possible? I thought of:
(1) Unioning the polygons to get one or more non-overlapping polygons, then computing the area of each polygon using the standard area-of-polygon formula and summing them up. I'm not sure how to get this to work, though.
(2) Using OpenGL somehow. Imagine that I render all these polygons with a single color. Is it possible to count the number of pixels in the screen buffer with that particular color? That would be a really nice solution.
Any efficient means for doing this?
If you know the background color and all polygons have other colors, you can read all pixels from the framebuffer with glReadPixels() and simply count the pixels whose color differs from the background.
If the first condition is not met, you may consider creating a custom framebuffer and rendering all polygons with the same color (for example (0.0, 0.0, 0.0) for the background and (1.0, 0.0, 0.0) for the polygons). Then read the resulting framebuffer and calculate the mean of the red channel across the whole screen.
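A rough sketch of that counting step, shown with desktop-GL PyOpenGL calls (OpenGL ES uses the same idea), assuming the polygons were already drawn in pure red on a black background into the currently bound framebuffer; the helper name is mine:
import numpy as np
from OpenGL.GL import (glPixelStorei, glReadPixels,
                       GL_PACK_ALIGNMENT, GL_RGB, GL_UNSIGNED_BYTE)

def coverage_fraction(width, height):
    # Avoid row padding so the buffer is exactly width * height * 3 bytes.
    glPixelStorei(GL_PACK_ALIGNMENT, 1)
    raw = glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE)
    pixels = np.frombuffer(raw, dtype=np.uint8).reshape(height, width, 3)
    # Any non-zero red value means at least one polygon covers that pixel.
    covered = np.count_nonzero(pixels[..., 0])
    return covered / float(width * height)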
If you want to get non-overlapping polygons, you can run a line-intersection algorithm. A simple variant is the Bentley–Ottmann algorithm, but even faster algorithms running in O(n log n + k) time (with n vertices and k crossings) exist.
Given a line intersection, you can unify two polygons by constructing a vertex connecting them at the intersection point. Then you follow the vertices of one polygon inside the other (you can determine which direction to go using your point-in-polygon function) and remove all vertices and edges until you reach the outside of that polygon. There you repair the polygon by creating a new vertex at the second intersection of the two polygons.
Unless I'm mistaken, this can run in O(n log n + k * p) time where p is the maximum overlap of the polygons.
After unification of the polygons you can use an ordinary area function to calculate the exact area of the polygons.
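For reference, the "ordinary area function" can be the shoelace formula; a minimal version for a simple (non-self-intersecting) polygon given as (x, y) vertex pairs:
def polygon_area(vertices):
    # Shoelace formula: half the absolute sum of cross products of consecutive vertices.
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

print(polygon_area([(0, 0), (4, 0), (4, 3), (0, 3)]))   # 12.0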
I think that attempting to calculate the area of polygons by counting pixels is too complicated and sometimes inaccurate. You can see something similar in the Stack Overflow answer about calculating the area covered by a polygon, and if you construct regular polygons, see the one about the area of a regular polygon.