Rotating a series of polygons as well as the envelope - pandas

I have a polygon (a rectangle, or very close to one) in a GeoPandas dataframe that sits at an angle relative to the x-axis, i.e. it is neither horizontal nor vertical. I have a function that splits polygons into smaller rectangles (isometric), but it only works as desired on polygons whose angle with the x-axis is a multiple of pi/2.
So my idea has been to rotate any polygon that does not satisfy this requirement, split it, and rotate it back to its original position.
For instance:
polygon =
id geometry
85 POLYGON ((49.37794 51.395203, 49.37794 51.395203, 49.37794 51.395203, 49.37794 51.395203, 49.178337 50.363914, 49.178337 50.363914, 49.178337 50.363914, 49.178337 50.363914, 59.99021 48.733814, 59.99021 48.733814, 59.99021 48.733814, 59.99021 48.733814, 60.223083 49.698566, 60.223083 49.698566, 60.223083 49.698566, 60.223083 49.698566, 49.37794 51.395203))
which looks like this:
Now, I determine its angle with the x-axis and rotate it:
polygon = pd.DataFrame(geostore_obstacles_geometry_polygon.loc[85:85,])
polygon['angle'] = polygon.apply(lambda row : polygon_angle(row['geometry']), axis = 1)
polygon = gpd.GeoDataFrame(polygon)
polygon = polygon.set_geometry('geometry')
polygon['rotated'] = polygon.apply(lambda row : shapely.affinity.rotate(row['geometry'], row['angle']), axis = 1)
polygon = polygon.set_geometry('rotated')
which gives:
This step splits the polygon into smaller pieces:
polygon['add'] = polygon.apply(lambda row : split_polygon_up(row['rotated'],side_length=side_length, shape="square", thresh=threshold), axis = 1)
polygon = polygon.explode('add')
polygon = polygon.set_geometry('add')
Before I finally rotate it back:
polygon['rotated_add'] = polygon.apply(lambda row : shapely.affinity.rotate(row['add'], -row['angle']), axis = 1)
polygon = polygon.set_geometry('rotated_add')
But, as you can imagine, this is not what I expect to get (sorry for the very ugly image).
I understand WHY it does this, but I cannot solve it. One possible solution might be to rotate all the smaller polygons together with the convex hull or envelope of their union, but I struggle to do that with GeoPandas.
I would be immensely grateful for any help on how to solve this issue. The dataframe obtained after all the transformations can be found here: https://drive.google.com/file/d/1wY7g3jsD7PNpaTkGBjbGvYArpRUr0UIk/view?usp=sharing

The relevant function, shapely.affinity.rotate(), has origin='center' as its default, so each small polygon is rotated back around its own center rather than around a common pivot. To rotate around a particular point (x, y), you must specify it explicitly with origin=(x, y).
In your particular case, the centroid of the original polygon is a good choice for (x, y).
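A minimal sketch of that fix, reusing the column names and helpers from the question (polygon_angle, split_polygon_up, side_length and threshold are assumed to be defined as above); the key point is that the forward and backward rotations share the same pivot, the original polygon's centroid:

import shapely.affinity

# Rotate forward around the ORIGINAL polygon's centroid, not the default
# origin='center' (the bounding-box center of whatever is being rotated).
polygon['rotated'] = polygon.apply(
    lambda row: shapely.affinity.rotate(row['geometry'], row['angle'],
                                        origin=row['geometry'].centroid),
    axis=1)

# Split as before (split_polygon_up is the question's own helper).
polygon['add'] = polygon.apply(
    lambda row: split_polygon_up(row['rotated'], side_length=side_length,
                                 shape="square", thresh=threshold),
    axis=1)
polygon = polygon.explode('add')
polygon = polygon.set_geometry('add')

# Rotate every piece back around the SAME pivot, so the pieces land back on
# the original footprint instead of spinning around their own centers.
polygon['rotated_add'] = polygon.apply(
    lambda row: shapely.affinity.rotate(row['add'], -row['angle'],
                                        origin=row['geometry'].centroid),
    axis=1)
polygon = polygon.set_geometry('rotated_add')

Any fixed point would work as the pivot; the centroid is simply a convenient choice that lies well inside the original polygon.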

Related

Unusual Mesh Outline PColorMesh

I am utilizing the pcolormesh function in Matplotlib to plot a series of gridded data (in parallel) across multiple map domains. The code snippet relevant to this question is as follows:
im = ax2.pcolormesh(xgrid, ygrid, data.variable.data[0], cmap=cmap, norm=norm, alpha=0.90, facecolor=None)
Where:
xgrid = array of longitude points
ygrid = array of latitude points
data.variable.data[0] = array of corresponding data values
cmap = defined colormap
norm = defined value normalization
Consider the following image generated from the provided code:
The undesired result I've found in the image above is what appears to be outlines around each grid square, or perhaps better described as patchwork that stands out slightly as the mesh alpha is reduced below 1.
I've set facecolor=None assuming that would remove these outlines, to no avail. What additions or corrections can I make to remove this feature?
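For what it's worth (this is not from the original thread, just a sketch of the knobs usually involved): facecolor controls the fill of the cells rather than the seams between them, so the parameters to experiment with are edgecolors, linewidth and antialiased, either passed to pcolormesh or set afterwards on the returned QuadMesh:

im = ax2.pcolormesh(xgrid, ygrid, data.variable.data[0],
                    cmap=cmap, norm=norm, alpha=0.90,
                    edgecolors='none', linewidth=0, antialiased=False)

# The same properties can also be tweaked after the fact:
im.set_edgecolor('face')   # or 'none'
im.set_linewidth(0)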

How to fill a line in 2D image along a given radius with the data in a given line image?

I want to fill a 2D image along its polar radius; the data are stored in an image where each row or column corresponds to the radius in the target image. How can I fill the target image efficiently, for example with iradius or a similar function? I would prefer to avoid a pixel-by-pixel operation.
Are you looking for something like this?
number maxR = 100
image rValues := realimage("I(r)",4,maxR)
rValues = 10 + trunc(100*random())
image plot :=realimage("Ring",4,2*maxR,2*maxR)
rValues.ShowImage()
plot.ShowImage()
plot = rValues.warp(iradius,0)
You might also want to check out the relevant example code from the F1 help documentation of GMS itself.
Explaining warp a bit:
plot = rValues.warp(iradius,0)
Assigns values to plot based on a value-lookup in rValues.
For each pixel in plot a coordinate position in rValues is computed, and the value is simply looked up. If the computed coordinate is non-integer, bilinear interpolation between the 4 closest points is used.
In the example, the two 'formulas' for the coordinate calculation are simply x' = iradius and y' = 0, where iradius is an expression computed from the coordinate in plot, for convenience.
You can feed any expression into the parameters for warp( ) and the command is closely related to just using the square bracket notation of addressing values. In fact, the only difference is that warp performs the bilinear interpolation of values instead of truncating the coordinates to integer values.
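For readers more at home in Python, here is a rough NumPy analogue of the same idea (a sketch, not GMS code): build a radius map for the output image and look each pixel's value up in the 1D radial profile with linear interpolation.

import numpy as np

max_r = 100
r_values = 10 + np.trunc(100 * np.random.rand(max_r))   # the I(r) profile

size = 2 * max_r
yy, xx = np.mgrid[0:size, 0:size]
iradius = np.hypot(xx - max_r, yy - max_r)               # radius of each output pixel

# Interpolated lookup: the NumPy counterpart of warp(iradius, 0).
ring = np.interp(iradius, np.arange(max_r), r_values)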

Fast check if polygon contains point between dataframes

I have two dataframes. One contains a column of Polygons, taken from an image of polygon shapes. Each polygon has a set of coordinates. This dataframe also has a "segment-id" column. I have another dataframe, containing a column of Points, also with coordinates. These Points represent pixels from the same image of Polygon shapes, and therefore have the same coordinate system. I want to give every Point the "segment-id" of the Polygon which contains it. Every Polygon contains at least one Point.
Currently, I achieve this by using a nested for loop:
for i, row in enumerate(point_df.itertuples(), 0):
    point = pixel_df.at[i, 'geometry']
    for j in range(len(polygon_df)):
        polygon = polygon_df.iat[j, 0]
        if polygon.contains(point):
            pixel_df.at[i, 'segment_id'] = polygon_df.at[j, 'segment_id']
        else:
            pass
This is extremely slow. For 100 Points, it takes around 10 seconds. I need a faster way of doing this. I have tried using apply but it is still super slow.
Hope someone can help me out, thanks very much.
For fast "is point inside polygon":
Preparation: in the code that obtains the data describing the polygons, use all the vertices to find the minimum and maximum y-coord and the minimum and maximum x-coord, and store these with the polygon's data.
1) Using the point's coords and the polygon's pre-determined minimum and maximum x and y, do a "bounding box" test. This is just a fast way to find out if the point is definitely not inside the polygon (so you can skip the more expensive steps most of the time).
2) Set a "yes/no" flag to "no"
3) For each edge of the polygon, determine whether a horizontal line passing through the point would intersect the edge, and if it does, determine the x-coord of the intersection. If the x-coord of the intersection is less than the point's x-coord, toggle (with NOT) the "yes/no" flag. Ignore "horizontal line passes through a vertex" during this step.
4) For each vertex, compare its y-coord with the point's y-coord. If they are the same, look at both edges that meet at that vertex and check whether their other endpoints lie on the same side in y. If they do (the edges form a 'V' shape or an upside-down 'V' shape), ignore the vertex. Otherwise (the edges form a '<' or '>' shape), if the vertex's x-coord is less than the point's x-coord, toggle the "yes/no" flag.
After all this is done; that "yes/no" flag will tell you if the point was in the polygon.
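A minimal Python sketch of this ray-casting test. It is not a line-for-line transcription of the steps above: the (y1 > py) != (y2 > py) half-open test folds the vertex special cases of steps 3 and 4 into one condition, which is a common way to implement the same even-odd rule.

def point_in_polygon(px, py, vertices):
    # Step 1: bounding-box test, definitely outside if beyond min/max.
    xs = [x for x, _ in vertices]
    ys = [y for _, y in vertices]
    if px < min(xs) or px > max(xs) or py < min(ys) or py > max(ys):
        return False

    inside = False                      # step 2: the "yes/no" flag
    n = len(vertices)
    for i in range(n):                  # step 3: walk the edges
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Edge straddles the horizontal line through the point
        # (half-open convention, which also covers the vertex cases).
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross < px:            # crossing lies to the left of the point
                inside = not inside
    return inside

print(point_in_polygon(0.5, 0.5, [(0, 0), (1, 0), (1, 1), (0, 1)]))  # True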

Calculating total coverage area of a union of polygons

I have a number of 2D (possibly intersecting) polygons which I rendered using OpenGL ES on the screen. All the polygons are completely contained within the screen. What is the most timely way to find the percentage area of the union of these polygons to the total screen area? Timeliness is required as I have a requirement for the coverage area to be immediately updated whenever a polygon is shifted.
Currently, I am representing each polygon as a 2D array of booleans. Using a point-in-polygon function (from a geometry package), I sample each point (x,y) on the screen to check if it belongs to the polygon, and set polygon[x][y] = true if so, false otherwise.
After doing that to all the polygons in the screen, I loop through all the screen pixels again, and check through each polygon array, counting that pixel as "covered" if any polygon has its polygon[x][y] value set to true.
This works, but the performance is not ideal as the number of polygons increases. Are there any better ways to do this, using open-source libraries if possible? I thought of:
(1) Unioning the polygons to get one or more non-overlapping polygons. Then compute the area of each polygon using the standard area-of-polygon formula. Then sum them up. Not sure how to get this to work?
(2) Using OpenGL somehow. Imagine that I am rendering all these polygons with a single color. Is it possible to count the number of pixels on the screen buffer with that certain color? This would really sound like a nice solution.
Any efficient means for doing this?
If you know the background color and all polygons have other colors, you can read all pixels from the framebuffer with glReadPixels() and simply count the pixels whose color differs from the background.
If the first condition is not met, you may consider creating a custom framebuffer and rendering all polygons with the same color (for example (0.0, 0.0, 0.0) for the background and (1.0, 0.0, 0.0) for the polygons). Next, read the resulting framebuffer and calculate the mean of the red channel across the whole screen.
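A rough sketch of the counting step in Python/PyOpenGL terms (it assumes an active GL context of the given size and polygons drawn in pure red on a black background; the viewport size here is made up, and formats may need adapting for an OpenGL ES setup):

import numpy as np
from OpenGL.GL import glReadPixels, GL_RGBA, GL_UNSIGNED_BYTE

width, height = 800, 600          # assumed viewport size
raw = glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE)
pixels = np.frombuffer(raw, dtype=np.uint8).reshape(height, width, 4)

covered = np.count_nonzero(pixels[:, :, 0])        # pixels with red > 0
coverage = covered / float(width * height)         # fraction of screen covered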
If you want to get non-overlapping polygons, you can run a line intersection algorithm. A simple variant is the Bentley–Ottmann algorithm, but even faster algorithms of O(n log n + k) (with n vertices and k crossings) are possible.
Given a line intersection, you can unify two polygons by constructing a vertex connecting both polygons on the intersection point. Then you follow the vertices of one of the polygons inside of the other polygon (you can determine the direction you have to go in using your point-in-polygon function), and remove all vertices and edges until you reach the outside of the polygon. There you repair the polygon by creating a new vertex on the second intersection of the two polygons.
Unless I'm mistaken, this can run in O(n log n + k * p) time where p is the maximum overlap of the polygons.
After unification of the polygons you can use an ordinary area function to calculate the exact area of the polygons.
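The "ordinary area function" referred to here is usually the shoelace formula; a minimal Python version for a simple (non-self-intersecting) polygon given as ordered (x, y) vertices:

def polygon_area(vertices):
    # Shoelace formula: accumulate the signed cross products of
    # consecutive vertices, then halve the absolute value.
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

print(polygon_area([(0, 0), (4, 0), (4, 3), (0, 3)]))  # 12.0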
I think that trying to calculate the area of polygons by counting pixels is too complicated and sometimes inaccurate. You can see something similar in the Stack Overflow answer about calculating the area covered by a polygon, and if you construct regular polygons, see the area of a regular polygon.

Obtaining 3D location of an object being looked at by a camera with known position and orientation

I am building an augmented reality application and I have the yaw, pitch, and roll for the camera. I want to start placing objects in the 3D environment. I want to make it so that when the user clicks, a 3D point pops up right where the camera is pointed (center of the 2D screen) and when the user moves, the point moves accordingly in 3D space. The camera does not change position, only orientation. Is there a proper way to recover the 3D location of this point? We can assume that all points are equidistant from the camera location.
I am able to accomplish this independently for two axes (OpenGL default orientation). This works for changes in the vertical axis:
x = -sin(pitch)
y = cos(pitch)
z = 0
This also works for changes in the horizontal axis:
x = 0
y = -sin(yaw)
z = cos(yaw)
I was thinking that I could just combine these into:
x = -sin(pitch)
y = sin(yaw) * cos(pitch)
z = cos(yaw)
and that seems to be close, but not exactly correct. Any suggestions would be greatly appreciated!
It sounds like you just want to convert from a rotation vector (pitch, yaw, roll) to a rotation matrix. The conversion can be seen in the Wikipedia article on rotation matrices. The idea is that once you have constructed your matrix, to transform any point you simply compute
final_pos = rot_mat*initial_pose
where final_pos and initial_pose are 3x1 vectors and rot_mat is a 3x3 matrix.
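A hedged NumPy sketch of that conversion. The rotation order and axis assignments below are assumptions (yaw about y, pitch about x, roll about z, composed as Ry @ Rx @ Rz); swap them to match your camera's yaw/pitch/roll convention.

import numpy as np

def rotation_matrix(yaw, pitch, roll):
    # Elementary rotations about y, x and z, composed as R = Ry @ Rx @ Rz.
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    Rz = np.array([[cr, -sr, 0.0], [sr, cr, 0.0], [0.0, 0.0, 1.0]])
    return Ry @ Rx @ Rz

# The camera looks down -z in the OpenGL default orientation; rotate that
# forward vector and push it out to the assumed fixed distance d.
camera_position = np.zeros(3)          # the known (fixed) camera location
d = 1.0
forward = rotation_matrix(yaw=0.3, pitch=0.1, roll=0.0) @ np.array([0.0, 0.0, -1.0])
point_3d = camera_position + d * forward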