When you make a line profile of all x-values or all y-values the extraction from each pixel is clear. But when you take a line profile along a diagonal, how does DM choose which pixels to use in the one dimensional readout?
Not really a scripting question, but I'm rather certain that it uses bi-linear interpolation between the grid-points along the drawn line. (And if perpendicular integration is enabled, it does so in an integral.) It's the same interpolation you would get for a "rotate" image.
In fact, you can think of it as a rotate-image (bi-linearly interpolated) with a 'cut-out' afterwards, potentially summed/projected onto the new X-axis.
Here is an example
Assume we have a 5 x 4 image, which gives the grid as shown below.
I'm drawing the top-left corners to indicate the coordinate-system pixel convention used in DigitalMicrograph, where
(x/y)=(0/0) is the top-left corner of the image
Now extract a LineProfile from (1/1) to (4/3). I have highlighted the pixels for those coordinates.
Note that a line drawn from the corners seems to be shifted by half a pixel from what feels 'natural', but that is a consequence of the top-left-corner convention. I think this is why a LineProfile marker is shown shifted compared to e.g. LineAnnotations.
In general, this top-left corner convention makes schematics with 'pixels' seem counter-intuitive. It is easier to think of the image simply as a grid with values at points at the given coordinates than as square pixels.
Now the maths.
The exact profile has a length of sqrt(dX^2 + dY^2) = sqrt(3^2 + 2^2) = sqrt(13), i.e. about 3.61 pixels.
As we can only have profiles with an integer number of channels, we actually extract a LineProfile of length = 4, i.e. we round up.
The angle of the profile is given by the arc-tangent of dY over dX, here atan(2/3), which is about 33.7 degrees.
So to extract the profile, we 'rotate' the grid by that angle - done by bilinear interpolation - and then extract the profile as a grid of size 4 x 1:
This means the 'values' in the profile are from the four points:
Which are each bi-linearly interpolated values from the four closest points of the original image:
In case the LineProfile is averaged over a certain width W, you do the same thing but:
extract a 2D grid of size L x W centered symmetrically over the line, i.e. the grid is shifted by (W-1)/2 perpendicular to the profile direction
sum the values along W
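This is not DM's actual code, but a minimal NumPy sketch of the same idea: bilinear sampling along the drawn line, optionally summed/averaged over a width W. The exact sampling positions along the line are my assumption.

    import numpy as np

    def bilinear(img, x, y):
        # Bilinearly interpolate img at fractional coordinates (x, y),
        # using the top-left grid-point convention described above.
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
        fx, fy = x - x0, y - y0
        return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x1]
                + (1 - fx) * fy * img[y1, x0] + fx * fy * img[y1, x1])

    def line_profile(img, p0, p1, width=1):
        (x0, y0), (x1, y1) = p0, p1
        exact_len = np.hypot(x1 - x0, y1 - y0)             # e.g. sqrt(13) ~ 3.61
        length = int(np.ceil(exact_len))                   # round up to 4 channels
        sx, sy = (x1 - x0) / length, (y1 - y0) / length    # step along the line
        ux, uy = (x1 - x0) / exact_len, (y1 - y0) / exact_len
        px, py = -uy, ux                                   # unit step perpendicular to the line
        profile = np.zeros(length)
        for w in range(width):
            off = w - (width - 1) / 2                      # shift by (W-1)/2, symmetric about the line
            for i in range(length):
                profile[i] += bilinear(img, x0 + i * sx + off * px,
                                            y0 + i * sy + off * py)
        return profile / width                             # averaged; drop the division to sum instead

    img = np.arange(20, dtype=float).reshape(4, 5)         # the 5 x 4 example image
    print(line_profile(img, (1, 1), (4, 3)))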
I am plotting meteor observation data from a sky camera, sometimes using right ascension and declination for my x and y axes, at other times azimuth and elevation. The problem I have in both cases is with the x axis when my observations span the 360 degree mark. Sometimes I get a batch of observations on the left of my plot (near zero degrees), and a batch on the right hand side (near 360 degrees), with a big expanse of nothing in the middle. Is there any easy way I can change the x axis so that the 360/0 degree wrap-over is in the centre of the plot? I would still want to show the true azimuth (or right ascension) in the axis labels.
PS. Pointing the camera elsewhere is not an option.
PPS. So in the image shown, the plots on the left-hand side should be to the right of those on the right-hand side, with the x axis running from 250 (via 360/0) to 100.
PPPS. The second image shows what I am after, but I got to that by doctoring the data, as is obvious from the scale of the x axis in that plot.
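One way to get that effect without doctoring the data, assuming matplotlib (the plotting library isn't stated) and using made-up sample azimuths, is to shift the values into a window centred on the wrap point and then relabel the ticks with the true azimuth:

    import numpy as np
    import matplotlib.pyplot as plt

    az = np.array([255., 280., 310., 350., 5., 40., 80.])   # example azimuths (degrees)
    el = np.array([10., 20., 35., 50., 55., 40., 25.])      # example elevations

    center = 355.0                                           # azimuth to place mid-plot
    shifted = (az - center + 180.0) % 360.0 - 180.0          # map into (-180, 180] around center

    fig, ax = plt.subplots()
    ax.scatter(shifted, el)

    # Relabel the ticks with the true azimuth so the 360/0 wrap sits mid-axis.
    ticks = np.arange(-180, 181, 45)
    ax.set_xticks(ticks)
    ax.set_xticklabels([f"{(t + center) % 360:g}" for t in ticks])
    ax.set_xlabel("Azimuth (deg)")
    ax.set_ylabel("Elevation (deg)")
    plt.show()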
I have two dataframes. One contains a column of Polygons, taken from an image of polygon shapes. Each polygon has a set of coordinates. This dataframe also has a "segment-id" column. I have another dataframe, containing a column of Points, also with coordinates. These Points represent pixels from the same image of Polygon shapes, and therefore have the same coordinate system. I want to give every Point the "segment-id" of the Polygon which contains it. Every Polygon contains at least one Point.
Currently, I achieve this by using a nested for loop:
for i, row in enumerate(point_df.itertuples(), 0):
    point = point_df.at[i, 'geometry']                  # the Point for this pixel
    for j in range(len(polygon_df)):
        polygon = polygon_df.iat[j, 0]                  # the Polygon geometry column
        if polygon.contains(point):
            point_df.at[i, 'segment_id'] = polygon_df.at[j, 'segment_id']
This is extremely slow. For 100 Points, it takes around 10 seconds. I need a faster way of doing this. I have tried using apply but it is still super slow.
Hope someone can help me out, thanks very much.
For fast "is point inside polygon":
Preparation: in the code that obtains the data describing the polygons, use all the vertices to find the minimum and maximum x-coord and the minimum and maximum y-coord, and store those with the polygon's data.
1) Using the point's coords and the polygon's "minimum and maximum x and y" (pre-determined during preparation); do a "bounding box" test. This is just a fast way to find out if the point is definitely not inside the polygon (so you can skip the more expensive steps most of the time).
2) Set a "yes/no" flag to "no"
3) For each edge in the polygon; determine if a horizontal line passing through the point would intersect with the edge, and if it does determine the x-coord of the intersection. If the x-coord of the intersection is less than the point's x-coord, toggle (with NOT) the "yes/no" flag. Ignore "horizontal line passes through a vertex" during this step.
4) For each vertex, compare its y-coord with the point's y-coord. If they're the same you need to look at both edges coming from that vertex to determine if the edge's vertices are in the same y direction. If the edge's vertices are in the same y direction (if the edges form a 'V' shape or upside-down 'V' shape) ignore the vertex. Otherwise (if the edges form a '<' or '>' shape), if the vertex's x-coord is less than the point's x-coord, toggle the "yes/no" flag.
After all this is done, that "yes/no" flag will tell you whether the point was in the polygon. A sketch of this is below.
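Here is a minimal Python sketch of the scheme, assuming each polygon is a plain list of (x, y) vertex tuples. It uses the common half-open edge test in place of the explicit vertex handling of steps 3 and 4 (each vertex is counted on exactly one of its two edges, which gives the same parity):

    def point_in_polygon(px, py, vertices):
        xs = [x for x, _ in vertices]
        ys = [y for _, y in vertices]
        # Step 1: cheap bounding-box rejection (min/max would normally be precomputed).
        if px < min(xs) or px > max(xs) or py < min(ys) or py > max(ys):
            return False
        inside = False                      # step 2: the "yes/no" flag
        n = len(vertices)
        for i in range(n):
            x1, y1 = vertices[i]
            x2, y2 = vertices[(i + 1) % n]
            # Step 3: does a horizontal line through the point cross this edge?
            if (y1 > py) != (y2 > py):
                x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if x_cross < px:            # intersection lies left of the point
                    inside = not inside     # toggle the flag
        return inside

    # e.g. point_in_polygon(2, 2, [(0, 0), (5, 0), (5, 4), (0, 4)]) -> True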
I'm trying to figure out exactly how line width affects a stroked line in PDF, given the current transformation matrix (CTM). Two questions...
First: how do I convert the line width to device space using the CTM? Page 208 in the PDF 1.7 Reference, which describes how to convert points using the CTM, assumes the input data is an (x, y) point. Line width is just a single value, so how do I convert it? Do I create a "dummy" point from it like (lineWidth, lineWidth)?
Second: once I make that calculation, I'll get another (x, y) point. If the CTM has different scaling factors for horizontal vs. vertical, that gives me two different line widths. How are these line widths actually applied? Does the first one (x) get applied only when drawing horizontal lines?
A concrete example for the second question: if I draw/stroke a horizontal line from (0, 0) to (4, 0) with line width (2, 1), what are the coordinates of the bounding box of the resulting rectangle (i.e., the rectangle that contains the line width)?
This is from Page 215 in the Reference, but it doesn't actually explain how the thickness of stroked lines will vary:
The effect produced in device space depends on the current transformation matrix (CTM) in effect at the time the path is stroked. If the CTM specifies scaling by different factors in the horizontal and vertical dimensions, the thickness of stroked lines in device space will vary according to their orientation.
how do I convert the line width to device space using the CTM?
The line width essentially is the size of the line perpendicular to its direction. Thus, to calculate the width after transformation by the CTM, you choose a planar vector perpendicular to the original line whose length is the line width from the current graphics state, apply the CTM without its translation part (i.e. with e and f set to 0) to that vector (embedded in three-dimensional space by setting the third coordinate to 1), and calculate the length of the resulting 2D vector (projecting onto the first two coordinates).
E.g. you have a line from (0,0) to (1,4) in current user space coordinates with a width of 1. You have to find a vector perpendicular to it, e.g. (-4,1) by rotating 90° counter clockwise, and scale it to a length of 1, i.e. ( -4/sqrt(17), 1/sqrt(17) ) in that case.
If the CTM is the one from Tikitu's answer
CTM has a horizontal scaling factor of 2 and a vertical scaling factor of 1
it would be
2 0 0
0 1 0
0 0 1
This matrix would make the line from the example above go from (0,0) to (2,4) and the "width vector" ( -4/sqrt(17), 1/sqrt(17) ) would be transformed to ( -8/sqrt(17), 1/sqrt(17) ) (the CTM already has no translation part) with a length of sqrt(65/17) which is about 1.955. I.e. the width of the resulting line (its size perpendicular to its direction) is nearly 2.
If the original line had instead been from (0,0) to (4,1) with width 1, a possible width vector choice would have been ( -1/sqrt(17), 4/sqrt(17) ). In that case the transformed line would go from (0,0) to (8,1) and the width vector would be transformed to ( -2/sqrt(17), 4/sqrt(17) ) with a length of sqrt(20/17), which is about 1.085. I.e. the width of the resulting line (perpendicular to its direction) is slightly more than 1.
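A small NumPy check of these two worked examples, assuming the PDF row-vector convention [x y 1] · M for the 3x3 matrix [[a, b, 0], [c, d, 0], [e, f, 1]]:

    import numpy as np

    def transformed_width(p0, p1, width, ctm):
        d = np.array(p1, float) - np.array(p0, float)
        perp = np.array([-d[1], d[0]]) / np.hypot(*d) * width    # perpendicular, length = width
        m = ctm.copy()
        m[2, :2] = 0.0                                           # drop the translation part (e, f)
        v = np.array([perp[0], perp[1], 1.0]) @ m                # transform the width vector
        return np.hypot(v[0], v[1])                              # its length in device space

    ctm = np.array([[2.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])      # scale x by 2, y by 1
    print(transformed_width((0, 0), (1, 4), 1, ctm))             # ~1.955 == sqrt(65/17)
    print(transformed_width((0, 0), (4, 1), 1, ctm))             # ~1.085 == sqrt(20/17)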
You seem to be interested in the "corners" of the line. For this you have to take start and end of the transformed line and add or subtract half the transformed width vector. In the samples above:
(original line from (0,0) to (1,4)): ( -4/sqrt(17), 1/(2*sqrt(17)) ), ( 4/sqrt(17), -1/(2*sqrt(17)) ), ( 2-4/sqrt(17), 4+1/(2*sqrt(17)) ), ( 2+4/sqrt(17), 4-1/(2*sqrt(17)) );
(original line from (0,0) to (4,1)): ( -1/sqrt(17), 2/sqrt(17) ), ( 1/sqrt(17), -2/sqrt(17) ), ( 8-1/sqrt(17), 1+2/sqrt(17) ), ( 8+1/sqrt(17), 1-2/sqrt(17) ).
Don't forget, though, that PDF lines often are not cut off at the end but instead have some cap. And furthermore remember the special meaning of line width 0.
I don't know anything about PDF internals, but I can make a guess at what that passage might mean, based on knowing a bit about using matrices to represent linear transformations.
If you imagine your stroked line as a rectangle (long and thin, but with a definite width) and apply the CTM to the four corner points, you'll see how the orientation of the line changes its width when the CTM has different horizontal and vertical scaling factors.
If your CTM has a horizontal scaling factor of 2 and a vertical scaling factor of 1, think about lines at various angles:
a horizontal line (a short-but-wide rectangle) gets its length doubled, and its "height" (the width of the line) stays the same;
a vertical line (a tall-and-thin rectangle) gets its width doubled (i.e., the line gets twice as thick), and its length stays the same;
lines at various angles get thicker by different degrees, depending on the angle, because they get stretched horizontally but not vertically, e.g.
the thickness of a line at 45 degrees is measured diagonally (45 degrees the other way), so it gets somewhat thicker (some horizontal stretching), but not twice as thick (the vertical component of the diagonal didn't get bigger). (You can figure out the thickness with two applications of the Pythagorean theorem; it's about 1.58 times greater, or sqrt(5)/sqrt(2).)
If this story is correct, you can't convert line width using the CTM: it is simply different case-by-case, depending on the orientation of the line. What you can convert is the width of a particular line, with a particular orientation, via the trick of thinking of the line as a solid area and running its corners individually through the CTM. (This also means that "the same" line, with the same thickness, will look different as you vary its orientation, if your CTM has different horizontal and vertical scaling factors.)
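A quick NumPy illustration of that corner-transform idea for the 45-degree case, using a hypothetical unit-width line from (0,0) to (1,1) and measuring the diagonal thickness between transformed corners:

    import numpy as np

    scale = np.diag([2.0, 1.0])                            # horizontal x2, vertical x1

    p0, p1 = np.array([0.0, 0.0]), np.array([1.0, 1.0])    # a 45-degree line, width 1
    d = (p1 - p0) / np.linalg.norm(p1 - p0)
    half = 0.5 * np.array([-d[1], d[0]])                   # half the width, perpendicular to the line

    corners = np.array([p0 - half, p0 + half, p1 + half, p1 - half])   # the line as a rectangle
    tc = corners @ scale.T                                  # run each corner through the CTM

    # Thickness measured "diagonally": distance between the two transformed
    # corners at the same end of the line.
    print(np.linalg.norm(tc[1] - tc[0]))                    # ~1.58 == sqrt(5)/sqrt(2)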
I have a number of 2D (possibly intersecting) polygons which I rendered using OpenGL ES on the screen. All the polygons are completely contained within the screen. What is the most timely way to find the percentage area of the union of these polygons to the total screen area? Timeliness is required as I have a requirement for the coverage area to be immediately updated whenever a polygon is shifted.
Currently, I am representing each polygon as a 2D array of booleans. Using a point-in-polygon function (from a geometry package), I sample each point (x,y) on the screen to check if it belongs to the polygon, and set polygon[x][y] = true if so, false otherwise.
After doing that to all the polygons in the screen, I loop through all the screen pixels again, and check through each polygon array, counting that pixel as "covered" if any polygon has its polygon[x][y] value set to true.
This works, but the performance is not ideal as the number of polygons increases. Are there any better ways to do this, using open-source libraries if possible? I thought of:
(1) Unioning the polygons to get one or more non-overlapping polygons. Then compute the area of each polygon using the standard area-of-polygon formula. Then sum them up. Not sure how to get this to work?
(2) Using OpenGL somehow. Imagine that I am rendering all these polygons with a single color. Is it possible to count the number of pixels on the screen buffer with that certain color? This would really sound like a nice solution.
Any efficient means for doing this?
If you know the background color and all polygons have other colors, you can read all the pixels from the framebuffer with glReadPixels() and simply count the pixels whose color differs from the background.
If the first condition is not met, you may consider creating a custom framebuffer and rendering all polygons with the same color (for example (0.0, 0.0, 0.0) for the background and (1.0, 0.0, 0.0) for the polygons). Next, read the resulting framebuffer and calculate the mean of the red channel across the whole screen.
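As a rough sketch of the read-back step, assuming PyOpenGL and NumPy, and that the polygons have already been rendered in red on a black background:

    import numpy as np
    from OpenGL.GL import glReadPixels, GL_RGBA, GL_UNSIGNED_BYTE

    def coverage_fraction(width, height):
        # Read the current framebuffer back to the CPU.
        raw = glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE)
        pixels = np.frombuffer(raw, dtype=np.uint8).reshape(height, width, 4)
        # Count pixels whose red channel is non-zero, i.e. covered by a polygon.
        covered = np.count_nonzero(pixels[:, :, 0])
        return covered / float(width * height)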
If you want to get non-overlapping polygons, you can run a line intersection algorithm. A simple variant is the Bentley–Ottmann algorithm, but even faster algorithms of O(n log n + k) (with n vertices and k crossings) are possible.
Given a line intersection, you can unify two polygons by constructing a vertex connecting both polygons on the intersection point. Then you follow the vertices of one of the polygons inside of the other polygon (you can determine the direction you have to go in using your point-in-polygon function), and remove all vertices and edges until you reach the outside of the polygon. There you repair the polygon by creating a new vertex on the second intersection of the two polygons.
Unless I'm mistaken, this can run in O(n log n + k * p) time where p is the maximum overlap of the polygons.
After unification of the polygons you can use an ordinary area function to calculate the exact area of the polygons.
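For that area step, the standard shoelace formula is enough; a short Python version, assuming each polygon is an ordered list of (x, y) vertices:

    def polygon_area(vertices):
        # Shoelace formula: area of a simple polygon from its ordered vertices.
        n = len(vertices)
        s = 0.0
        for i in range(n):
            x1, y1 = vertices[i]
            x2, y2 = vertices[(i + 1) % n]
            s += x1 * y2 - x2 * y1
        return abs(s) / 2.0

    # Coverage = sum of the unified, non-overlapping polygon areas / screen area,
    # e.g. polygon_area([(0, 0), (4, 0), (4, 3), (0, 3)]) == 12.0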
I think that attempting to calculate the area of polygons by counting pixels is too complicated and sometimes inaccurate. You can see something similar in the Stack Overflow answer about calculating the area covered by a polygon, and if you construct regular polygons, see the formula for the area of a regular polygon.
Let's say I have circle bouncing around inside a rectangular area. At some point this circle will collide with one of the surfaces of the rectangle and reflect back. The usual way I'd do this would be to let the circle overlap that boundary and then reflect the velocity vector. The fact that the circle actually overlaps the boundary isn't usually a problem, nor really noticeable at low velocity. At high velocity it becomes quite clear that the circle is doing something it shouldn't.
What I'd like to do is to programmatically take reflection into account and place the circle at its proper position before displaying it on the screen. This means that I have to calculate the point where it hits the boundary between its current position and its future position, rather than calculating its new position and then checking if it has hit the boundary.
This is a little bit more complicated than the usual circle/rectangle collision problem. I have a vague idea of how I should do it: basically create a bounding rectangle between the current position and the new position, which brings up a slew of problems of its own (since the rectangle is rotated according to the direction of the circle's velocity). However, I'm thinking that this is a common problem, and that a common solution already exists.
Is there a common solution to this kind of problem? Perhaps some basic theories which I should look into?
Since you just have a circle and a rectangle, it's actually pretty simple. A circle of radius r bouncing around inside a rectangle of dimensions w, h can be treated the same as a point p at the circle's center bouncing inside a smaller rectangle inset by r on each side, i.e. with corners at (r, r) and (w-r, h-r).
Now position update becomes simple. Given your point at position x, y and a per-frame velocity of dx, dy, the updated position is x+dx, y+dy - except when you cross a boundary. If, say, you end up with x+dx > W (letting W = w-r), then you do the following:
crossover = (x+dx) - W // this is how far "past" the edge your ball went
x = W - crossover // so you bring it back the same amount on the correct side
dx = -dx // and flip the velocity to the opposite direction
And similarly for y. You'll have to set up a similar (reflected) check for the opposite boundaries in each dimension.
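Putting both boundaries together, a minimal Python sketch of that per-frame update (with axis limits lo = r and hi = w - r as above):

    def bounce_axis(pos, vel, lo, hi):
        # Advance one coordinate by one frame, reflecting off either boundary.
        pos += vel
        if pos > hi:
            pos = hi - (pos - hi)   # bring it back by the overshoot
            vel = -vel              # and flip the velocity
        elif pos < lo:
            pos = lo + (lo - pos)
            vel = -vel
        return pos, vel

    # Per frame, for a circle of radius r in a w x h box:
    #   x, dx = bounce_axis(x, dx, r, w - r)
    #   y, dy = bounce_axis(y, dy, r, h - r)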
At each step, you can calculate the projected/expected position of the circle for the next frame.
If this lies outside the rectangle, then you can then use the distance from the old circle position to the rectangle's edge and the amount "past" the rectangle's edge that the next position lies at (the interpenetration) to linearly interpolate and determine the precise time when the circle "hits" the rectangle edge.
For example, if the circle is 10 pixels away from the rectangle's edge and is predicted to move to 5 pixels beyond it, you know that for 2/3rds of the timestep (10/15ths) it moves on its original path, then is reflected and continues on its new path for the remaining 1/3rd of the timestep (5/15ths). By calculating these two parts of the motion and "adding" the translations together, you can find the correct new position.
(Of course, it gets more complicated if you hit near a corner, as there may be several collisions during the timestep, off different edges. And if you have more than one circle moving, things get a lot more complex. But that's where you can start for the case you've asked about)
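A small sketch of that interpolation for a single axis, with hypothetical names (edge is the boundary coordinate the circle's centre may not pass, and the motion is assumed to be towards it):

    def step_with_reflection(x, dx, edge):
        # x: current centre coordinate, dx: displacement this frame (dx > 0, x < edge).
        if x + dx <= edge:
            return x + dx, dx                 # no collision this frame
        t_hit = (edge - x) / dx               # fraction of the timestep before impact
        remaining = (1.0 - t_hit) * dx        # distance still to travel after impact
        return edge - remaining, -dx          # reflected position and flipped velocity

    # The 10-pixels-away / 5-pixels-past example: t_hit = 10/15, then the
    # remaining 5 pixels are travelled back from the edge.
    print(step_with_reflection(0.0, 15.0, 10.0))   # (5.0, -15.0)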
Reflection across a rectangular boundary is incredibly simple. Just take the amount that the object passed the boundary and subtract it from the boundary position. If the position without reflecting would be (-0.8,-0.2) for example and the upper left corner is at (0,0), the reflected position would be (0.8,0.2).