JTS with lat/lon (latitude-longitude) coordinates

I have spatial data whose coordinates are all lat/lon pairs (with about 10 decimal digits of precision), stored in a database as WGS84 data. Some of the data is represented as polygons that are the union of smaller polygons whose boundaries are stored. I also have a number of points from which I build line segments (just 2 points in each segment) that I later use for intersection tests against the polygons.
I'm using a SpatialIndex to speed up my queries, so I insert the envelopes of all polygons into a tree (tested with both Quadtree and STRtree). Then I connect two points into a line segment and use its envelope to query the tree for possible intersections. The problem is that I get pretty much all the polygons as a result, which is clearly wrong. To give you some idea of the real scale of my data: I have about 100 polygons that cover the whole of North America, and each line covers a very, very small part of a single polygon. Ideally, I would expect no more than 2 polygons as a result.
I'm using JTS for this calculation and I'm aware that it's not really suited for spherical data, so can you suggest another library/tool to achieve the desired behaviour, or possibly a workaround (for example, projecting before using JTS)?
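For reference, here is roughly what the setup described above looks like in JTS. This is only a sketch: it assumes the org.locationtech.jts package layout and leaves out loading the polygons from the database.

    import java.util.ArrayList;
    import java.util.List;
    import org.locationtech.jts.geom.Coordinate;
    import org.locationtech.jts.geom.Geometry;
    import org.locationtech.jts.geom.GeometryFactory;
    import org.locationtech.jts.geom.LineString;
    import org.locationtech.jts.index.strtree.STRtree;

    public class SegmentQuery {

        // Build the index once over all polygon envelopes.
        static STRtree buildIndex(List<Geometry> polygons) {
            STRtree index = new STRtree();
            for (Geometry polygon : polygons) {
                index.insert(polygon.getEnvelopeInternal(), polygon);
            }
            return index;
        }

        // Query with a two-point segment and keep only true intersections.
        static List<Geometry> intersecting(STRtree index,
                                           double lon1, double lat1,
                                           double lon2, double lat2) {
            GeometryFactory gf = new GeometryFactory();
            LineString segment = gf.createLineString(new Coordinate[] {
                new Coordinate(lon1, lat1),   // JTS is planar: x = lon, y = lat
                new Coordinate(lon2, lat2)
            });
            List<Geometry> result = new ArrayList<>();
            for (Object candidate : index.query(segment.getEnvelopeInternal())) {
                Geometry polygon = (Geometry) candidate;
                if (polygon.intersects(segment)) {   // exact test after the coarse envelope filter
                    result.add(polygon);
                }
            }
            return result;
        }
    }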

If you only have North America, just rotate the earth by 90 degrees so that Alaska is no longer in the far east. (Fun fact: Alaska is the northernmost, westernmost, and easternmost state of the U.S.) Then your rectangles should be okay.
There are a number of non-trivial cases though when working with spherical data. Depending on how your data is defined, your polygon borders may actually be bent lines, instead of straight lines. Consider this screenshot of Google Ingress: https://lh4.ggpht.com/S_9jrMqf08JfIbr7DgUDH96rvXMK4wOGtaSKYPGCruXv2HE4oeRuEaQIDIywMgH4198=h900
I read somewhere that the mismatch of the "fog" texture and the green line visible in the left field is due to the two drawing functions using different approximations. One is always a straight line, whereas the other follows the curvature of the earth. If you have a large field (polygon!), the error becomes worse.
"Intersection" becomes a tricky term when your data consists of non-straight lines on the surface of a sphere, unfortunately; and a "straight" line on the surface of earth will often yield an arctan type curve in latlon coordinates.
Projections: these can help, but mostly when your data is local. UTM projections are pretty good, but you need at least 9 UTM zones to cover North America without Alaska. As long as your data is within one UTM zone, projecting the data into that zone and then working in 2D Euclidean space should work well. But if it gets larger than that, you may need to stitch together different projections, and that is really messy, too.
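If the data does fit into one zone, here is a minimal sketch of reprojecting a lon/lat pair before handing it to JTS. It assumes proj4j (org.locationtech.proj4j) as the projection library, which is only one option (GeoTools would work too), and uses EPSG:32614 (UTM zone 14N) purely as an example zone:

    import org.locationtech.proj4j.CRSFactory;
    import org.locationtech.proj4j.CoordinateReferenceSystem;
    import org.locationtech.proj4j.CoordinateTransform;
    import org.locationtech.proj4j.CoordinateTransformFactory;
    import org.locationtech.proj4j.ProjCoordinate;

    public class ToUtm {
        // Reproject a single lon/lat pair into a UTM zone before handing it to JTS.
        // EPSG:32614 (UTM zone 14N) is only an example; pick the zone your data falls in.
        public static double[] lonLatToUtm(double lon, double lat) {
            CRSFactory crsFactory = new CRSFactory();
            CoordinateReferenceSystem wgs84 = crsFactory.createFromName("EPSG:4326");
            CoordinateReferenceSystem utm14n = crsFactory.createFromName("EPSG:32614");
            CoordinateTransform toUtm =
                new CoordinateTransformFactory().createTransform(wgs84, utm14n);

            ProjCoordinate src = new ProjCoordinate(lon, lat);   // proj4j expects lon, lat order
            ProjCoordinate dst = new ProjCoordinate();
            toUtm.transform(src, dst);
            return new double[] { dst.x, dst.y };                // easting, northing in metres
        }
    }

Build the envelopes and do the intersection tests on the projected coordinates; the envelope query then reflects actual metric extents instead of degree boxes.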

Related

GPS distance: Pythagoras' on an Equirectangular approximation vs Haversine formula errors at different scales?

I'm trying to decide whether it makes sense, in terms of CPU processing time, to use the more complex Haversine formula instead of the faster Pythagorean formula. While there seems to be a pretty unanimous answer along the lines of "you can use Pythagoras' formula for acceptable results on small distances, but Haversine is better", I cannot find even a vague definition of what "small distances" means.
This page, linked in the top answer to the very popular question Calculate distance between two latitude-longitude points? claims:
If performance is an issue and accuracy less important, for small distances Pythagoras' theorem can be used on an equirectangular projection:*
Accuracy is somewhat complex: along meridians there are no errors, otherwise they depend on distance, bearing, and latitude, but are small enough for many purposes*
The asterisk even says "Anyone care to quantify them?"
But this answer claims that the error is about 0.1% at 1000 km (though it doesn't cite any reference, just personal observations), so for 4 km (even assuming the percentage doesn't shrink for the much smaller distance) that would mean under 4 m of error, which for public-access GPS is around the best open-sky GPS accuracy.
Now, I don't know what the average Joe thinks of when they say "small distances", but for me 4 km is definitely not a small distance (I'm thinking more of tens of meters). So I would be grateful if someone could link to or calculate a table of errors just like the one in this answer to Measuring accuracy of latitude and longitude?, but I assume the errors would be higher near the poles, so maybe choose 3 representative latitudes (5°, 45° and 85°?) and calculate the error with respect to the decimal degree place.
Of course, I would also be happy with an answer that gives an exact meaning to "small distances".
Yes ... from 10 meters up to 1 km you're going to be very accurate using plain old Pythagoras' theorem. It's really ridiculous that nobody talks about this, especially considering how much computational power you save.
Proof:
Take the top of the earth, since it will be the worst case: the top 90 miles or so around the pole, viewed from above, form a circle with the longitude lines intersecting in the middle.
Note that as you zoom in to an area as small as 1 km, just 50 miles from the pole, what originally looked like a trapezoid with curved top and bottom borders essentially looks like a nearly perfect rectangle. In other words, we can assume rectilinearity at 1 km, and especially at a mere 10 m.
Now, it's true of course that degrees of longitude are much shorter near the poles than at the equator. For example, any slack-jawed yokel can see that the rectangles made by the latitude and longitude lines grow taller, with the aspect ratio increasing, as you get closer to the poles. In fact, the longitude distance is simply what it would be at the equator multiplied by the cosine of the latitude anywhere along the path, i.e. with "L" (longitude distance) and "l" (latitude distance) both spanning the same number of degrees:
LATcm = Latitude at *any* point along the path (because it's tiny compared to the earth)
L = l * cos(LATcm)
Thus, for 1 km or less (even near the poles), we can calculate the distance very accurately using Pythagoras' theorem, like so:
Where: latitude1, longitude1 = coordinates of the start point (in degrees)
and: latitude2, longitude2 = coordinates of the end point (in degrees)
distance = sqrt((latitude2 - latitude1)^2 + ((longitude2 - longitude1) * cos(latitude1))^2) * 111,139
where 111,139 is roughly the number of meters in one degree at the equator,
because we have to convert the result from degrees to meters.
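If you want to check the numbers yourself, here is a small sketch of both formulas side by side. It uses the ~111,139 m-per-degree figure from above and a 6,371 km mean earth radius; the test coordinates in main are made up:

    public class FlatVsHaversine {
        static final double METERS_PER_DEGREE = 111_139.0;  // rough value used in the answer above
        static final double EARTH_RADIUS_M = 6_371_000.0;   // mean earth radius

        // Equirectangular / "Pythagoras with a cosine correction" approximation.
        static double flatDistance(double lat1, double lon1, double lat2, double lon2) {
            double dLat = lat2 - lat1;
            double dLon = (lon2 - lon1) * Math.cos(Math.toRadians(lat1));
            return Math.sqrt(dLat * dLat + dLon * dLon) * METERS_PER_DEGREE;
        }

        // Haversine formula for comparison.
        static double haversine(double lat1, double lon1, double lat2, double lon2) {
            double phi1 = Math.toRadians(lat1), phi2 = Math.toRadians(lat2);
            double dPhi = Math.toRadians(lat2 - lat1);
            double dLambda = Math.toRadians(lon2 - lon1);
            double a = Math.sin(dPhi / 2) * Math.sin(dPhi / 2)
                     + Math.cos(phi1) * Math.cos(phi2)
                     * Math.sin(dLambda / 2) * Math.sin(dLambda / 2);
            return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
        }

        public static void main(String[] args) {
            // Two points roughly 1 km apart at 60 degrees latitude (made-up test coordinates).
            double d1 = flatDistance(60.0, 10.0, 60.005, 10.01);
            double d2 = haversine(60.0, 10.0, 60.005, 10.01);
            System.out.printf("flat: %.2f m, haversine: %.2f m%n", d1, d2);
        }
    }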
A neat thing about this is that GPS systems usually take measurements about every 10 m or less, which means you can stay very accurate over very large distances by summing up the results of this equation, as accurate as the Haversine formula. The super-tiny errors don't magnify as you sum the total, because they are a percentage that stays the same as the pieces are added up.
The reality, however, is that the Haversine formula (which is very accurate) isn't difficult, but relatively speaking it will consume at least 3 times more processor time, and is up to 31x more computationally intensive according to this post: https://blog.mapbox.com/fast-geodesic-approximations-with-cheap-ruler-106f229ad016.
For me, this formula came in useful when I was using a system (Google Sheets) that couldn't give me the significant digits necessary for the Haversine formula.

Is it possible to create Thiessen polygons within GIS software, but weighted according to a DEM?

Basically what I'm looking for is an algorithm or an extension similar to least-cost analysis, but instead of using points on top of a DEM to create a path (line vector) between the points, I wish to create Thiessen (Voronoi) polygons (centered on points) whose spatial limits would be defined by the DEM.
So, for example, a border between 2 polygons would be determined by a least-cost analysis between the center points of the 2 polygons. The aim would then be, instead of getting a set of Thiessen polygons with arrow-straight borders (like in the picture), to create a set of polygons whose limits are determined by the DEM (relief). Sort of like a watershed centered on a single point.
Btw, it would be great if there was a solution applicable in QGIS.
Thanks!
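Not a QGIS answer, but as a sketch of the underlying algorithm: a DEM-weighted Voronoi diagram is essentially a cost-allocation raster, i.e. a multi-source Dijkstra over a cost surface derived from the DEM, where every cell ends up assigned to the seed point it can be reached from most cheaply. The cost function below (1 plus the absolute elevation difference between neighbouring cells) is only a placeholder assumption; a real least-cost analysis would use a proper slope/friction model.

    import java.util.PriorityQueue;

    public class CostAllocation {

        // dem: elevation grid; seeds: {row, col, id} for each center point.
        // Returns a grid where each cell holds the id of its cheapest-to-reach seed.
        public static int[][] allocate(double[][] dem, int[][] seeds) {
            int rows = dem.length, cols = dem[0].length;
            double[][] cost = new double[rows][cols];
            int[][] owner = new int[rows][cols];
            for (int r = 0; r < rows; r++)
                for (int c = 0; c < cols; c++) {
                    cost[r][c] = Double.POSITIVE_INFINITY;
                    owner[r][c] = -1;
                }

            // queue entries: {accumulatedCost, row, col, seedId}
            PriorityQueue<double[]> pq =
                new PriorityQueue<>((a, b) -> Double.compare(a[0], b[0]));
            for (int[] s : seeds) {
                cost[s[0]][s[1]] = 0.0;
                owner[s[0]][s[1]] = s[2];
                pq.add(new double[]{0.0, s[0], s[1], s[2]});
            }

            int[] dr = {-1, 1, 0, 0};
            int[] dc = {0, 0, -1, 1};
            while (!pq.isEmpty()) {
                double[] e = pq.poll();
                int r = (int) e[1], c = (int) e[2];
                if (e[0] > cost[r][c]) continue;     // stale queue entry
                for (int k = 0; k < 4; k++) {
                    int nr = r + dr[k], nc = c + dc[k];
                    if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
                    // placeholder cost: moving uphill or downhill is expensive
                    double step = 1.0 + Math.abs(dem[nr][nc] - dem[r][c]);
                    double nd = cost[r][c] + step;
                    if (nd < cost[nr][nc]) {
                        cost[nr][nc] = nd;
                        owner[nr][nc] = (int) e[3];
                        pq.add(new double[]{nd, nr, nc, e[3]});
                    }
                }
            }
            return owner;   // contiguous regions = DEM-weighted "Thiessen" polygons
        }
    }

The resulting owner grid can then be polygonized; the borders fall exactly where two seeds are equally cheap to reach, which is the least-cost analogue of a Thiessen boundary.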

USA and Russia geometry extracted from BigQuery has a visual distortion

I am using this query to extract the geometries of all countries from the BigQuery public dataset; see the question here:
how to extract all countries geometry from Openstreet map dataset in BigQuery
I use R to draw the results.
I tried Kepler.GL and it gave me the same results.
Something is wrong with Russia and the USA.
I know little about R visualization, but what is probably happening is that you are getting WKT text from BigQuery and feeding it to R, which has different assumptions.
The issue is that your R package probably treats WKT differently than BigQuery does. WKT semantics depend on the spatial reference system (SRS) used, which can be geographic (non-projected, using a sphere or ellipsoid) or projected (a flat map). BigQuery uses a geographic system, so an edge between points A and B is the shortest geodesic path. Most visualization systems use projected coordinates and assume a flat map, where the edge between A and B is the shortest straight line on the flat map.
While this does not matter too much in many cases, it still affects precision when you have long edges. But when an edge crosses the anti-meridian (the 180-degree meridian) you get a big problem. An edge between (-169, 66) (the eastern edge of Russia) and, say, (176, 70) (a nearby point in the Chukchi Sea) is relatively short on the sphere: it crosses the anti-meridian and spans 15 degrees of longitude. But the same edge on the flat map spans 345 degrees of longitude and crosses most of the map! These are the long, near-horizontal lines you see.
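To make the longitude arithmetic concrete, here is a tiny sketch using the two example points above: the flat-map span is the plain difference of longitudes, while the geodesic edge takes the shorter way around, across the anti-meridian.

    public class LonSpan {
        // Longitude span as a flat map sees it: plain difference.
        static double flatSpan(double lon1, double lon2) {
            return Math.abs(lon2 - lon1);
        }

        // Longitude span of the geodesic edge: take the shorter way around.
        static double wrappedSpan(double lon1, double lon2) {
            double d = Math.abs(lon2 - lon1) % 360.0;
            return d > 180.0 ? 360.0 - d : d;
        }

        public static void main(String[] args) {
            System.out.println(flatSpan(176.0, -169.0));     // 345 degrees across the flat map
            System.out.println(wrappedSpan(176.0, -169.0));  // 15 degrees across the anti-meridian
        }
    }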
What should you do?
If R has a package that supports a geographic SRS (it is sometimes an option to use geodesic edges), you could try it.
Or you can let BigQuery convert the geography from the geographic SRS to a flat map that R will understand, using the ST_AsGeoJson function. GeoJSON is defined on a flat map, so BigQuery's ST_AsGeoJson converts the semantics from the geographic SRS to a flat-map SRS. You then visualize the GeoJSON string instead of the WKT string in R.
ST_AsGeoJson does a lot of work to make the result conform to the GeoJSON spec and the flat map. It splits the parts of the geography that lie east and west of the anti-meridian, so you don't get edges that cross it, and it approximates geodesic edges with flat-map edges. But it makes the visualization side much easier.

Solidworks Feature Recognition on a fill pattern/linear pattern

I am currently creating a feature and patterning it across a flat plane to get the maximum number of features to fit on the plane. I do this frequently enough to warrant building some sort of macro for it if possible. The issue I run into is that I still have to manually set the spacing between the parts. I want to be able to create a feature and have it determine the "best fit" spacing for a given area while avoiding overlaps. I have had very little luck finding any resources describing this. Any information or links to potentially helpful resources would be much appreciated!
Thank you.
Before you start the linear pattern bit:
Select the Face2 of that feature and get the outermost Loop2 of edges. You can test for that using Loop2.IsOuter.
Now:
If the loop has one edge, that means it's a circle, and the spacing must be greater than the circle's diameter.
If the loop has more than one edge, you need to calculate all the distances between the vertices and assume that the largest distance is the safest spacing.
NOTE: if one of the edges is a spline, then you need a different strategy:
You would need to convert the face into a sketch and find the coordinates of that spline to calculate the largest distances.
Example: the distance between the edges is smaller than the distance between the summits of the splines. If the linear pattern has a vertical direction, then the spacing has to be greater than the distance between the summits.
When I say distance, I mean the distance projected onto the linear pattern direction.
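As a plain-geometry sketch of that last point (not SolidWorks API code): the safe spacing is the extent of the outer-loop vertices measured along the pattern direction, i.e. the largest difference between their projections onto that direction. It only uses vertices, so it covers the straight-edge case, not the spline case described above.

    public class PatternSpacing {
        // vertices: {x, y} pairs of the outer loop; (dirX, dirY): the linear pattern direction.
        // Returns the feature extent along that direction; any spacing above it avoids overlap.
        public static double safeSpacing(double[][] vertices, double dirX, double dirY) {
            double len = Math.hypot(dirX, dirY);
            double ux = dirX / len, uy = dirY / len;          // unit pattern direction
            double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
            for (double[] v : vertices) {
                double t = v[0] * ux + v[1] * uy;             // projection onto the direction
                min = Math.min(min, t);
                max = Math.max(max, t);
            }
            return max - min;
        }
    }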

Calculating distance in m in xyz between GPS coordinates that are close together

I have a set of GPS coordinates and I want to find the speed required for a UAV to travel between them. I'm doing this by calculating the distance in x, y, z and then dividing by the travel time to get m/s.
I know the great-circle distance, but I assume this will be inaccurate since the points are all relatively close together (within 10 m)?
Is there an accurate way to do this?
For small distances you can use the haversine formula without a relevant loss of accuracy compared to, for example, Vincenty's formula. Plus, it's designed to be accurate for very small distances. You can read up on this here if you are interested.
You can do this by converting lat/long/alt into XYZ format for both points. Then figure out the rotation angles to move one of those points (usually the oldest point) so that it would be at lat=0, long=0, alt=0. Rotate the second position report (the newest point) by the same rotation angles. If you do it all correctly, you will find X equals the east offset, Y equals the north offset, and Z equals the up offset. You can use the Pythagorean theorem with the X and Y (east and north) offsets to determine the horizontal distance traveled. Normally, you just ignore the altitude differences and work with horizontal data only.
All of this assumes you are using accurate formulas to convert lat/lon/alt into XYZ. It also assumes you have enough precision in the lat/lon/alt values to be accurate. Approximations are not good if you want good results. Normally, you need about 6 decimal digits of precision in lat/lon values to compute positions down to the meter level of accuracy.
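As a sketch of that conversion (WGS84 constants; the coordinates in main are made up): lat/lon/alt goes to ECEF XYZ, and the difference between the two points is then rotated into east/north/up offsets relative to the first point.

    public class GpsEnu {
        static final double A = 6378137.0;                 // WGS84 semi-major axis (m)
        static final double F = 1.0 / 298.257223563;       // WGS84 flattening
        static final double E2 = F * (2 - F);              // first eccentricity squared

        // Geodetic (degrees, metres) -> ECEF (metres).
        static double[] toEcef(double latDeg, double lonDeg, double alt) {
            double lat = Math.toRadians(latDeg), lon = Math.toRadians(lonDeg);
            double sinLat = Math.sin(lat), cosLat = Math.cos(lat);
            double n = A / Math.sqrt(1 - E2 * sinLat * sinLat);
            return new double[] {
                (n + alt) * cosLat * Math.cos(lon),
                (n + alt) * cosLat * Math.sin(lon),
                (n * (1 - E2) + alt) * sinLat
            };
        }

        // East/North/Up offsets of point 2 relative to point 1 (the "rotate to lat=0, lon=0" step).
        static double[] enuOffset(double lat1, double lon1, double alt1,
                                  double lat2, double lon2, double alt2) {
            double[] p1 = toEcef(lat1, lon1, alt1);
            double[] p2 = toEcef(lat2, lon2, alt2);
            double dx = p2[0] - p1[0], dy = p2[1] - p1[1], dz = p2[2] - p1[2];
            double lat = Math.toRadians(lat1), lon = Math.toRadians(lon1);
            double sinLat = Math.sin(lat), cosLat = Math.cos(lat);
            double sinLon = Math.sin(lon), cosLon = Math.cos(lon);
            double east  = -sinLon * dx + cosLon * dy;
            double north = -sinLat * cosLon * dx - sinLat * sinLon * dy + cosLat * dz;
            double up    =  cosLat * cosLon * dx + cosLat * sinLon * dy + sinLat * dz;
            return new double[] { east, north, up };
        }

        public static void main(String[] args) {
            // Two hypothetical fixes roughly 10 m apart, 1 second apart in time.
            double[] enu = enuOffset(52.0, 4.0, 30.0, 52.00008, 4.00004, 30.5);
            double horizontal = Math.hypot(enu[0], enu[1]);  // ignore the up component, as above
            System.out.printf("horizontal distance: %.2f m, speed: %.2f m/s%n",
                              horizontal, horizontal / 1.0);
        }
    }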
Keep in mind that this method doesn't work very well if you haven't moved far (you want more than about 10 or 20 meters; more is better). There is enough noise in the GPS position reports that you are going to get jumpy velocity values, which you will need to filter further to get good accuracy. The math isn't the problem here; it's the inherent noise in the GPS position reports. When you have good reports, you will get good velocity.
A GPS receiver doesn't normally use this approach to determine velocity. It looks at the way the Doppler values change for each satellite and factors in the current position to work out the velocity. This works reasonably well when the vehicle is moving, and it is a much faster way to detect changes in velocity (for instance, to release a position clamp). The normal user doesn't have access to the internal Doppler values, and the math gets very complicated, so it's not something you can do yourself.