Checking for intersection between an area and a set of points with heterogeneous geographical data - gps

I have an application which deals with geographical data and, in particular, tracks. Given an area (a radius around a geopoint, or a bounding box of geopoints) I need to find, among all these tracks, the ones that have points that intersect with said area.
For the tracks, it seems I will be dealing with both EPSG:3857 and EPSG:4326: they are either inputted by map interaction (OpenStreetMap, which uses EPSG:4326 data but EPSG:3857 for tiles) or GPS data (which is EPSG:4326). As for the area, I have a blank slate, but I think it will come in both forms (the user either indicates an area on the map or provides an address and radius; the address would be resolved with the Android geocoder, which I am assuming uses EPSG:4326).
How do I go about doing this?

Related

GPS coordinates analysis library

I am working on a project where I save the latitude and longitude of a vehicle at each interval. I also have a route saved as an array of GPS coordinates. I would like to know if there is some library that helps me determine whether a point is inside the route, along with other basic calculations on the coordinates, such as distance calculations for example.
Any tool in any language helps!
Based on your comment, since you're not building a typical internet map, I might recommend you use a combination of Python and the Shapely library. You can see some nice examples on this post over at GIS.SE.
GIS Analyses: Geometry Types, Buffering, Intersection, etc.
In order to treat several individual Lat/Long positions as a "route", you'll need to format them as points in a LineString geometry type. Also beware: In most GIS software, points are arranged as X,Y. That means you'll be adding your points as Long,Lat. Inverting this is a common mistake that can be frustrating if you're not aware of it.
Next, in order to test whether any given point is within your route, you'll need to Buffer your route (LineString). I would use the accuracy of the GPS unit, + a few extra meters, as my buffering radius. This will give you a proper geometry (Polygon) for a Point-In-Polygon test (i.e. Intersection) that will calculate whether a given point is within the bounds of the route.
The GIS.SE post I linked to provides examples for both buffering and intersection using Python and Shapely.
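For illustration, here is a minimal sketch of that buffer-and-intersect test in Shapely, assuming the coordinates are already in a projected, meter-based system (see the notes below on why that matters); the coordinate values are made up:

    from shapely.geometry import LineString, Point

    # Route as (x, y) points -- i.e. (easting, northing), NOT (lat, lon).
    route = LineString([(500000, 3760000), (500120, 3760080), (500300, 3760110)])

    # Buffer by GPS accuracy (~10 m) plus a few extra meters of slack.
    corridor = route.buffer(15)  # returns a Polygon

    candidate = Point(500115, 3760070)
    print(corridor.contains(candidate))     # point-in-polygon test
    print(route.distance(candidate) <= 15)  # equivalent distance-based test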
Some notes about coordinates: Geodetic vs. Cartesian
I'm not confident that Shapely will perform reliable calculations on geodetic data, which is what we call the familiar coordinates you get from GPS. Before doing operations in Shapely, you may need to translate your long/lat points into projected X/Y coordinates for an appropriate coordinate system, such as UTM. (Hopefully someone will comment on whether this is necessary.)
Assuming this is necessary, you could add the PyProj library to give you a bridge between the GPS coordinates you have and the Cartesian coordinates you need. PyProj is the one-size-fits-all solution to this problem. However, if UTM coordinates will work, you might find the library cited here easier to implement.
If you decide to go with PyProj, it will help to know that your GPS data is described by the EPSG:4326 coordinate system. And if you are comfortable with UTM for your projected coordinates, you'll need to determine an appropriate UTM zone for your area and get its Proj4 coordinate definition from SpatialReference.org.
For example, I live in South Carolina, USA, which is in UTM zone 17 North. So if I go to SpatialReference.org, search for "EPSG UTM zone 17N", select the option which references "WGS 1984" (I happen to know this means units in meters), and click on the Proj4 link, the site provides the coordinate system definition I'm after in Proj4 notation:
+proj=utm +zone=17 +ellps=WGS84 +datum=WGS84 +units=m +no_defs
If you're not comfortable diving into the world of coordinate systems, EPSG codes, Proj4 strings and such, you might want to favor that alternate coordinate translation library I mentioned earlier rather than PyProj. On the other hand, if you would benefit from a more localized coordinate system (most countries have their own localized systems), or if you need to keep your code portable for use in many areas, I'd recommend using PyProj; just make sure to keep your Proj4 definition string in a config file, NOT hard-coded throughout your app!
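To make that bridge concrete, a minimal sketch assuming pyproj 2+ and its Transformer API (EPSG:32617 is the code for WGS 84 / UTM zone 17N, the same system as the Proj4 string above); the GPS fixes are made up:

    from pyproj import Transformer
    from shapely.geometry import LineString

    # GPS fixes as (lat, lon); Shapely and PyProj want (x, y) = (lon, lat).
    gps_fixes = [(34.00, -81.03), (34.01, -81.02), (34.02, -81.01)]

    # always_xy=True fixes axis order to (lon, lat) -> (easting, northing).
    to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32617", always_xy=True)
    route = LineString([to_utm.transform(lon, lat) for lat, lon in gps_fixes])

    # Buffering and intersection now work in meters, as in the sketch above.
    corridor = route.buffer(15)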

Is it possible to recognize all objects in a room with Microsoft Kinect?

I have a project where I have to recognize an entire room so I can calculate the distances between objects (big ones, e.g. bed, table, etc.) and a person in that room. Is something like that possible using Microsoft Kinect?
Thank you!
Kinect provides you with the following:
Depth Stream
Color Stream
Skeleton information
It's up to you how you use this data.
To answer your question: the official Microsoft Kinect SDK doesn't provide shape detection out of the box. But it does provide skeleton data and face tracking, with which you can detect the distance of the user from the Kinect.
Also, by mapping the color stream to the depth stream, you can detect how far a particular pixel is from the Kinect. If the different objects in your implementation have unique characteristics like color, shape, and size, you can probably detect them and also detect their distance.
OpenCV is one of the libraries I use for computer vision, etc.
Again, it's up to you how you use this data.
The Kinect camera provides depth and consequently 3D information (a point cloud) about matte objects in the range 0.5-10 meters. With this information it is possible to segment out the floor of the room (by fitting a plane), and possibly the walls and the ceiling. This step is important since these surfaces often connect separate objects, merging them into one big object.
The remaining parts of the point cloud can be segmented by depth if they don't touch each other physically. Using color, one can separate the objects even further. Note that we implicitly define an object as a dense, color-consistent 3D entity, while other definitions are also possible.
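As an illustrative sketch of the plane-fitting step, here is a plain RANSAC fit in NumPy; this is not Kinect SDK code, the point cloud would come from the depth stream, and the tolerance values are made up:

    import numpy as np

    def fit_floor_plane(points, iters=200, tol=0.02):
        """points: (N, 3) array in meters. Returns a boolean inlier mask."""
        rng = np.random.default_rng(0)
        best_mask = np.zeros(len(points), dtype=bool)
        for _ in range(iters):
            sample = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(normal)
            if norm < 1e-9:
                continue  # degenerate (collinear) sample
            normal /= norm
            dist = np.abs((points - sample[0]) @ normal)
            mask = dist < tol  # points within 2 cm of the candidate plane
            if mask.sum() > best_mask.sum():
                best_mask = mask
        return best_mask

    # floor = cloud[mask]; objects = cloud[~mask], for mask = fit_floor_plane(cloud)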
As soon as you have your objects segmented, you can measure the distances between your segments, analyse their shape, recognize artifacts or humans, etc. To the best of my knowledge, however, the skeleton library can only recognize humans after they have moved for a few seconds. Below is a simple depth map that was broken into a few segments using depth, but not color, information.

Determine if GPS location is within city limits?

I want to be able to determine if a GPS location is in an inhabited or uninhabited zone.
I have tried several reverse geocoding APIs like Nominatim, but failed to get good results. They always return the nearest possible address, even when I selected a location in the middle of a forest.
Is there any way to determine this with reasonable accuracy? Are there any databases or web services for this?
If you have to calculate that yourself, then the interesting things start:
The information about whether or not a region is inhabited is stored in digital maps in the "land use" layer. There are values for forest, water, industry, cemetery, etc.
You would have to import these land-use polygons into a spatial DB (e.g. PostgreSQL with its PostGIS extension).
Such a spatial DB provides fast geospatial indexes for searching only the relevant polygons.
Some countries may also fit in main memory, but then you need some kind of geospatial index, like a quadtree or a k-d tree, to store the polygons.
Once you have imported the polygons, it is a simple "point in polygon" query, or "polygons within radius r". The type of the polygon denotes the land use.
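A hedged sketch of that query from Python: the land_use table, its columns, and the connection string are hypothetical, while ST_Contains, ST_SetSRID, and ST_MakePoint are standard PostGIS functions:

    import psycopg2

    conn = psycopg2.connect("dbname=gis")  # hypothetical connection string
    lon, lat = 13.4050, 52.5200            # example query point (Berlin)
    with conn.cursor() as cur:
        # Which land-use polygon contains this GPS fix?
        cur.execute(
            """
            SELECT landuse_type FROM land_use
            WHERE ST_Contains(geom, ST_SetSRID(ST_MakePoint(%s, %s), 4326))
            """,
            (lon, lat),  # PostGIS points are (x, y) = (lon, lat)
        )
        row = cur.fetchone()
        print(row[0] if row else "no land-use polygon here")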
OpenStreetMap provides these polygons for free.
Otherwise you have to buy them from TomTom or probably NavTeq (Nokia Maps). But this only makes sense for major companies.
Since you're using Nominatim, you're getting the coordinates of the nearest address back in the reply.
Since the distance between two coordinates can be calculated, you can just use that to compute the distance to the closest address found and, from that, figure out whether you're close to a populated area or not.
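For example, a minimal haversine sketch for that distance check (the 2 km threshold is an arbitrary assumption):

    from math import asin, cos, radians, sin, sqrt

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two (lat, lon) points."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius ~6371 km

    # If the nearest address Nominatim returned is more than ~2 km away,
    # treat the query point as uninhabited.
    is_uninhabited = haversine_m(52.52, 13.405, 52.39, 13.07) > 2000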

Is there a common/standard/accepted way to model GPS entities (waypoints, tracks)?

This question somewhat overlaps with geospatial information systems knowledge, but I think it belongs here rather than on GIS.StackExchange.
There are a lot of applications around that deal with GPS data using very similar objects, most of them defined by the GPX standard. These objects would be collections of routes, tracks, waypoints, and so on. Some important programs, like Google Maps, serialize more or less the same entities in KML format. There are a lot of other mapping applications online (ridewithgps, strava, runkeeper, to name a few) which treat this kind of data in a different way, yet allow for more or less equivalent "operations" with the data. Examples of these operations are:
Direct manipulation of tracks/trackpoints with the mouse (including drawing over a map);
Merging and splitting based on time and/or distance;
Replacing GPS-collected elevation with DEM/SRTM elevation;
Calculating properties of part of a track (total ascent, average speed, distance, time elapsed);
There are some small libraries (like GpxPy) that try to model these objects AND THEIR METHODS, in a way that would ideally allow for an encapsulated, possibly language-independent Library/API.
The fact is: this problem has been around long enough to allow a "commonly accepted standard" to emerge, hasn't it? On the other hand, most GIS software is very professionally oriented towards geospatial analyses and topographic and cartographic applications, while the typical trip-logging and trip-planning applications seem to be more consumer/hobbyist oriented, which might explain the quite disparate ways the different projects/apps treat and model the problem.
Thus, considering everything said, the question is: Is there, at present or being planned, a standard way to model canonically, in an object-oriented way, the most used GPS/tracklog entities and their canonical attributes and methods?
There is the GPX schema and it is very close to what I imagine, but it only contains objects and attributes, not methods.
Any information will be very much appreciated, thanks!!
As far as I know, there is no standard library, interface, or even set of established best practices when it comes to storing/manipulating/processing "route" data. We have put a lot of effort into these problems at Ride with GPS and I know the same could be said by the other sites that solve related problems. I wish there was a standard, and would love to work with someone on one.
GPX is OK and appears to be a sort-of standard... at least until you start processing GPX files and discover everyone has simultaneously added their own custom extensions to the format to deal with data like heart rate, cadence, power, etc. Also, there isn't a standard way of associating a route point with a track point. Your "bread crumb trail" of the route is represented as a series of trkpt elements, and course points (e.g. "turn left onto 4th street") are represented in a separate series of rtept elements. Ideally you want to associate a given course point with a specific track point, rather than just giving the course point a latitude and longitude. If your path does several loops over the same streets, it can introduce some ambiguity in where the course points should be attached along the route.
KML and Garmin's TCX format are similar to GPX, with their own pros and cons. In the end these formats really only serve the purpose of transferring the data between programs. They do not address the issue of how to represent the data in your program, or what type of operations can be performed on the data.
We store our track data as an array of objects, with keys corresponding to different attributes such as latitude, longitude, elevation, time from start, distance from start, speed, heart rate, etc. Additionally we store some metadata along the route to specify details about each section. When parsing our array of track points, we use this metadata to split a Route into a series of Segments. Segments can be split, joined, removed, attached, reversed, etc. They also encapsulate the method of trackpoint generation, whether that is by interpolating points along a straight line, or requesting a path representing directions between the endpoints. These methods allow a reasonably straightforward implementation of drag/drop editing and other common manipulations. The Route object can be used to handle operations involving multiple segments. One example is if you have a route composed of segments - some driving directions, straight lines, walking directions, whatever - and want to reverse the route. You can ask each segment to reverse itself, maintaining its settings in the process. At a higher level we use a Map class to wire up the interface, dispatch commands to the Route(s), and keep a series of snapshots or transition functions updated properly for sensible undo/redo support.
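As an illustrative sketch of that decomposition (the class and method names here are mine, not Ride with GPS's actual code):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TrackPoint:
        lat: float
        lon: float
        elevation: float = 0.0
        time_from_start: float = 0.0  # seconds
        dist_from_start: float = 0.0  # meters

    @dataclass
    class Segment:
        points: List[TrackPoint] = field(default_factory=list)

        def reverse(self) -> "Segment":
            return Segment(points=list(reversed(self.points)))

    @dataclass
    class Route:
        segments: List[Segment] = field(default_factory=list)

        def reverse(self) -> "Route":
            # Reverse segment order; each segment also reverses itself,
            # keeping its own settings, as described above.
            return Route(segments=[s.reverse() for s in reversed(self.segments)])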
Route manipulation and generation is one of the goals. The others are aggregating summary statistics and structuring the data for efficient visualization/interaction. These problems have been solved to some degree by any system that will take in data and produce a line graph; not exactly new territory here. One interesting characteristic of route data is that you will often have two variables to choose from for your x-axis: time from start, and distance from start. Both are monotonically increasing, and both offer useful but different interpretations of the data. Looking at a graph of elevation with an x-axis of distance will show a bike ride going up and down a hill as symmetrical. Using an x-axis of time, the uphill portion is considerably wider. This isn't just about visualizing the data on a graph; it also translates to decisions you make when processing the data into summary statistics. Some weighted averages make sense to base on time, some on distance.
The operations you end up wanting are: min; max; weighted average (based on your choice of independent variable); the ability to filter points and perform a filtered min/max/avg (only use points where you were moving, ignore outliers, etc.); different smoothing functions (to aid in calculating total elevation gain, for example); a basic concept of map/reduce functionality (how much time did I spend between 20-30 mph, etc.); and fixed-window moving averages that involve some interpolation. The latter is necessary if you want to identify your fastest 10 minutes, or your 10 minutes of highest average heart rate, etc. Lastly, you're going to want an easy and efficient way to perform whatever calculations you're running on subsets of your trackpoints.
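A hedged sketch of one of those operations, the fixed time window with interpolated edges (assuming NumPy; the arrays and window length are illustrative):

    import numpy as np

    def best_window_speed(t, d, window_s=600.0, step_s=1.0):
        """Best average speed (m/s) over any `window_s`-second window.
        t, d: time-from-start (s) and distance-from-start (m), both
        monotonically increasing; window edges are linearly interpolated."""
        t, d = np.asarray(t, float), np.asarray(d, float)
        if t[-1] - t[0] < window_s:
            raise ValueError("track shorter than the window")
        last_start = t[-1] - window_s
        starts = np.append(np.arange(t[0], last_start, step_s), last_start)
        d0 = np.interp(starts, t, d)             # distance at window starts
        d1 = np.interp(starts + window_s, t, d)  # distance at window ends
        return float(((d1 - d0) / window_s).max())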
You can see an example of all of this in action here if you're interested: http://ridewithgps.com/trips/964148
The graph at the bottom can be moused over, drag-select to zoom in. The x-axis has a link to switch between distance/time. On the left sidebar at the bottom you'll see best 30 and 60 second efforts - those are done with fixed window moving averages with interpolation. On the right sidebar, click the "Metrics" tab. Drag-select to zoom in on a section on the graph, and you will see all of the metrics update to reflect your selection.
Happy to answer any questions, or work with anyone on some sort of standard or open implementation of some of these ideas.
This probably isn't quite the answer you were looking for, but I figured I would offer up some details about how we do things at Ride with GPS, since we are not aware of any real standards of the kind you seem to be looking for.
Thanks!
After some deeper research, I feel obligated, for the record and for the help of future people looking for this, to mention the pretty much exhaustive work on the subject done by two entities, sometimes working in conjunction: ISO and OGC.
From ISO (the International Organization for Standardization), the work of technical committee "TC 211 - Geographic information/Geomatics" pretty much contains it all.
From the OGC (Open Geospatial Consortium), their Abstract Specifications are very extensive, being at the same time redundant with and complementary to ISO's.
I'm not sure they contain object methods related to the proposed application (GPS track and waypoint analysis and manipulation), but the core concepts contained in these documents are certainly rather solid. UML is their schema representation of choice.
ISO 6709 "[...] specifies the representation of coordinates, including latitude and longitude, to be used in data interchange. It additionally specifies representation of horizontal point location using coordinate types other than latitude and longitude. It also specifies the representation of height and depth that can be associated with horizontal coordinates. Representation includes units of measure and coordinate order."
ISO 19107 "specifies conceptual schemas for describing the spatial characteristics of geographic features, and a set of spatial operations consistent with these schemas. It treats vector geometry and topology up to three dimensions. It defines standard spatial operations for use in access, query, management, processing, and data exchange of geographic information for spatial (geometric and topological) objects of up to three topological dimensions embedded in coordinate spaces of up to three axes."
If I find something new, I'll come back to edit this, including links when available.

Algorithm for reducing GPS track data to discard redundant data?

We're building a GIS interface to display GPS track data, e.g. imagine the raw data set from a guy wandering around a neighborhood on a bike for an hour. A set of data like this, with perhaps a new point recorded every 5 seconds, will be large, and displaying it in a browser or on a handheld device will be challenging. Also, displaying every single point is usually not necessary, since a user can't visually resolve that much data anyway.
So for performance reasons we are looking for algorithms that are good at "reducing" data like this, so that the number of points being displayed is reduced significantly but in such a way that it doesn't risk misinterpretation of the data. For example, if our fictional bike rider stops for a drink, we certainly don't want to draw 100 lat/lon points in a cluster around the 7-Eleven.
We are aware of clustering, which is good when looking at a bunch of disconnected points; however, what we need is something that applies to tracks as described above. Thanks.
A more scientific, and perhaps more math-heavy, solution is to use the Ramer-Douglas-Peucker algorithm to generalize your path. I used it when I studied for my Master of Surveying, so it's a proven thing. :-)
Given your path and the maximum deviation from it you can tolerate, the algorithm simplifies the path by reducing the number of points.
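A compact sketch of the algorithm, assuming planar (x, y) points and a distance tolerance epsilon in the same units as the coordinates:

    import math

    def rdp(points, epsilon):
        """Simplify a polyline, keeping points deviating more than epsilon."""
        if len(points) < 3:
            return list(points)
        (x1, y1), (x2, y2) = points[0], points[-1]
        seg_len = math.hypot(x2 - x1, y2 - y1) or 1e-12
        # Find the interior point farthest (perpendicular) from the chord.
        best_i, best_d = 0, 0.0
        for i in range(1, len(points) - 1):
            x0, y0 = points[i]
            d = abs((x2 - x1) * (y1 - y0) - (x1 - x0) * (y2 - y1)) / seg_len
            if d > best_d:
                best_i, best_d = i, d
        if best_d <= epsilon:
            return [points[0], points[-1]]  # whole run is close enough
        # Recurse on both halves around the farthest point.
        left = rdp(points[:best_i + 1], epsilon)
        right = rdp(points[best_i:], epsilon)
        return left[:-1] + right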
Typically the best way of doing that is:
Determine the minimum number of screen pixels you want between GPS points displayed.
Determine the distance represented by each pixel in the current zoom level.
Multiply answer 1 by answer 2 to get the minimum distance between coordinates you want to display.
Starting from the first coordinate in the journey path, read each subsequent coordinate until you've reached the required minimum distance from the current point; keep that point and repeat.
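A minimal sketch of those steps, assuming projected (x, y) coordinates in meters and a made-up meters-per-pixel scale for the current zoom level:

    import math

    def decimate(points, min_px=5, meters_per_px=10.0):
        """Keep only points at least min_px * meters_per_px apart (steps 1-3)."""
        min_dist = min_px * meters_per_px
        kept = [points[0]]
        for p in points[1:]:  # step 4: walk forward, skipping close points
            if math.hypot(p[0] - kept[-1][0], p[1] - kept[-1][1]) >= min_dist:
                kept.append(p)
        if kept[-1] != points[-1]:
            kept.append(points[-1])  # always keep the final point
        return kept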