Query Redis geospatial data with a bounding box? - redis

I am trying to find a way to query geospatial data in Redis with a Bounding Box.
After going through all the documentation on Redis's site, the only reference to a bounding box I can find is in the GEORADIUS command's documentation:
Time complexity: O(N+log(M)) where N is the number of elements inside the bounding box of the circular area delimited by center and radius and M is the number of items inside the index.
In other words, it seems like there is already a fundamental bounding box inside Redis, but for some reason it does not appear to be readily available to the user. I find this strange.
I am aware that there are some libraries available, such as geo.lua, which extend the capabilities of Redis. I would like to avoid extensions to my Redis database if possible, especially since the concept of a bounding box already appears to be part of Redis's geospatial system.
I keep seeing references to a bounding box everywhere (e.g. the Redis release notes). Am I missing something obvious?

Regrettably, your research is correct: a bounding box query on a geoset is presently not available in core Redis.
That being the case, your alternative is to perform a radius search that covers the box, and then filter out the results that fall outside the box. Filtering can be done:
in the client: least efficient but probably the easiest (see the sketch after this list)
in server-side Lua: similar to the approach geo.lua takes
in a server-side module: probably the most performant (and possibly of use to the community), but also less trivial to implement
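A minimal sketch of the client-side option, assuming Python with redis-py; the key name "points" and the box bounds are made up for illustration. Run GEORADIUS from the centre of the box with a radius that reaches its corners, then discard anything outside the box:

import math
import redis

r = redis.Redis()

KEY = "points"  # hypothetical geoset key
min_lon, min_lat, max_lon, max_lat = -122.5, 37.7, -122.3, 37.8

def haversine_km(lon1, lat1, lon2, lat2):
    # Great-circle distance in kilometres.
    rlon1, rlat1, rlon2, rlat2 = map(math.radians, (lon1, lat1, lon2, lat2))
    a = (math.sin((rlat2 - rlat1) / 2) ** 2
         + math.cos(rlat1) * math.cos(rlat2) * math.sin((rlon2 - rlon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

# Centre of the box, and a radius long enough to reach the box's corners.
center_lon = (min_lon + max_lon) / 2
center_lat = (min_lat + max_lat) / 2
radius_km = haversine_km(center_lon, center_lat, max_lon, max_lat)

# Radius search that covers the box, coordinates included...
hits = r.georadius(KEY, center_lon, center_lat, radius_km,
                   unit="km", withcoord=True)

# ...then keep only the members that actually fall inside the box.
in_box = [(member, (lon, lat))
          for member, (lon, lat) in hits
          if min_lon <= lon <= max_lon and min_lat <= lat <= max_lat]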
You may also submit a feature request to the Redis repository asking for this functionality to be part of the core and describing the use case - IMO it would be a nice addition.

Related

Does Redis geo have the capability for storing polygon?

I followed several examples about geospatial support in Redis. I added POINT features to my Redis dataset without any problem, and I can subsequently query POINT names within a certain radius (in meters, km, or miles) of a given coordinate (or of a given POINT member).
The next feature I need to try is a point-in-polygon query. Now I am curious:
Does Redis geo have the capability for storing polygon?
If yes, does this polygon capability come as native or need another stack of software/extension?
I think it is still not possible to store polygons with Redis geospatial.
But I found an article explaining how to load a custom Lua script to query points (stored in Redis) that fall within a polygon. If a bounding box is enough for you, the GEOSEARCH command supports that out of the box.
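To illustrate combining the two, here is a rough sketch (Python with redis-py; the key name and polygon are made up): use GEOSEARCH's BYBOX mode (Redis >= 6.2) over the polygon's bounding box as a server-side prefilter, then apply a standard ray-casting point-in-polygon test in the client.

import math
import redis

r = redis.Redis()

KEY = "points"  # hypothetical geoset key
polygon = [(-122.5, 37.70), (-122.5, 37.80),  # (lon, lat) vertices,
           (-122.3, 37.80), (-122.3, 37.70)]  # made up for illustration

def point_in_polygon(lon, lat, poly):
    # Standard ray casting: count how many edges a horizontal ray crosses.
    inside = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > lat) != (y2 > lat):
            if lon < x1 + (lat - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

# Server-side prefilter: GEOSEARCH ... BYBOX over the polygon's bounds.
lons, lats = [p[0] for p in polygon], [p[1] for p in polygon]
clon, clat = (min(lons) + max(lons)) / 2, (min(lats) + max(lats)) / 2
height_km = (max(lats) - min(lats)) * 111.32  # rough degrees-to-km
width_km = (max(lons) - min(lons)) * 111.32 * math.cos(math.radians(clat))

candidates = r.geosearch(KEY, longitude=clon, latitude=clat,
                         width=width_km * 1.01, height=height_km * 1.01,
                         unit="km", withcoord=True)

# Exact client-side filter on the prefiltered candidates.
inside = [m for m, (lon, lat) in candidates
          if point_in_polygon(lon, lat, polygon)]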

D3D12 Use backbuffer surface as unordered access view (UAV)

I'm making a simple raytracer for a school project where a compute shader is supposed to be used to shade a triangle or some other primitive.
For this I'd like to write to a backbuffer surface directly in the compute shader, and then present the results immediately. I know for certain that this is possible in DX11, though I can't seem to get it to work in DX12.
I couldn't gather much information about this, but I found this gamedev thread discussing the exact same problem, and they seem to come to the conclusion that was my go-to workaround: writing to an intermediate texture and then sampling it in a pipeline.
I can't fully accept that this would be impossible to achieve in DX12. Why would that feature be removed? Could it be that the queuing system removes some overhead that makes this feature unnecessary?
Is there any way to implement a raytracer without writing to a separate texture and then sampling it in a pipeline or copying it onto the back buffer? What are my best alternatives for achieving performance?
You will have to accept that conclusion: they removed the capability to create a UAV over the swapchain surface, the same way they removed the capability to use a multisample surface in the swapchain.
The problem with authorizing UAVs on the swapchain surface is that the runtime would have to forfeit tracking of what is happening to it. For UAVs, DX12 relies on descriptor heaps that are 100% volatile at runtime (render target descriptors are CPU-side only and can be tracked).
Microsoft needs to track the swapchain surface status strictly in order to guarantee correct behavior with desktop presentation, and for that reason they chose to deny the UAV binding.

Is there a common/standard/accepted way to model GPS entities (waypoints, tracks)?

This question somewhat overlaps with geographic information systems, but I think it belongs here rather than on GIS.StackExchange.
There are a lot of applications around that deal with GPS data through very similar objects, most of them defined by the GPX standard. These objects would be collections of routes, tracks, waypoints, and so on. Some important programs, like Google Maps, serialize more or less the same entities in KML format. There are a lot of other mapping applications online (ridewithgps, strava, runkeeper, to name a few) which treat this kind of data in a different way, yet allow for more or less equivalent "operations" with the data. Examples of these operations are:
Direct manipulation of tracks/trackpoints with the mouse (including drawing over a map);
Merging and splitting based on time and/or distance;
Replacing GPS-collected elevation with DEM/SRTM elevation;
Calculating properties of part of a track (total ascent, average speed, distance, time elapsed);
There are some small libraries (like GpxPy) that try to model these objects AND THEIR METHODS, in a way that would ideally allow for an encapsulated, possibly language-independent Library/API.
The fact is: this problem has been around long enough for a "commonly accepted standard" to emerge, hasn't it? On the other hand, most GIS software is very professionally oriented towards geospatial analysis and topographic and cartographic applications, while the typical trip-logging and trip-planning applications seem to be more consumer/hobbyist oriented, which might explain the quite disparate ways the different projects/apps treat and model the problem.
So, considering everything said, the question is: is there, at present or being planned, a standard way to model, in an object-oriented way, the most commonly used GPS/tracklog entities and their canonical attributes and methods?
There is the GPX schema and it is very close to what I imagine, but it only contains objects and attributes, not methods.
Any information will be very much appreciated, thanks!!
As far as I know, there is no standard library, interface, or even set of established best practices when it comes to storing/manipulating/processing "route" data. We have put a lot of effort into these problems at Ride with GPS and I know the same could be said by the other sites that solve related problems. I wish there was a standard, and would love to work with someone on one.
GPX is OK and appears to be a sort-of standard... at least until you start processing GPX files and discover everyone has simultaneously added their own custom extensions to the format to deal with data like heart rate, cadence, power, etc. Also, there isn't a standard way of associating a route point with a track point. Your "bread crumb trail" of the route is represented as a series of trkpt elements, and course points (e.g. "turn left onto 4th street") are represented in a separate series of rtept elements. Ideally you want to associate a given course point with a specific track point, rather than just giving the course point a latitude and longitude. If your path does several loops over the same streets, it can introduce some ambiguity in where the course points should be attached along the route.
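To make the trkpt/rtept split concrete, here is a quick parsing sketch using only the Python standard library (the file name is hypothetical, and real-world files typically add vendor extension namespaces on top of this):

import xml.etree.ElementTree as ET

NS = {"gpx": "http://www.topografix.com/GPX/1/1"}
tree = ET.parse("ride.gpx")  # hypothetical file name

# The "bread crumb trail": a flat series of track points.
track_points = [(float(p.get("lat")), float(p.get("lon")))
                for p in tree.findall(".//gpx:trkpt", NS)]

# Course points ("turn left onto 4th street") live in a separate series
# of route points, with no standard link back to a specific track point.
course_points = [(float(p.get("lat")), float(p.get("lon")),
                  p.findtext("gpx:name", "", NS))
                 for p in tree.findall(".//gpx:rtept", NS)]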
KML and Garmin's TCX format are similar to GPX, with their own pros and cons. In the end these formats really only serve the purpose of transferring the data between programs. They do not address the issue of how to represent the data in your program, or what type of operations can be performed on the data.
We store our track data as an array of objects, with keys corresponding to different attributes such as latitude, longitude, elevation, time from start, distance from start, speed, heart rate, etc. Additionally we store some metadata along the route to specify details about each section. When parsing our array of track points, we use this metadata to split a Route into a series of Segments. Segments can be split, joined, removed, attached, reversed, etc. They also encapsulate the method of trackpoint generation, whether that is by interpolating points along a straight line, or requesting a path representing directions between the endpoints. These methods allow a reasonably straightforward implementation of drag/drop editing and other common manipulations.
The Route object can be used to handle operations involving multiple segments. One example is if you have a route composed of segments - some driving directions, straight lines, walking directions, whatever - and want to reverse the route. You can ask each segment to reverse itself, maintaining its settings in the process. At a higher level we use a Map class to wire up the interface, dispatch commands to the Route(s), and keep a series of snapshots or transition functions updated properly for sensible undo/redo support.
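As a rough illustration of that structure (a minimal Python sketch, not our actual code), the reverse example might look like this:

class Segment:
    """A run of track points plus the settings used to generate them."""
    def __init__(self, points, routing="straight_line"):
        self.points = points    # list of trackpoint dicts
        self.routing = routing  # how points are (re)generated

    def reverse(self):
        # The segment keeps its own settings while reversing.
        self.points.reverse()

class Route:
    """Operations spanning multiple segments live at this level."""
    def __init__(self, segments):
        self.segments = segments

    def reverse(self):
        self.segments.reverse()  # reverse the segment order...
        for seg in self.segments:
            seg.reverse()        # ...then ask each segment to reverse itself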
Route manipulation and generation is one of the goals. The others are aggregating summary statistics and structuring the data for efficient visualization/interaction. These problems have been solved to some degree by any system that will take in data and produce a line graph - not exactly new territory here. One interesting characteristic of route data is that you will often have two variables to choose from for your x-axis: time from start and distance from start. Both are monotonically increasing, and both offer useful but different interpretations of the data. Looking at a graph of elevation with an x-axis of distance will show a bike ride going up and down a hill as symmetrical; using an x-axis of time, the uphill portion is considerably wider.
This isn't just about visualizing the data on a graph; it also translates to decisions you make when processing the data into summary statistics. Some weighted averages make sense to base off of time, some off of distance. The operations you end up wanting are: min, max, weighted (based on your choice of independent variable) average, the ability to filter points and perform a filtered min/max/avg (only use points where you were moving, ignore outliers, etc.), different smoothing functions (to aid in calculating total elevation gain, for example), a basic concept of map/reduce functionality (how much time did I spend between 20-30 mph, etc.), and fixed-window moving averages that involve some interpolation. The latter are necessary if you want to identify your fastest 10 minutes, or your 10 minutes of highest average heart rate, etc. Lastly, you're going to want an easy and efficient way to perform whatever calculations you're running on subsets of your trackpoints.
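For example, the fixed-window moving average with interpolation - the "fastest 10 minutes" case - could be sketched like this (a minimal sketch, not our production code; it assumes trackpoints carry cumulative time in seconds and distance in meters):

def best_effort(points, window_s):
    """Highest average speed (m/s) over any span of window_s seconds.

    points: list of (time_from_start_s, distance_from_start_m) tuples,
    sorted by time. The trailing edge of the window is interpolated so
    efforts are not snapped to recorded point boundaries.
    """
    best = 0.0
    j = 0
    for t_start, d_start in points:
        t_end = t_start + window_s
        # Advance j to the first point at or beyond the window's end;
        # j only ever moves forward, so the whole scan is linear.
        while j < len(points) and points[j][0] < t_end:
            j += 1
        if j == len(points):
            break  # window extends past the end of the ride
        # Interpolate distance at exactly t_end between points j-1 and j.
        t0, d0 = points[j - 1]
        t1, d1 = points[j]
        frac = (t_end - t0) / (t1 - t0) if t1 > t0 else 0.0
        d_end = d0 + frac * (d1 - d0)
        best = max(best, (d_end - d_start) / window_s)
    return best

# e.g. fastest 10 minutes of a ride: best_effort(trackpoints, 600)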
You can see an example of all of this in action here if you're interested: http://ridewithgps.com/trips/964148
The graph at the bottom can be moused over, drag-select to zoom in. The x-axis has a link to switch between distance/time. On the left sidebar at the bottom you'll see best 30 and 60 second efforts - those are done with fixed window moving averages with interpolation. On the right sidebar, click the "Metrics" tab. Drag-select to zoom in on a section on the graph, and you will see all of the metrics update to reflect your selection.
Happy to answer any questions, or work with anyone on some sort of standard or open implementation of some of these ideas.
This probably isn't quite the answer you were looking for, but I figured I would offer up some details about how we do things at Ride with GPS, since we are not aware of any real standards of the kind you seem to be looking for.
Thanks!
After some deeper research, I feel obligated, for the record and for the help of future people looking for this, to mention the pretty much exhaustive work on the subject done by two entities, sometimes working in conjunction: ISO and OGC.
From ISO (the International Organization for Standardization), the "TC 211 - Geographic information/Geomatics" section pretty much contains it all.
From OGC (the Open Geospatial Consortium), their Abstract Specifications are very extensive, being at the same time redundant and complementary to ISO's.
I'm not sure they contain object methods related to the proposed application (GPS track and waypoint analysis and manipulation), but for sure the core concepts contained in these documents are rather solid. UML is their schema representation of choice.
ISO 6709 "[...] specifies the representation of coordinates, including latitude and longitude, to be used in data interchange. It additionally specifies representation of horizontal point location using coordinate types other than latitude and longitude. It also specifies the representation of height and depth that can be associated with horizontal coordinates. Representation includes units of measure and coordinate order."
ISO 19107 "specifies conceptual schemas for describing the spatial characteristics of geographic features, and a set of spatial operations consistent with these schemas. It treats vector geometry and topology up to three dimensions. It defines standard spatial operations for use in access, query, management, processing, and data exchange of geographic information for spatial (geometric and topological) objects of up to three topological dimensions embedded in coordinate spaces of up to three axes."
If I find something new, I'll come back to edit this, including links when available.

Location data storage (points, grouping by distance etc) - Best Practices and Recommended Solutions

I have come across a problem that I've never solved before, but I find it frequently implemented in various apps, so I would like to ask if there is a common way to solve it. I have a set of analytics data items, each representing some logging action (e.g. info, warn, etc.). Each of these items has a location and a type (i.e. an action). There can be millions of these items per area (depending on the area size or map zoom).
I am looking for the best way to store this set of data in my database. I am very comfortable with SQL Server but don't mind what DB I have to use as long as it can handle the scalability requirements. If Amazon WS offers such a product or some other cloud solution, then even better, because that's how we are planning to host this app. Google Maps will be used to visualize the data.
Some requirements:
Be able to plot all data for a given map rectangle (a common Google Maps interface with markers representing the logging actions)
Be able to zoom in/zoom out and get relevant data for the new map rectangle
Be able to "group" markers in one bigger marker if data are very close. For instance, if point A is 1 km away from point B and I am seeing a map of 10 km radius then I should see two independent points, A and B. But if I zoom out to 500 km radius then point A and B are too close to each other so I would like to group them in one marker. Hopefully that's possible.
If SQL Server is not a good solution, then a free, very cheap, or cloud-based storage solution would be welcome (no, I can't afford Oracle).
All the queries above should return within milliseconds or somehow be cached. Queries will be of the kind: get all analytics data for the given map window, i.e. the rectangle defined by the given latitude/longitude bounds and zoom level.
Thanks,
Yannis
If I understood correctly, there are two sides to your question:
1- A database system that supports storage and lookup of spatial data. Many of the free/open-source RDBMSs have spatial extensions: MySQL and Postgres (PostGIS) in particular. Spatial data is stored like any other data, with the addition of a spatial geometry attribute, which describes the shape of your data instance (point, rectangle, polygon, ellipse, ...). You can query spatial data entities with spatial filters, and of course spatial queries support joins, unions, and almost all kinds of SQL constructs (see the sketch after point 2).
2- A client/server API that supports rendering of spatial data (with the usual functions such as zoom in, zoom out, pan, etc.), caching, and drill-down. As far as I know, there isn't one API that supports all these features together out of the box, but there are some interesting APIs that you might want to investigate.
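A sketch of both halves under stated assumptions, using PostGIS through psycopg2: the table events, its geometry(Point, 4326) column geom (assumed to have a GiST index), and the database name are all made up for illustration. It fetches the points overlapping the viewport with an index-backed bounding-box filter, then groups them into one marker per tile-sized grid cell:

from collections import defaultdict
import psycopg2

conn = psycopg2.connect("dbname=analytics")  # hypothetical database

def points_in_viewport(min_lon, min_lat, max_lon, max_lat):
    # The && operator is an index-backed bounding-box overlap test.
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT ST_X(geom), ST_Y(geom)
            FROM events
            WHERE geom && ST_MakeEnvelope(%s, %s, %s, %s, 4326)
            """,
            (min_lon, min_lat, max_lon, max_lat))
        return cur.fetchall()

def cluster(points, zoom):
    # Snap each point to a grid cell sized like one map tile at this zoom
    # level, then emit one marker (centroid, count) per non-empty cell.
    cell_deg = 360.0 / (2 ** zoom)
    cells = defaultdict(list)
    for lon, lat in points:
        cells[(int(lon // cell_deg), int(lat // cell_deg))].append((lon, lat))
    return [(sum(p[0] for p in pts) / len(pts),   # centroid longitude
             sum(p[1] for p in pts) / len(pts),   # centroid latitude
             len(pts))
            for pts in cells.values()]

# e.g. markers for a San Francisco viewport at zoom level 12:
# cluster(points_in_viewport(-122.5, 37.7, -122.3, 37.8), 12)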
Hope this helps.

What's the fastest force-directed network graph engine for large data sets? [duplicate]

We currently have a dynamically updated network graph with around 1,500 nodes and 2,000 edges, and it's ever-growing. Our current layout engine uses Prefuse - the force-directed layout in particular - and it takes about 10 minutes on a hefty server to get a nice, stable layout.
I've looked a little at Graphviz's sfdp algorithm, but haven't tested it yet...
Are there faster alternatives I should look at?
I don't care about the visual appearance of the nodes and edges - we process that separately - just putting x, y on the nodes.
We do need to be able to tinker with the layout properties for specific parts of the graph, for instance, applying special tighter or looser springs for certain nodes.
Thanks in advance, and please comment if you need more specific information to answer!
EDIT: I'm particularly looking for speed comparisons between the layout engine options. Benchmarks, specific examples, or just personal experience would suffice!
I wrote a JavaScript-based graph drawing library, VivaGraph.js.
It calculates the layout and renders a graph with 2K+ vertices and 8.5K edges in ~10-15 seconds. If you don't need the rendering part, it should be even faster.
Here is a video demonstrating it in action: WebGL Graph Rendering With VivaGraphJS.
Online demo is available here. WebGL is required to view the demo but is not needed to calculate graph layouts. The library also works under node.js, and thus could be used as a service.
Example of API usage (layout only):
var graph = Viva.Graph.graph(),
    layout = Viva.Graph.Layout.forceDirected(graph);

graph.addLink(1, 2); // nodes 1 and 2 are created implicitly
layout.run(50);      // runs 50 iterations of graph layout

// print results:
graph.forEachNode(function(node) { console.log(node.position); });
Hope this helps :)
I would have a look at OGDF, specifically http://www.ogdf.net/doku.php/tech:howto:frcl
I have not used OGDF, but I do know that the Fast Multipole Multilevel Method (FMMM) is a well-performing algorithm, and when you're dealing with the kinds of runtimes involved in force-directed layout at the node counts you want, that matters a lot.
Why, among other reasons, that algorithm is awesome: the fast multipole method. The fast multipole method approximates the n-body force calculation, cutting the O(n²) cost of evaluating all pairwise repulsions down to roughly O(n) with only a small, controlled loss of accuracy. Ideally, you'd have code from something like this: http://mgarland.org/files/papers/layoutgpu.pdf but I can't find it anywhere; maybe a CUDA solution isn't up your alley anyway.
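To make concrete what such methods accelerate, here is a naive O(n²) force-directed step in Python (a toy sketch, nothing to do with OGDF's actual implementation). FMMM-style methods replace the pairwise repulsion loop with a fast approximation:

import math

def layout_step(pos, edges, k=1.0, step=0.01):
    """pos: {node: [x, y]}, edges: [(a, b)]. Mutates pos in place."""
    force = {v: [0.0, 0.0] for v in pos}
    nodes = list(pos)
    # Repulsion: every pair of nodes pushes apart (the O(n^2) part).
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            dx = pos[a][0] - pos[b][0]
            dy = pos[a][1] - pos[b][1]
            d2 = dx * dx + dy * dy or 1e-9
            f = k * k / d2  # magnitude k^2/d along the unit vector
            force[a][0] += f * dx; force[a][1] += f * dy
            force[b][0] -= f * dx; force[b][1] -= f * dy
    # Attraction: edges act as linear springs.
    for a, b in edges:
        dx = pos[b][0] - pos[a][0]
        dy = pos[b][1] - pos[a][1]
        force[a][0] += dx / k; force[a][1] += dy / k
        force[b][0] -= dx / k; force[b][1] -= dy / k
    # Move every node a small step along its net force.
    for v in pos:
        pos[v][0] += step * force[v][0]
        pos[v][1] += step * force[v][1]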
Good luck.
The Gephi Toolkit might be what you need: some layouts are very fast yet of good quality: http://gephi.org/toolkit/
30 seconds to 2 minutes is enough to lay out such a graph, depending on your machine.
You can use the ForceAtlas layout or the Yifan Hu Multilevel layout.
For very large graphs (50K+ nodes and 500K+ links), the OpenOrd layout will scale better.
In a commercial scenario, you might also want to look at the family of yFiles graph layout and visualization libraries.
Even the JavaScript version of it can perform layouts for thousands of nodes and edges using different arrangement styles. The "organic" layout style is an implementation of a force-directed layout algorithm similar in nature to the one used in Neo4j's browser application. But there are a lot more layout algorithms available that can give better visualizations for certain types of graph structures and diagrams. Depending on the settings and the structure of the problem, some of the algorithms take only seconds, while more complex implementations can also bring your JavaScript engine to its knees. The Java- and .NET-based variants still perform quite a bit better as of today, but the JavaScript engines are catching up.
You can play with these algorithms and settings in this online demo.
Disclaimer: I work for yWorks, which is the maker of these libraries, but I do not represent my employer on SO.
I would take a look at http://neo4j.org/ - it's open source, which is beneficial in your case, so you can customize it to your needs. The GitHub account can be found here.