spatial data distance search - optimization options - sql

Our business user loves our searches to be done by distance; the problem is that we have over 1 million records with a lat/long location. We are using SQL Server 2008, but whenever we order or restrict our searches by distance, the queries take far too long (30 seconds plus). This is unacceptable; there has got to be a better way to do this. We have done everything we can with SQL Server 2008 and want to upgrade to 2012 at some point if we can.
I ask, though, whether there is another technology or optimization that we could apply. Could we switch to a different DB for faster performance, or apply a different search algorithm, an estimation algorithm, a tree, grids, pre-computation, etc.?

A solution that might be useful here would be to break your search into two parts:
1) Run a query where you find all records that are within a certain value plus or minus of the current lat/lng of your location. The where clause might look like:
where (#latitude > (lat - .001) and #latitude < (lat + .001)) and (#longitude > (lng - .001) and #longitude < (lng + .001))
Using this approach, and especially with an index on both the latitude and longitude columns, you can very quickly define a working set of locations within a specified distance.
2) With the rough results from step 1, use the great-circle/haversine method to determine the actual distance between the source location and each point.
Where this approach falls over is when there is no limit on the radius you are searching, but it works great if you are, for instance, looking for all locations within a specific distance of a given point.
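To make the two steps concrete, here is a minimal T-SQL sketch against a hypothetical locations(id, latitude, longitude) table; the 10-mile radius, the 3959-mile earth radius, and the 69-miles-per-degree-of-latitude figure are all assumptions to adjust for your data:

-- Step 1: indexable bounding box; step 2: exact great-circle distance.
DECLARE @lat float = 41.92, @lng float = -87.65;   -- search origin (sample values)
DECLARE @radius float = 10;                        -- radius in miles
DECLARE @latDelta float = @radius / 69.0;          -- ~69 miles per degree of latitude
DECLARE @lngDelta float = @radius / (69.0 * COS(RADIANS(@lat)));  -- degrees shrink with latitude

SELECT *
FROM (
    SELECT id, latitude, longitude,
           -- clamp the cosine to [-1, 1] so rounding noise never takes ACOS out of its domain
           3959 * ACOS(CASE WHEN x.c > 1 THEN 1 WHEN x.c < -1 THEN -1 ELSE x.c END) AS distance
    FROM locations
    CROSS APPLY (SELECT SIN(RADIANS(@lat)) * SIN(RADIANS(latitude))
                      + COS(RADIANS(@lat)) * COS(RADIANS(latitude))
                        * COS(RADIANS(longitude) - RADIANS(@lng)) AS c) AS x
    WHERE latitude  BETWEEN @lat - @latDelta AND @lat + @latDelta   -- step 1: rough cut via index
      AND longitude BETWEEN @lng - @lngDelta AND @lng + @lngDelta
) AS candidates
WHERE distance <= @radius                                           -- step 2: exact filter
ORDER BY distance;

The BETWEEN predicates in the inner query are what let the indexes on latitude and longitude do their job; the exact great-circle filter then only touches the small candidate set.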

Related

Index by geolocation in database

I'm trying to find a way to let my database support fast location-based searches (for example, all items that lie within a certain distance of some geopoint (LAT, LON)). I guess the brute-force solution, which calculates the distance between every point in the database and the query point, probably won't work for a large dataset, so some kind of indexing should be necessary. I'm not sure if there are any existing standard ways to do this (well, I know they are out there, but Google failed me), but here is a method (or more like a hack?) that I think might work:
Calculate a value from (LAT, LON) and store it in an indexed column. For example, something like floor(LAT / 10) * 10 * 100 + floor(LON / 10) * 10. Each time a query arrives, we first calculate this value for the query point and find all the corresponding rows, and then calculate the Euclidean distances between those points and the query point.
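A hedged sketch of this grid idea in generic SQL (table and column names are hypothetical, and it uses one-degree cells rather than the ten-degree cells above). The step the sketch above leaves out is that a radius search must also probe the neighboring cells, since the query circle can cross a cell border:

-- Exact ALTER/CREATE INDEX syntax varies slightly by engine.
ALTER TABLE points ADD cell_id int;
UPDATE points SET cell_id = FLOOR(lat) * 360 + FLOOR(lon);
CREATE INDEX ix_points_cell ON points (cell_id);

-- Probe the query cell and its 8 neighbors. Longitude neighbors differ by 1,
-- latitude neighbors by 360, so the 9 cells form three contiguous ranges.
-- (:qlat, :qlon is the query point; date-line wrap-around is not handled.)
SELECT *
FROM points
WHERE cell_id BETWEEN (FLOOR(:qlat) - 1) * 360 + FLOOR(:qlon) - 1
                  AND (FLOOR(:qlat) - 1) * 360 + FLOOR(:qlon) + 1
   OR cell_id BETWEEN  FLOOR(:qlat)      * 360 + FLOOR(:qlon) - 1
                  AND  FLOOR(:qlat)      * 360 + FLOOR(:qlon) + 1
   OR cell_id BETWEEN (FLOOR(:qlat) + 1) * 360 + FLOOR(:qlon) - 1
                  AND (FLOOR(:qlat) + 1) * 360 + FLOOR(:qlon) + 1;
-- Then compute the exact distance to each candidate and filter.

This works as long as the search radius never exceeds one cell's width; larger radii need more rings of neighboring cells (or a coarser grid).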

Find locations in radius of path between 2 other locations

I have a large database of locations, all with lat/long. I use GeoKit and Rails to trivially get locations within a set radius of any other location. All that works great.
My goal is to specify two locations (say, A and B) and find all other locations within a radius of X mi of the path between A and B.
What are some ways you could approach this issue? I thought about performing my nearby search at some regular interval along the path, but then I am concerned about performance, and about the flip side: missing locations near the path just because I chose a poor interval.
Thoughts?
Just calculate at the end points and at intervals of one radius length along the path, as shown in the diagram. Maybe this will solve your problem.
I didn't find an elegant answer here, but sadiqxs was on the right track I suppose. GeoKit doesn't support non-rectangular searches, so I ended up with a solution similar to sadiqxs's but optimized to apply a maximum number of searches based on distance.
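For what it's worth, here is a hedged T-SQL sketch of sadiqxs's interval idea (the question's setup is Rails/GeoKit, so this only shows the shape of the query; the table and coordinates are hypothetical):

-- Sample the A->B segment at regular steps and union the per-sample radius
-- searches. Interpolation is done in plain lat/lng space, which is fine for
-- short paths away from the poles. Hypothetical table: locations(id, latitude, longitude).
DECLARE @aLat float = 40.0, @aLng float = -75.0;   -- endpoint A (assumed)
DECLARE @bLat float = 41.0, @bLng float = -74.0;   -- endpoint B (assumed)
DECLARE @radius float = 5;                         -- corridor radius in miles
DECLARE @steps int = 20;                           -- samples along the path

WITH samples AS (
    SELECT 0 AS n, @aLat AS lat, @aLng AS lng
    UNION ALL
    SELECT n + 1,
           @aLat + (@bLat - @aLat) * (n + 1) / @steps,
           @aLng + (@bLng - @aLng) * (n + 1) / @steps
    FROM samples
    WHERE n < @steps
)
SELECT DISTINCT l.id
FROM samples AS s
CROSS JOIN locations AS l
CROSS APPLY (SELECT SIN(RADIANS(s.lat)) * SIN(RADIANS(l.latitude))
                  + COS(RADIANS(s.lat)) * COS(RADIANS(l.latitude))
                    * COS(RADIANS(l.longitude) - RADIANS(s.lng)) AS c) AS x
WHERE 3959 * ACOS(CASE WHEN x.c > 1 THEN 1 WHEN x.c < -1 THEN -1 ELSE x.c END) <= @radius;

One note on the missed-locations worry: circles of radius R spaced R apart only cover the corridor to a half-width of about 0.87R midway between samples, so either sample a little more densely or enlarge the per-sample search radius slightly (to sqrt(R^2 + (s/2)^2) for spacing s).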

oracle geocoding query is really slow, how could i optimize a dynamic field?

I have an Oracle table of 12K records (gyms), and the query below takes about 0.3s:
SELECT (acos(sin(41.922682*0.017453293) *
sin(to_number(LATITUDE)*0.017453293) + cos(41.922682*0.017453293) *
cos(to_number(LATITUDE)*0.017453293) * cos(to_number(LONGITUDE)*0.017453293 -
(-87.65432*0.017453293)))*3959) as distance
FROM gym
However, I would like to return all of the records where distance <= 10, and as soon as I run the following query, my query execution time jumps up to ~5.0s:
SELECT * from (SELECT (acos(sin(41.922682*0.017453293) *
sin(to_number(LATITUDE)*0.017453293) + cos(41.922682*0.017453293) *
cos(to_number(LATITUDE)*0.017453293) * cos(to_number(LONGITUDE)*0.017453293 -
(-87.65432*0.017453293)))*3959)
as distance FROM gym)
WHERE distance <= 10
ORDER BY distance asc
Any idea how I can optimize this in Oracle?
Most important:
Use a where clause to exclude all longitudes and latitudes that will be more than 10 km/miles (?) away from your point, so that you only run the distance calculation for the window within a 10 km/mile block.
As a very rough approximation you could use 0.1 degree as a rule of thumb; this is about 11 km at the equator, and less elsewhere.
So add:
WHERE abs(longitude - (-87.65)) < 0.1 AND abs(latitude - 41.922) < 0.1
(If you use nested queries, add this to the deepest level.)
Since your distance is smaller than 10 km or miles, you can treat the length of one degree of latitude/longitude as constant and calculate each once using your formula. Then you can use the Pythagorean rule to compute the distance (after adding the bounding box); a sketch combining both ideas follows after this answer. This is basically why people usually use projected data for calculations.
Other things:
ORDER BY is slow if there is no index it can use, and here you are sorting on a computed value, so no index helps. Do you really need to order?
Store your longitude and latitude as numbers in your table. Why would you store them as anything else in a database? (The to_number() calls suggest they are currently strings, which also blocks any index on them.)
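Putting the bounding box and the flat-earth distance together, a rough sketch in Oracle SQL might look like the following (untested against the asker's schema; the 0.15/0.20-degree margins and the 69/51.3 miles-per-degree factors assume a ~10-mile radius around 42 degrees north):

SELECT *
FROM (SELECT g.*,
             -- Pythagorean approximation, fine at this scale:
             -- ~69 miles per degree of latitude, and ~51.3 miles per
             -- degree of longitude at 42 degrees north (69 * cos(lat)).
             SQRT(POWER((TO_NUMBER(latitude)  - 41.922682) * 69.0, 2) +
                  POWER((TO_NUMBER(longitude) + 87.65432)  * 51.3, 2)) AS distance
      FROM gym g
      WHERE TO_NUMBER(latitude)  BETWEEN 41.922682 - 0.15 AND 41.922682 + 0.15
        AND TO_NUMBER(longitude) BETWEEN -87.65432 - 0.20 AND -87.65432 + 0.20)
WHERE distance <= 10
ORDER BY distance;

Note that the BETWEEN predicates can only use an index if latitude/longitude are numeric columns (or if you create function-based indexes on the TO_NUMBER expressions), which is another reason to follow the store-them-as-numbers advice above.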
With money. Specifically, Oracle Spatial.
1) How are you measuring 0.3 seconds for the first query? I'll wager that you are measuring the time required to fetch the first row rather than the time required to fetch the last row. Most client tools will start displaying results long before the database has finished producing them if that is possible (which it almost certainly is if there is no ORDER BY). So you're probably measuring the time required by the first query to calculate the distance to the first 50 or 500 gyms against the time required by the last query to calculate the distance to all 12,000 gyms.
2) Oracle Locator is a feature that comes with all editions of the Oracle database; it includes the ability to use spatial indexes and provides built-in methods for computing distance, as sketched below. It's not nearly as powerful as Oracle Spatial, but it should be more than sufficient for what you're discussing here.
3) If you want to roll your own, I'd second johanvdw's suggestion of using a bounding box.
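To make the Locator route in point 2 concrete, here is a hedged sketch (it assumes a gym.geom SDO_GEOMETRY column with the usual spatial metadata, none of which the question's table has yet):

-- Find gyms within 10 miles of the query point; SDO_WITHIN_DISTANCE
-- requires a spatial index on gym.geom.
SELECT g.*
FROM gym g
WHERE SDO_WITHIN_DISTANCE(
        g.geom,
        SDO_GEOMETRY(2001, 4326,                    -- 2-d point, WGS84
                     SDO_POINT_TYPE(-87.65432, 41.922682, NULL),
                     NULL, NULL),
        'distance=10 unit=MILE') = 'TRUE';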

Finding posts where position distance is less or equal to dynamic value

I need help with an architectural problem that I'm working on. The user enters a position and a radius (i.e., a distance). The software searches a giant database table (a couple hundred thousand posts) for posts where the user's location and the post's location are close enough to each other that their two areas intersect.
It's kind of hard for me to explain, but imagine a table with two posts, point A and point C, where point U is the user's location. The user has entered a position and a radius, and the positions and radii of A and C are predefined (stored in the database).
In this case I would only be interested in point A, because the two areas intersect each other. How should I turn this into a database query over a couple of hundred thousand posts in an efficient way? In the database I will store longitude, latitude, and radius.
It depends on which database server you're using, but look into the GIS capabilities that might be included. For example, MS SQL Server 2008 has a built-in geometry type, and PostgreSQL has PostGIS. Oracle has something like this too. Anyhow, these native GIS formats come with spatial querying functions that do the sort of thing you're talking about: searching for matches within given distances, etc. It is pretty simple to accomplish once you switch to the proper datatype.
edit
Since you're using SQL 2008, and your data is lat/long, I suggest the "geography" rather than the "geometry" datatype. Take a look here: http://msdn.microsoft.com/en-us/library/cc280766.aspx
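A hedged sketch of what the geography route could look like for the circle-intersection search (T-SQL; the posts table, geo column, and radius_m column are assumptions):

DECLARE @userLat float = 59.33, @userLng float = 18.06;  -- user position (sample values)
DECLARE @userRadius float = 5000;                        -- user radius in meters

DECLARE @user geography = geography::Point(@userLat, @userLng, 4326);

-- Two circles intersect exactly when the distance between their centers
-- is at most the sum of their radii. STDistance returns meters for SRID 4326.
SELECT p.id
FROM posts AS p
WHERE p.geo.STDistance(@user) <= @userRadius + p.radius_m;

geo can be a persisted computed column built from the stored lat/long via geography::Point, and a spatial index on it lets SQL Server prune candidates, though the per-row radius on the right-hand side limits how much of the work the index alone can do.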

Postal code (ZIP) worldwide (not just US) optimized data structure (not SQL, CSV or Google API) for long and lat retrieval

Does anyone know of a database structure such as this http://www.maxmind.com/app/geolitecity that is optimized for super-fast retrieval of longitude and latitude based on either ZIP or (City, State, Country) parameters?
Maxmind's database does not support any retrieval other than by IP, at least not to my knowledge. So if you know how to do it, preferably in Java, I'm all ears.
This should not be a SQL-type database, CSV file, or Google API solution. Those are just too slow, especially if you want to offer search results sorted by distance.
Paid solutions are also an option; the data structure doesn't have to be free.
I don't believe there is such a thing as a "fast" way to do this. I've built a geocoding API for Canadian postal codes, and the way we search is to keep two indexes of postal codes: one sorted by latitude and one sorted by longitude. You can do some spherical geometry and develop a bounding "box" that fits everything in a given radius, but you still have to go back and do a point-to-point distance measurement, using Vincenty or haversine or your algorithm of choice, between your origin and each postal code you find.
With a world-wide database, your math gets complicated by the fact that you can cross meridians and the equator.
You'll want some kind of encoding scheme that lets you work in radians, since that is what most distance-calculation heuristics require.
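In SQL terms, the two-index, radians-first layout could look something like this sketch (generic SQL; table and column names are assumptions, and exact ALTER syntax varies by engine):

-- Precompute radian values once so the distance formula never has to
-- convert per row.
ALTER TABLE postal_codes ADD lat_rad float;
ALTER TABLE postal_codes ADD lon_rad float;
UPDATE postal_codes
   SET lat_rad = RADIANS(latitude),
       lon_rad = RADIANS(longitude);
CREATE INDEX ix_pc_lat ON postal_codes (lat_rad);  -- "sorted by latitude"
CREATE INDEX ix_pc_lon ON postal_codes (lon_rad);  -- "sorted by longitude"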
This can be done very quickly with any database engine that supports two-dimensional (composite) indexes, and as far as I know MySQL supports indexes over many columns. It's simple: you use a 2-d index to cut your result set down to a reasonable size extremely quickly, then you examine that result set with a high-precision calculation algorithm if you need to. Not hard, except that you may need to OR two result lists together if the search box crosses the 180/-180 longitude line (see the sketch after this answer).
Making a 2-d index is simple: index (latitude, longitude). That index works for latitude alone or for (latitude, longitude) pairs, but it won't work for longitude alone; if you want an additional index for longitude, add index (longitude). I select out a rough estimated square and round off the corners if I care about them.
If you have a zip or city to start with, that's just a 1-d index; no problem making that fast, just add index (zip). And if your hard drive is too slow, get a solid-state drive to eliminate the seek times, or get a pile of RAM and cache the whole table. This is not a hard problem either way you go.
If that's not fast enough for you, using someone else's service won't help, because you add network overhead. You would have to hold your data directly in RAM/SSD and build your own 2-d/1-d indexing system if you really need it (not hard). That route could probably beat SQL by a factor of 10 or so, because the SQL engine has a lot of overhead. I suppose someone might offer a service that runs on your own machine, but realistically that wouldn't beat SQL by much, because you would still have to jump through a bunch of hoops to make requests to their service. SQL with 2-d indexes on a solid-state drive will be damned fast; you shouldn't need to process the data yourself unless you are the post office, sorting 10,000 pieces of mail per second with one machine serving the data. Then you would have to write your own data management routines.
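Here is a hedged sketch of the 2-d index setup and the date-line split mentioned above (MySQL; table and column names are assumptions):

-- A composite 2-d index plus a 1-d zip index.
CREATE INDEX ix_loc_lat_lon ON locations (latitude, longitude);
CREATE INDEX ix_loc_zip     ON locations (zip);

-- A bounding box centered at lon = 179.9 with half-width 0.3 wraps the
-- 180/-180 line, so the query ORs two longitude slices together:
SELECT *
FROM locations
WHERE latitude BETWEEN 52.0 AND 52.6
  AND (longitude BETWEEN  179.6 AND  180.0     -- eastern slice
    OR longitude BETWEEN -180.0 AND -179.8);   -- western slice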