Postal code (ZIP) worldwide (not just US): an optimized data structure (not SQL, CSV, or Google API) for latitude/longitude retrieval

Does anyone know of a database structure, like this one http://www.maxmind.com/app/geolitecity, that is optimized for super-fast retrieval of longitude and latitude based on either a ZIP code or (City, State, Country) parameters?
Maxmind's database does not support any retrieval other than by IP, at least not to my knowledge. So if you know how to do it, preferably in Java, I'm all ears.
This should not be an SQL-type database, a CSV file, or a Google API solution. Those are just too slow, especially if you want to offer search results sorted by distance.
Paid solutions are also an option. The data structure doesn't have to be free.

I don't believe there is such a thing as a "fast" way to do this. I've built a geocoding API for Canadian postal codes, and the way we search is to keep two indexes of postal codes - one sorted by latitude and one sorted by longitude. You can do some spherical geometry and build a bounding "box" that fits everything in a given radius, but you still have to go back and do a point-to-point distance measurement, using Vincenty or Haversine or your algorithm of choice, for the distance between your origin and each postal code you find.
With a world-wide database, your math gets complicated by the fact that you can cross meridians and the equator.
You'll want some kind of encoding scheme that lets you work in radians, since that is what most distance calculation heuristics require.
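As a rough illustration of that approach in Java - a latitude-sorted array standing in for the index, a bounding box to cut the candidate set, and haversine for the final check - something like the sketch below would do. All class and method names here are made up for the example; it is not any particular library's API.

    import java.util.*;

    // Illustrative sketch: postal codes in a latitude-sorted array, a bounding box
    // to cut the candidate set, haversine for the exact distances.
    class PostalCode {
        final String code;
        final double lat, lon;                                   // degrees
        PostalCode(String code, double lat, double lon) {
            this.code = code; this.lat = lat; this.lon = lon;
        }
    }

    class PostalIndex {
        private final PostalCode[] byLat;                        // sorted ascending by latitude;
                                                                 // a longitude-sorted copy can be intersected the same way

        PostalIndex(Collection<PostalCode> codes) {
            byLat = codes.toArray(new PostalCode[0]);
            Arrays.sort(byLat, Comparator.comparingDouble((PostalCode p) -> p.lat));
        }

        /** Haversine distance in kilometres between two points given in degrees. */
        static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
            double dLat = Math.toRadians(lat2 - lat1), dLon = Math.toRadians(lon2 - lon1);
            double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                     + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                       * Math.sin(dLon / 2) * Math.sin(dLon / 2);
            return 2 * 6371.0 * Math.asin(Math.sqrt(a));
        }

        /** All codes within radiusKm of (lat, lon), sorted by true distance. */
        List<PostalCode> near(double lat, double lon, double radiusKm) {
            // Rough box: ~111 km per degree of latitude; a degree of longitude shrinks by cos(lat).
            double dLat = radiusKm / 111.0;
            double dLon = radiusKm / (111.0 * Math.max(0.01, Math.cos(Math.toRadians(lat))));

            int lo = lowerBound(lat - dLat), hi = lowerBound(lat + dLat);
            List<PostalCode> result = new ArrayList<>();
            for (int i = lo; i < hi; i++) {
                PostalCode p = byLat[i];
                if (Math.abs(p.lon - lon) > dLon) continue;      // note: does not handle the +/-180 wrap
                if (haversineKm(lat, lon, p.lat, p.lon) <= radiusKm) result.add(p);
            }
            result.sort(Comparator.comparingDouble((PostalCode p) -> haversineKm(lat, lon, p.lat, p.lon)));
            return result;
        }

        private int lowerBound(double latitude) {                // first index with lat >= latitude
            int lo = 0, hi = byLat.length;
            while (lo < hi) {
                int mid = (lo + hi) >>> 1;
                if (byLat[mid].lat < latitude) lo = mid + 1; else hi = mid;
            }
            return lo;
        }
    }

A real system would also keep the longitude-sorted copy and deal with the antimeridian and polar cases noted above.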

This can be done very quickly with any database engine that supports two-dimensional indexes - and MySQL supports multi-column indexes, as far as I know. It's simple: you use a 2-d index to limit your result set to a reasonable size extremely quickly, then you examine that result set with a high-precision calculation algorithm if you need to. It's not hard, except that you may need to OR two lists together if they cross the 180/-180 longitude line.
Making a 2-d index is simple: index (latitude, longitude). That index only works on latitude or latitude-longitude pairs; it won't work on longitude alone. If you want an additional index for longitude, add index (longitude). I select out a rough bounding square and round off the corners if I care about them.
If you have a zip or city to start with, zip codes are just a 1-d index - no problem making that fast, just use index (zip). And if your hard drive is too slow, get a solid-state drive to eliminate the seek times, or use lots of RAM and cache the whole table. This is not a hard problem either way you want to go.
If that's not fast enough for you, using someone's service won't help, because you have network overhead; you would have to hold your data directly in RAM/SSD and build your own 2-d/1-d indexing system if you need it (not hard) - a sketch of that route follows below. That approach could probably beat SQL by a factor of 10 or so, because the SQL engine has a lot of overhead. I suppose someone might offer a service that runs on your own machine, but realistically that wouldn't beat SQL by much, because you still have to jump through a bunch of hoops to make the request to their service. SQL and 2-d indexes with a solid-state drive will be damned fast; you shouldn't need to process the data yourself unless you are the post office, sorting 10,000 pieces of mail per second with one machine serving the data - then you'll have to write your own data management routines.
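For what it's worth, the hold-it-all-in-RAM route really is just a couple of map lookups for the 1-d cases. A rough Java sketch of the idea (all names made up for the example):

    import java.util.*;

    // Rough sketch of the in-RAM route: O(1) zip lookup plus a sorted latitude index
    // for range scans; pair the latitude slab with a longitude filter as described above.
    class InMemoryGeoTable {
        private final Map<String, double[]> byZip = new HashMap<>();              // zip -> {lat, lon}
        private final TreeMap<Double, List<String>> byLatitude = new TreeMap<>(); // lat -> zips at that latitude

        void add(String zip, double lat, double lon) {
            byZip.put(zip, new double[] { lat, lon });
            byLatitude.computeIfAbsent(lat, k -> new ArrayList<>()).add(zip);
        }

        /** The 1-d "index(zip)" case: a single hash lookup. */
        double[] lookup(String zip) {
            return byZip.get(zip);
        }

        /** The 2-d pre-filter, latitude first: every zip whose latitude falls in [minLat, maxLat]. */
        Collection<List<String>> latitudeSlab(double minLat, double maxLat) {
            return byLatitude.subMap(minLat, true, maxLat, true).values();
        }
    }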

Related

Oracle String Conversion - Alpha String to Numeric Score, Fuzzy Match

I'm working with a lot of name data where the following events are happening:
In one stream the data is submitted as "Sung" and in the other stream as "Snug". My initial thought was to convert Sung and Snug so that each character equals a number; then the sums would be the same, so even if they transpose a character, I'd be able to bucket these appropriately.
The other case is where it comes in as "Lillly" in one stream as opposed to "Lilly" in the other stream. I'd like to figure out how to fuzzy match these so that I can identify them. I'm not sure if this is possible in Oracle.
I'm working with many millions of data points and trying to figure out how to write these classification buckets so that I can cut down the noise in my primary task of finding where people are truly different people, as opposed to a clerical error.
Any thoughts would be very appreciated.
A common measure for such distance is called Levenshtein distance (Wikipedia here). This measures the "edit" distance between two strings -- number of edit operations needed to convert one into the other.
That's the good news. More good news is that Oracle even has an implementation in the UTL_MATCH package.
The bad news is that it is really, really expensive on millions of data points. Unfortunately, I cannot help you much there. One idea is to determine which names are "close enough" because they already share a certain minimum number of characters.
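For reference, the measure itself is easy to reproduce outside the database; here is a minimal Java version of the classic dynamic-programming computation (purely illustrative - not the Oracle implementation):

    class EditDistance {
        /** Two-row dynamic-programming Levenshtein distance -- the measure that
         *  UTL_MATCH's edit-distance function reports. */
        static int levenshtein(String a, String b) {
            int[] prev = new int[b.length() + 1];
            int[] curr = new int[b.length() + 1];
            for (int j = 0; j <= b.length(); j++) prev[j] = j;       // distance from the empty prefix of a
            for (int i = 1; i <= a.length(); i++) {
                curr[0] = i;
                for (int j = 1; j <= b.length(); j++) {
                    int subst = prev[j - 1] + (a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1);
                    curr[j] = Math.min(subst, Math.min(prev[j] + 1, curr[j - 1] + 1));
                }
                int[] tmp = prev; prev = curr; curr = tmp;           // reuse the two rows
            }
            return prev[b.length()];
        }

        public static void main(String[] args) {
            System.out.println(levenshtein("Sung", "Snug"));         // 2 (the transposed pair costs two edits)
            System.out.println(levenshtein("Lillly", "Lilly"));      // 1 (one extra 'l')
        }
    }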
Another method is to convert the strings to what they sound like. That is called Soundex. You may be able to use the two together -- assuming your names are predominantly English (Soundex was adopted by the US Census Bureau, so it works best on names common in America).

A better way to handle Long Lat distances

OK so I don't have an issue here but I'm just wondering if there's a more standardized way to handle what I'm doing.
Essentially I have a DB table full of locations, including longitude and latitude; there could potentially be thousands of locations. I also have some functionality to search by your postcode, and you can then see, from the stored locations, the closest x of them to you.
I've read about using the Google Maps API to do this, but I don't really want to send thousands of requests to the Google Maps API.
So here's what I'm doing. I have a stored procedure to which I pass the user's long and lat. I then use these to form a column called distance, by which I order the data. I work out the distance column using the logic below:
SQRT(SQUARE((CAST(USERSLAT AS decimal(9,6))) - Latitude) + SQUARE((CAST(USERSLONG AS decimal(9,6)))-(Longitude))) AS Distance
Essentially this is doing the classic a^2 = b^2 + c^2 to find the distance between two coords, and using these results I can theoretically see the closest locations to the user. Once I have this data, I can use the Google Maps API to find the exact distances. Is this an OK way to do things? I have this nagging feeling in the back of my head that I'm missing something.
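One thing worth knowing about the a^2 = b^2 + c^2 version is that it treats a degree of longitude as the same length as a degree of latitude, which skews the ordering for points east or west of the user. A small, hypothetical Java sketch of the usual cheap fix (the equirectangular approximation) is below; the same arithmetic can be written directly inside the stored procedure before handing the short list to the Maps API:

    class RoughDistance {
        /** Cheap "good enough for ordering" distance in km between two points given in degrees.
         *  Equirectangular approximation: unlike plain a^2 = b^2 + c^2 on raw degrees, it corrects
         *  for a degree of longitude shrinking as you move away from the equator. */
        static double approxKm(double userLat, double userLng, double lat, double lng) {
            double x = Math.toRadians(lng - userLng) * Math.cos(Math.toRadians((userLat + lat) / 2));
            double y = Math.toRadians(lat - userLat);
            return 6371.0 * Math.sqrt(x * x + y * y);                // 6371 km = mean Earth radius
        }
    }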

Reading file of Airport codes and lat, long coordinates

I have been given a file of 1200 airports that lists airport code, latitude, longitude, city, and state.
EX: ANB 33.58 85.85 Anniston AL
Eventually I will be writing methods: 'Distance' to return the distance and information for two input airports, 'Closest' to return the code and distance of the closest airport to an input airport, and 'Shortest' to find the shortest trip that begins at an input airport and travels to n airports.
For now my question is: what is the best way to read in this data so that it will eventually be easier for me to write/calculate distances later?
For example, would I read in the file and then put the data into a HashMap or TreeSet in one method, and how would this be done? Or would I wait and use a HashMap/TreeSet in the other methods?
Sorry I don't have any code yet, but I'm stuck on this for now and you guys always help me out tremendously, so I'm just looking for direction at this point.
It sounds like the simplest approach would be to create an object to store the information for one airport and then store all of those objects in one array. I say this because you're probably going to be doing a lot of iteration over the entire array in order to build your other methods, and since you only have 1200 objects, any fancy sorting isn't going to speed up your program that much.
I suppose you could also divide your set of airports into geographic regions and override hashCode() to group nearby airports together, but that doesn't buy you much speed, and it's not particularly helpful for airports near the edge of a region. Similarly, you could implement a GeoHash, but these also have issues with certain edge conditions that may or may not matter with your set of airports. (There are also open-source Java implementations of GeoHashes out there if you do a search.)
Whatever you do, don't set up a group of HashMaps to map airport name to each of your other pieces of data. That is a common beginner approach, but it is also the slowest approach. Creating an object is much better.
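A rough sketch of that object-plus-list idea, assuming the whitespace-separated format in your example (code, latitude, longitude, city, state) - names and units here are just suggestions:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    /** One object per record, for lines like "ANB 33.58 85.85 Anniston AL". */
    class Airport {
        final String code, city, state;
        final double lat, lon;

        Airport(String code, double lat, double lon, String city, String state) {
            this.code = code; this.lat = lat; this.lon = lon; this.city = city; this.state = state;
        }

        /** Great-circle (haversine) distance to another airport, in miles. */
        double distanceTo(Airport other) {
            double dLat = Math.toRadians(other.lat - lat), dLon = Math.toRadians(other.lon - lon);
            double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                     + Math.cos(Math.toRadians(lat)) * Math.cos(Math.toRadians(other.lat))
                       * Math.sin(dLon / 2) * Math.sin(dLon / 2);
            return 2 * 3958.8 * Math.asin(Math.sqrt(a));             // Earth radius ~3958.8 miles
        }

        /** Reads the whole file into a list; the list is all the later methods need to iterate over. */
        static List<Airport> readFile(String path) throws IOException {
            List<Airport> airports = new ArrayList<>();
            for (String line : Files.readAllLines(Paths.get(path))) {
                if (line.trim().isEmpty()) continue;
                String[] parts = line.trim().split("\\s+");
                String state = parts[parts.length - 1];
                String city = String.join(" ", Arrays.copyOfRange(parts, 3, parts.length - 1));
                airports.add(new Airport(parts[0],
                                         Double.parseDouble(parts[1]),
                                         Double.parseDouble(parts[2]),
                                         city, state));
            }
            return airports;
        }
    }

'Closest' and 'Shortest' then become loops over the returned list, which is perfectly cheap at 1200 entries.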

How to group nearby latitude and longitude locations stored in SQL

I'm trying to analyse data on cycle accidents in the UK to find statistical black spots. Here is an example of the data from another website: http://www.cycleinjury.co.uk/map
I am currently using SQLite to store ~100k lat/lon locations. I want to group nearby locations together. This task is called cluster analysis.
I would like to simplify the dataset by ignoring isolated incidents and instead only showing the origin of clusters where more than one accident has taken place in a small area.
There are 3 problems I need to overcome.
Performance - How do I ensure finding nearby points is quick? Should I use SQLite's implementation of an R-tree, for example?
Chains - How do I avoid picking up chains of nearby points?
Density - How do I take cyclist population density into account? There is a far greater population density of cyclists in London than in, say, Bristol, so there appears to be a greater number of black spots in London.
I would like to avoid 'chain' scenarios like this:
Instead I would like to find clusters:
London screenshot (I hand drew some clusters)...
Bristol screenshot - much lower density; the same program run over this area might not find any black spots if relative density were not taken into account.
Any pointers would be great!
Well, your problem description reads exactly like the DBSCAN clustering algorithm (Wikipedia). It avoids chain effects in the sense that it requires dense neighborhoods of at least minPts objects before a cluster can grow.
As for the differences in density across the map, that is what OPTICS (Wikipedia) is supposed to solve. You may need to use a different way of extracting clusters, though.
Well, OK, maybe not 100% - you may want single hotspots, not areas that are "density connected". Thinking of an OPTICS plot, I figure you are only interested in small but deep valleys, not in large valleys. You could probably use the OPTICS plot and scan for local minima of "at least 10 accidents".
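To make the eps/minPts mechanics concrete, here is a naive Java sketch of DBSCAN. This is not the ELKI implementation used in the update below, and the O(n^2) neighbour search is only workable on small samples; for ~100k points you would want a spatial index such as an R-tree.

    import java.util.*;

    // Naive DBSCAN sketch, purely to illustrate what eps (in km here) and minPts do.
    class Dbscan {
        static final int NOISE = -1, UNVISITED = 0;

        static double haversineKm(double[] a, double[] b) {          // points are {lat, lon} in degrees
            double dLat = Math.toRadians(b[0] - a[0]), dLon = Math.toRadians(b[1] - a[1]);
            double h = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                     + Math.cos(Math.toRadians(a[0])) * Math.cos(Math.toRadians(b[0]))
                       * Math.sin(dLon / 2) * Math.sin(dLon / 2);
            return 2 * 6371.0 * Math.asin(Math.sqrt(h));
        }

        /** Returns one label per point: -1 for noise (isolated incidents), 1..k for clusters. */
        static int[] cluster(double[][] points, double epsKm, int minPts) {
            int[] label = new int[points.length];                    // 0 = unvisited
            int clusterId = 0;
            for (int i = 0; i < points.length; i++) {
                if (label[i] != UNVISITED) continue;
                List<Integer> seeds = neighbours(points, i, epsKm);
                if (seeds.size() < minPts) { label[i] = NOISE; continue; }   // not a core point
                clusterId++;
                label[i] = clusterId;
                Deque<Integer> queue = new ArrayDeque<>(seeds);
                while (!queue.isEmpty()) {
                    int j = queue.poll();
                    if (label[j] == NOISE) label[j] = clusterId;     // border point, don't expand from it
                    if (label[j] != UNVISITED) continue;
                    label[j] = clusterId;
                    List<Integer> more = neighbours(points, j, epsKm);
                    if (more.size() >= minPts) queue.addAll(more);   // j is a core point, keep expanding
                }
            }
            return label;
        }

        private static List<Integer> neighbours(double[][] pts, int i, double epsKm) {
            List<Integer> out = new ArrayList<>();                   // includes the point itself
            for (int j = 0; j < pts.length; j++)
                if (haversineKm(pts[i], pts[j]) <= epsKm) out.add(j);
            return out;
        }
    }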
Update: Thanks for the pointer to the data set - it's really interesting. I did not filter it down to cyclists, so right now I'm using all 1.2 million records with coordinates. I've fed them into ELKI for analysis, because it's really fast and it can actually use geodetic distance (i.e. on latitude and longitude) instead of Euclidean distance, to avoid bias. I've enabled the R*-tree index with STR bulk loading, because that is supposed to bring the runtime down a lot.
I'm running OPTICS with Xi=.1, epsilon=1 (km) and minPts=100 (looking for large clusters only). Runtime was around 11 minutes; not too bad. The OPTICS plot would of course be 1.2 million pixels wide, so it's not really useful for full visualization anymore. Given the huge threshold, it identified 18 clusters with 100-200 instances each. I'll try to visualize these clusters next. But definitely try a lower minPts for your experiments.
So here are the major clusters found:
51.690713 -0.045545 a crossing on A10 north of London just past M25
51.477804 -0.404462 "Waggoners Roundabout"
51.690713 -0.045545 "Halton Cross Roundabout" or the crossing south of it
51.436707 -0.499702 Fork of A30 and A308 Staines By-Pass
53.556186 -2.489059 M61 exit to A58, North-West of Manchester
55.170139 -1.532917 A189, North Seaton Roundabout
55.067229 -1.577334 A189 and A19, just south of this, a four lane roundabout.
51.570594 -0.096159 Manor House, Piccadilly Line
53.477601 -1.152863 M18 and A1(M)
53.091369 -0.789684 A1, A17 and A46, a complex construct with roundabouts on both sides of A1.
52.949281 -0.97896 A52 and A46
50.659544 -1.15251 Isle of Wight, Sandown.
...
Note, these are just random points taken from the clusters. It may be sensible to compute e.g. cluster center and radius instead, but I didn't do that. I just wanted to get a glimpse of that data set, and it looks interesting.
Here are some screenshots, with minPts=50, epsilon=0.1, xi=0.02:
Notice that with OPTICS, clusters can be hierarchical. Here is a detail:
First, your example is quite misleading. You have two different sets of data, and you don't control the data. If it appears in a chain, then you will get a chain out.
This problem is not exactly suitable for a database. You'll have to write code or find a package that implements this algorithm on your platform.
There are many different clustering algorithms. One, k-means, is an iterative algorithm where you look for a fixed number of clusters. k-means requires a few complete scans of the data, and voila, you have your clusters. Indexes are not particularly helpful.
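As a concrete (and purely hypothetical) illustration of those "complete scans", one k-means (Lloyd) iteration over coordinates projected to a flat x/y plane looks roughly like this in Java - repeat it until the assignments stop changing:

    // Sketch of a single k-means iteration: one full scan to assign points, one pass to update centroids.
    class KMeans {
        static double[][] step(double[][] points, double[][] centroids, int[] assignment) {
            int k = centroids.length;
            double[][] sums = new double[k][2];
            int[] counts = new int[k];
            for (int i = 0; i < points.length; i++) {                // assign each point to the nearest centroid
                int best = 0;
                double bestD = Double.MAX_VALUE;
                for (int c = 0; c < k; c++) {
                    double dx = points[i][0] - centroids[c][0], dy = points[i][1] - centroids[c][1];
                    double d = dx * dx + dy * dy;
                    if (d < bestD) { bestD = d; best = c; }
                }
                assignment[i] = best;
                sums[best][0] += points[i][0]; sums[best][1] += points[i][1];
                counts[best]++;
            }
            double[][] next = new double[k][2];                      // new centroid = mean of its points
            for (int c = 0; c < k; c++) {
                next[c][0] = counts[c] > 0 ? sums[c][0] / counts[c] : centroids[c][0];
                next[c][1] = counts[c] > 0 ? sums[c][1] / counts[c] : centroids[c][1];
            }
            return next;
        }
    }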
Another, which is usually appropriate on slightly smaller data sets, is hierarchical clustering -- you put the two closest things together, and then build the clusters. An index might be helpful here.
I recommend, though, that you peruse a site such as KDnuggets to see what software -- free and otherwise -- is available.

spatial data distance search - optimization options

Our business user loves for our searches to be done by distance; the problem is we have over 1 million records with a lat/long location. We are using SQL 2008, but we keep running into issues when we order or restrict our searches by distance: the queries take way too long (30 seconds plus). This is unacceptable; there has got to be a better way to do this. We have done everything we can with SQL 2008 and want to upgrade to 2012 at some point if we can.
So I ask: is there another technology or optimization that we could apply? Could we switch to a different DB for faster performance, or apply a different search algorithm, an estimation algorithm, trees, grids, pre-computation, etc.?
A solution that might be useful here would be to break your search into two parts:
1) Run a query that finds all records within a certain value plus or minus the current lat/lng of your location; the where clause might look like:
where (@latitude > (lat - .001) and @latitude < (lat + .001)) and (@longitude > (lng - .001) and @longitude < (lng + .001))
Using this approach, and especially with an index on both the latitude and longitude columns, you can very quickly define a working set of locations within a specified distance.
2) With the rough results from step 1, use the great-circle/haversine method to determine the actual distance between the source location and each point.
Where this approach falls over is when there is no limit at all to the radius you are searching, but it works great if you are, for instance, looking for all locations within a specific distance of a given point.
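If it helps, the .001 above is really a radius expressed in degrees. A hypothetical helper (shown in Java, though the same arithmetic works in T-SQL) could derive the step-1 window from an actual distance and then perform the step-2 check:

    class RadiusWindow {
        /** Degree offsets {dLat, dLng} covering radiusKm around a point at latDeg --
         *  i.e. what to use instead of the hard-coded .001 in the step-1 where clause. */
        static double[] window(double latDeg, double radiusKm) {
            double dLat = radiusKm / 111.0;                                      // ~111 km per degree of latitude
            double dLng = radiusKm / (111.0 * Math.cos(Math.toRadians(latDeg))); // longitude degrees shrink by cos(lat)
            return new double[] { dLat, dLng };
        }

        /** Step-2 refinement: exact great-circle (haversine) distance in km. */
        static double haversineKm(double lat1, double lng1, double lat2, double lng2) {
            double dLat = Math.toRadians(lat2 - lat1), dLng = Math.toRadians(lng2 - lng1);
            double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                     + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                       * Math.sin(dLng / 2) * Math.sin(dLng / 2);
            return 2 * 6371.0 * Math.asin(Math.sqrt(a));
        }
    }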