Need a little help from someone who knows a bit about logistics.
I am currently working with an application known as Framework. I'm not really familiar with it, but I can figure out how it works. One of the tabs in the application is for expected orders (shipping trucks). Within that tab, I can see an outbound truck's current location as well as its destination. I am trying to add functionality that would show an estimated time of arrival: the time to the truck's current destination plus the drive back to my location. This seems simple enough, but I'm trying to figure out the best way to calculate it. I looked into the Google Distance Matrix API, but I have no need to display a map in the application; all I want is the ETA. I am pretty inexperienced with this kind of thing, so I was hoping someone could point me in the right direction.
Thanks guys.
This may not be the best forum for this question...
It looks like Google Distance Matrix requires you to display the map. An alternative is the open source OSRM project. Natively it's a C++ routing engine that outputs directions and total route information, so any map display is up to you.
There is a demo and an HTTP API hosted on the project site, but you will need to check whether it's suitable for your usage level.
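For what it's worth, here's a minimal sketch of querying OSRM's public demo server for a two-leg route (truck -> destination -> home). The coordinates are hypothetical, and you should check the demo server's usage policy before relying on it:

```python
import requests

# Hypothetical coordinates; OSRM expects lon,lat order.
truck = (-87.65, 41.85)
dest = (-87.90, 41.97)
home = (-87.63, 41.88)

def osrm_duration(points):
    """Return the driving duration in seconds for a route through points,
    using the public OSRM demo server."""
    coords = ";".join(f"{lon},{lat}" for lon, lat in points)
    url = f"https://router.project-osrm.org/route/v1/driving/{coords}"
    resp = requests.get(url, params={"overview": "false"})
    resp.raise_for_status()
    return resp.json()["routes"][0]["duration"]

# ETA = leg to the destination + leg back to our location.
eta_seconds = osrm_duration([truck, dest, home])
print(f"Truck should be back in about {eta_seconds / 60:.0f} minutes")
```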
Just an idea, but depending on the size of your delivery area, and how accurate you want the estimated time, you may be able to keep it all in a database.
Let's assume your delivery area is 10 miles x 10 miles.
So that's 100 square miles. We'll use each square mile as a point.
Do a one-time calculation of how long it takes to get from each point to every other point. You can use the Google Distance Matrix API for this since you're only doing it once.
This will give you 10,000 records (100 points squared) containing every point-to-point time.
So, if your truck is at point 25 and has to get to point 64, you do a lookup and see that it should take about 10 minutes. The drive from point 64 back to the warehouse (point 10) is 8 minutes, so you'll know the truck should be back in about 18 minutes.
It's not super accurate, but it might be close enough for your needs. I would be curious if you do implement this method.
Btw, if your delivery area is 100 miles x 100 miles, that's 10,000 points at 1 square mile each, which means 100,000,000 point-to-point records. If that's too much, increasing your point size to 2 miles x 2 miles (4 square miles) brings it down to 2,500 points, or about 6,000,000 records.
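A minimal sketch of that lookup, assuming the point-to-point times were precomputed once and loaded into a table (the values below are the hypothetical ones from the example):

```python
# Precomputed travel times, keyed by (from_point, to_point).
travel_minutes = {
    (25, 64): 10,   # hypothetical precomputed values
    (64, 10): 8,
}

GRID_WIDTH = 10  # 10 x 10 mile area, 1-mile cells

def point_id(x_mile, y_mile):
    """Map a position in the delivery area to its grid point id."""
    return int(y_mile) * GRID_WIDTH + int(x_mile)

def round_trip_eta(truck_point, dest_point, warehouse_point):
    return (travel_minutes[(truck_point, dest_point)]
            + travel_minutes[(dest_point, warehouse_point)])

truck = point_id(5, 2)       # -> point 25
dest = point_id(4, 6)        # -> point 64
warehouse = point_id(0, 1)   # -> point 10
print(round_trip_eta(truck, dest, warehouse))  # -> 18 minutes
```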
Related
Is there currently a way to incorporate traffic patterns into OptaPlanner with the package and delivery VRP problem?
E.g., let's say I need to optimize 500 pickups and deliveries today and tomorrow among 30 vehicles, where each pickup has a 1-4 hr time window. I want to avoid busy areas of the city during rush hour when possible.
New pickups can also be added (or cancelled) in the meantime.
I'm sure this is a common problem. Does a decent solution exist for this in OptaPlanner?
Thanks!
Users often do this, but there is no out-of-the-box example of it.
There are several ways to do it, but one way is to add a third dimension to the distanceMatrix, indicating the departureTime. Typically that uses a granularity of 15 minutes, 30 minutes, or 1 hour.
There are 2 scaling concerns here:
Memory: a granularity of 15 minutes means 24 * 4 = 96 time buckets per day. Given that with 2 dimensions a 10k-location distance matrix already uses almost 2 GB of RAM, memory can clearly become a concern.
Pre-calculation time: calculating the distance matrix can be time consuming. "Bulk algorithms" can help here. For example, the GraphHopper community edition doesn't support bulk distance calculations, but their enterprise version does, as does OSRM (which is free). Getting a 3-dimensional matrix from the remote Google Maps API, or the remote enterprise GraphHopper API, can also run into bandwidth concerns (see above: the distance matrix can become several GB in size, especially in non-binary formats such as JSON or CSV).
In any case, once that 3-dimensional matrix is there, it's just a matter of adjusting the OptaPlanner example's ArrivalTimeUpdateListener to use getDistance(from, to, departureTime).
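As a rough illustration of the data structure only (this is not OptaPlanner API; all names here are made up), a time-bucketed matrix lookup could look like:

```python
import numpy as np

BUCKET_MINUTES = 15
BUCKETS_PER_DAY = 24 * 60 // BUCKET_MINUTES  # 96 buckets per day

n_locations = 100  # hypothetical problem size
# matrix[from][to][bucket] -> travel seconds, precomputed in bulk.
# For 10k locations this is 10k * 10k * 96 entries, hence the memory concern.
matrix = np.zeros((n_locations, n_locations, BUCKETS_PER_DAY), dtype=np.int32)

def get_distance(from_loc: int, to_loc: int, departure_min: int) -> int:
    """Look up travel time given the departure time (minutes since midnight)."""
    bucket = (departure_min // BUCKET_MINUTES) % BUCKETS_PER_DAY
    return int(matrix[from_loc, to_loc, bucket])
```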
I am currently developing a technique to help users find a spot to park.
But I face a little problem:
what if a user indicates that he is currently parking in a free spot, but he is lying and is actually at home?
How can I detect from GPS whether he is inside a building or alongside the road?
Thanks
You'll need map data (OpenStreetMap is free) and to figure out whether the user is somewhere on that map or not. You do that by comparing the GPS data to the map data.
What I do in such situations is measure the distance between the lat/lon and each nearby road, and compare the GPS heading to the bearing of each road segment. The more context information you use, the more accurate your results will be:
If the speed is 60km/h, you're probably not in a building. You're probably not on a 30km/h road either.
If you're standing still for more than 2 minutes, you're probably not in a car.
If you know the buildings, and there are only a few of them, you could check if you see a certain wifi router or not.
Basically you'll calculate a score for each road, and then pick the road with the highest score to know where you are.
Score = DistScore*DistWeight + AngleScore*AngleWeight, etc.
Also, from iOS and Android you get an accuracy in meters. You can also calculate that yourself if you can access raw GPS data. Using that, you set the area that you need to scan. For example, with high accuracy (3m) you probably don't have many roads to scan; if the accuracy is 50m, you should probably also consider roads that are farther away.
If accuracy is important, you should look at series of GPS data, and test if the followed route is a logical path or not.
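To make the scoring concrete, here is a rough sketch; the weights, field names, and candidate roads below are all made up, and real geometry would come from your map data (e.g. OpenStreetMap ways):

```python
# Hypothetical weights; tune against labeled data.
DIST_WEIGHT = 0.7
ANGLE_WEIGHT = 0.3

def score(road, fix):
    """Score one candidate road against a GPS fix (position + heading)."""
    # Closer roads score higher; normalize by the GPS accuracy radius.
    dist_score = max(0.0, 1.0 - road["distance_m"] / fix["accuracy_m"])
    # Headings that align with the road's bearing score higher.
    diff = abs(fix["heading_deg"] - road["bearing_deg"]) % 360
    diff = min(diff, 360 - diff)          # fold into 0..180 degrees
    angle_score = 1.0 - diff / 180.0
    return dist_score * DIST_WEIGHT + angle_score * ANGLE_WEIGHT

fix = {"accuracy_m": 50.0, "heading_deg": 80.0}
roads = [
    {"name": "Main St", "distance_m": 12.0, "bearing_deg": 85.0},
    {"name": "Side Rd", "distance_m": 40.0, "bearing_deg": 170.0},
]
best = max(roads, key=lambda r: score(r, fix))
print(best["name"])  # -> Main St
```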
I've been using OxyPlot for a month now and I'm pretty happy with what it delivers. I'm getting data from an oscilloscope and, after some fast processing, plotting it in real time to a graph.
However, if I compare my application's CPU usage to that of the application provided by the oscilloscope manufacturer, I'm loading the CPU a lot more. Maybe they're using some GPU-based plotter, but I think I can reduce my CPU usage with some modifications.
I'm capturing 10,000 samples per second and adding them to a LineSeries. I'm not plotting all that data; I decimate it to a constant number of points, say 80 points for a 20-second measure, so I have 4 points/sec while fully zoomed out and a bit more detail if I zoom in to a specific range.
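A simplified sketch of the kind of decimation described above (naive stride picking; a per-bucket min/max pick would preserve peaks better):

```python
def decimate(samples, max_points=80):
    """Keep roughly max_points evenly spaced samples (naive stride pick)."""
    if len(samples) <= max_points:
        return list(samples)
    stride = len(samples) / max_points
    return [samples[int(i * stride)] for i in range(max_points)]
```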
With the aid of ReSharper, I've noticed that the application is calling the IsValidPoint method a huge number of times (something like 400,000,000 calls; I have 6 different plots), which is taking a lot of time.
I think the problem is that when I add new points to the series, it checks every point for validity instead of only the newly added values.
Also, it spends a lot of time in the MeasureText/DrawText method.
My question is: is there a way to override those methods and adapt them to my needs? I'm adding 10,000 new values each second, but the earlier ones remain the same, so there's no need to re-validate them. Also, the text shown doesn't change.
Thank you in advance for any advice you can give me. Have a good day!
I'm using the Foursquare API to get a list of venues of a certain category.
One important requirement is that the list is exhaustive, i.e. includes all relevant points. The v2/venues/search API endpoint enforces a limit of 50 venues on the output.
So the first idea that comes to mind is splitting the area into several sections (using "sw" and "ne" params) and then combining the results.
Clearly, the density of points will vary dramatically depending on location, so we'll need to use some kind of adaptive algorithm to flexibly adjust the size of the search window so that it contains all points. Also, there's an increased risk of running into the rate limit, so we might need the algorithm to stop when it's used up its quota of requests.
Finally, it seems that the only way to tell whether a search window should be shrunk further is to count the number of points in the result: if we have fewer than 50, then we've got a complete list for this section and can move on to the next one; otherwise, we should split it further. This seems wasteful, as we'll be throwing away the intermediate results (i.e. all results in our search tree except for the leaves).
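For reference, a sketch of that adaptive splitting; fetch_venues is a placeholder for a v2/venues/search call with sw/ne bounds, the limit of 50 follows the question, and the request budget is a simple stand-in for the rate limit:

```python
LIMIT = 50

def exhaustive_search(sw, ne, fetch_venues, budget):
    """Recursively quarter the (lat, lng) bounding box until each box
    returns fewer than LIMIT venues."""
    if budget["requests"] <= 0:
        return set()                      # stop when the quota is spent
    budget["requests"] -= 1
    venues = fetch_venues(sw=sw, ne=ne)
    if len(venues) < LIMIT:
        return {v["id"] for v in venues}  # complete for this box
    # Over the cap: split into 4 quadrants and recurse. The 50 venues
    # fetched here are the "thrown away" intermediate results.
    mid = ((sw[0] + ne[0]) / 2, (sw[1] + ne[1]) / 2)
    quadrants = [
        (sw, mid),                              # southwest
        ((mid[0], sw[1]), (ne[0], mid[1])),     # northwest
        ((sw[0], mid[1]), (mid[0], ne[1])),     # southeast
        (mid, ne),                              # northeast
    ]
    found = set()
    for q_sw, q_ne in quadrants:
        found |= exhaustive_search(q_sw, q_ne, fetch_venues, budget)
    return found
```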
So here are some questions that I have:
Is it the best way to put together an exhaustive list? Maybe I'm missing some API functionality?
Is there any specific algorithm you'd use in this case?
How would you go about reducing the number of results that have to be thrown away?
Thanks in advance!
An important disclaimer: Foursquare does not like it when you perform a lot of searches in the same area.
Having said that, you should look into experimenting with the categoryId filter in the venue search API. Most of the data on Foursquare is food (restaurants) and nightlife related.
So if you exclude these (by including all the other categories; there is no way to exclude directly), you can search a larger area and still stay below 50 results.
I never really tried using such an algorithm because the categoryId filtering worked well enough, but in theory the algorithm is simple: each 0.001 degree of latitude is ~111 meters.
Search using a small radius (~200 m for large metropolitan areas) and scan the area systematically.
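A sketch of that scan, with a hypothetical radius and bounds; note that a longitude step really should be scaled by cos(latitude), which is ignored here for brevity:

```python
RADIUS_M = 200
# A circle of radius r covers a square of side r * sqrt(2), so step the
# grid by that much; 0.001 degree of latitude is ~111 m.
STEP_DEG = (RADIUS_M * 1.414) / 111_000

def scan_centers(south, west, north, east):
    """Yield (lat, lng) centers to pass to venues/search, one per grid cell."""
    lat = south
    while lat <= north:
        lng = west
        while lng <= east:
            yield (lat, lng)
            lng += STEP_DEG
        lat += STEP_DEG
```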
What originally got us to perform a lot of searches (and later stop doing so) is that sometimes Foursquare filters out results without telling you (to me it looks like bugs; to them it's part of the algorithm). So for example I would search with a 50-meter radius and find the place I want (I know what I am searching for), expand to 500 meters and not find it (while getting fewer than 50 results, so it was not dropped because I hit the cap; it was dropped because ???), then move my search location ~300 meters north and find it again -> sporadic behavior.
My point is (and the reason why we stopped making a lot of searches and changed our approach): given the current API and usage policy, the 'complete coverage' you are trying to achieve is very hard to do, and it is not really that important. After a few months of playing with it, we figured out that we should query Foursquare for what our users are looking for at this moment, and we cache the results. Over time we will have complete coverage; at the start we may miss a few spots, but in the long run that's not really important.
Hopefully this is not what you're doing, but as a friendly reminder: scraping foursquare's website and/or API is very much prohibited by its terms of service.
We're building a GIS interface to display GPS track data, e.g. imagine the raw data set from a guy wandering around a neighborhood on a bike for an hour. A data set like this, with perhaps a new point recorded every 5 seconds, will be large, and displaying it in a browser or on a handheld device will be challenging. Also, displaying every single point is usually not necessary, since a user can't visually resolve that much data anyway.
So for performance reasons we are looking for algorithms that are good at 'reducing' data like this, so that the number of points being displayed is reduced significantly but in such a way that it doesn't risk misinterpretation of the data. For example, if our fictional bike rider stops for a drink, we certainly don't want to draw 100 lat/lon points in a cluster around the 7-Eleven.
We are aware of clustering, which is good when looking at a bunch of disconnected points; however, what we need is something that applies to tracks as described above. Thanks.
A more scientific and perhaps more math-heavy solution is to use the Ramer-Douglas-Peucker algorithm to generalize your path. I used it when I studied for my Master of Surveying, so it's a proven thing. :-)
Given your path and the maximum deviation you can tolerate, it simplifies the path by reducing the number of points.
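A minimal Ramer-Douglas-Peucker sketch on planar (x, y) points (for lat/lon you'd project first); epsilon is the deviation tolerance in the same units as the points:

```python
def rdp(points, epsilon):
    """Simplify a polyline, keeping points that deviate more than epsilon."""
    if len(points) < 3:
        return points
    (x1, y1), (x2, y2) = points[0], points[-1]

    def dist(p):
        # Perpendicular distance from p to the chord (start -> end).
        x0, y0 = p
        num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
        den = ((y2 - y1) ** 2 + (x2 - x1) ** 2) ** 0.5
        return num / den if den else ((x0 - x1) ** 2 + (y0 - y1) ** 2) ** 0.5

    idx, dmax = max(((i, dist(p)) for i, p in enumerate(points[1:-1], 1)),
                    key=lambda t: t[1])
    if dmax <= epsilon:
        return [points[0], points[-1]]   # everything in between is noise
    # Keep the farthest point and recurse on both halves.
    return rdp(points[:idx + 1], epsilon)[:-1] + rdp(points[idx:], epsilon)
```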
Typically the best way of doing that is:
1. Determine the minimum number of screen pixels you want between displayed GPS points.
2. Determine the distance represented by each pixel at the current zoom level.
3. Multiply answer 1 by answer 2 to get the minimum distance between coordinates you want to display.
4. Starting from the first coordinate in the journey path, read each subsequent coordinate until you've reached the required minimum distance from the current point. Keep that one and repeat.
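A sketch of those steps, assuming a flat-earth approximation that's fine for display thinning; the pixel spacing and meters-per-pixel values are hypothetical:

```python
import math

def thin_by_distance(points, min_px=5, meters_per_px=10):
    """Keep a point only once it is min_px * meters_per_px meters away
    from the last kept point. points is a list of (lat, lon) tuples."""
    min_m = min_px * meters_per_px                      # steps 1-3
    kept = [points[0]]
    for lat, lon in points[1:]:
        last_lat, last_lon = kept[-1]
        # Flat-earth deltas: ~111,320 m per degree of latitude,
        # longitude scaled by cos(latitude).
        dx = (lon - last_lon) * 111_320 * math.cos(math.radians(lat))
        dy = (lat - last_lat) * 111_320
        if math.hypot(dx, dy) >= min_m:                 # step 4
            kept.append((lat, lon))
    return kept
```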