I am saving the best solution into the DB, and we display it on the web page. I am looking for a way to let a user add more visits without changing already-published trips.
I have checked the documentation and found that ProblemFactChange can be used, but only while the solver is running.
In my case the solver has already terminated and the solution is published. Now I want to add more visits to the vehicle without modifying the vehicle's existing visits. Is this possible with OptaPlanner? If yes, any example or documentation would be very helpful.
You can use the @PlanningPin annotation to prevent unwanted changes.
Optaplanner - Pinned planning entities
If you're not looking for pinning (see Ismail's excellent answer), take a look at the OptaPlanner School Timetabling example, which allows adding lessons between solver runs. The lessons simply get stored in the database and then get loaded when the solver starts.
The difficulty with VRP is the complexity of the chained model (we're working on an alternative): if you add a visit X between A and B, then make sure that afterwards A.next = X, B.previous = X, X.previous = A, X.next = B, and X.vehicle = A.vehicle. Not to mention the arrival times, etc.
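A minimal sketch of that relinking, using plain Python objects purely for illustration (the class and attribute names mirror the description above, not OptaPlanner's actual API):

class Visit:
    def __init__(self, name, vehicle=None):
        self.name = name
        self.vehicle = vehicle
        self.previous = None
        self.next = None

def insert_between(x, a, b):
    # wire X into the chain between the consecutive visits A and B
    a.next = x
    x.previous = a
    x.next = b
    b.previous = x
    x.vehicle = a.vehicle
    # arrival times of X, B and every later visit still need recalculating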
My suggestion would be to re-solve what is left after the changes have been introduced. Let's say you have visited half of your destinations (A -> B -> C) but not yet (C -> D -> E) when two new possible destinations (D' and E') are introduced. Wouldn't this be the same as starting in C and planning for D, D', E and E'? The solution does need to be updated with the progress so far, though, so that the remainder plus the changes can be the input for the next solve.
Just my two cents.
I've found this code for getting articles by tag and displaying them as a list with links in XWiki, but I want it sorted by date.
Does anyone have a suggestion for me?
{{velocity}}
#set ($list = $xwiki.tag.getDocumentsWithTag('myTag'))
#foreach($reference in $list)
#set ($document = $xwiki.getDocument($reference))
#set ($label = $document.getTitle())
[[$label>>$reference]]
#end
{{/velocity}}
Thanks in advance!
Sorting in Velocity can hit one of 2 performance penalties:
1. Actually sorting in Velocity with a hand-written sorting algorithm -> unnecessarily complicated.
2. Loading all the document results into memory (a collection) and sorting that collection with the sort/collection tool -> you risk quickly running out of memory if the result is larger than you expected.
The easiest alternative, given that there is XWiki running behind it, would be to do an XWQL query for the XWiki.TagClass objects stored inside the documents and do the sorting at the database level. At this point, in velocity, you only need to display the results:
{{velocity}}
#foreach ($docStringRef in $services.query.xwql("from doc.object(XWiki.TagClass) tagsObj where 'conference' member of tagsObj.tags order by doc.creationDate DESC").setLimit(10).execute())
#set ($document = $xwiki.getDocument($docStringRef))
[[$document.title>>$docStringRef]]
#end
{{/velocity}}
For future use/reference, the list of available Velocity tools in XWiki might also be useful (https://extensions.xwiki.org/xwiki/bin/view/Extension/Velocity%20Module#HVelocityTools), since they can help with common operations, including the sorting I mentioned at point 2 above.
How can I get a list of all topics that I created?
I think it should be something like
%SEARCH{ "versions[-1].info.author = '%USERNAME%'" type="query" web="Sandbox" }%
but that returns 0 results.
With "versions[-1]" I get all topics, and with "info.author = '%USERNAME%'" a list of the topics where the last edit was made by me. Having a list of all topics where any edit was made by me would be fine, too, but "versions.info.author = '%USERNAME%'" again gives 0 results.
I’m using Foswiki-1.0.9. (I know that’s quite old.)
The right syntax would be
%SEARCH{ "versions[-1,info.author='%USERNAME%']" type="query" web="Sandbox"}%
But that won't perform well, especially on an old Foswiki install like yours.
A better option is to install DBCacheContrib and DBCachePlugin and use
%DBQUERY{"createauthor='%WIKINAME%'"}%
This plugin caches the initial author so that it does not have to retrieve the information from the revision system for every topic under consideration at query time.
I have columns taken from Excel as a dataframe; the columns are as follows:
HolidayTourProvider|Packages|Meals|Accommodation|LocalTravelVehicle|Cancellationfee
HolidayTourProvider has a couple of company names.
Packages: the features provided in each package are mostly the same (Meals, Accommodation, etc.), even though one company may call it "Saver" while another calls it "Budget". Most columns hold Yes/No values, except LocalTravelVehicle, which holds car names like Ford Taurus or Jeep Cherokee, and Cancellationfee, which is an integer amount.
I need to write a function like
match(HolidayTP,Package)
where the user can give input like
match(AdventureLife, Luxury)
then I need to return all the packages from other holiday tour providers that have features similar to Luxury, no matter what name they give the package ('Semi Lux', 'Comfort', etc.).
I want to keep a counter of matching features for every package and display all the packages whose counter exceeds 3 or 4.
This is my first Python code and I am stuck here.
fb is the dataframe I exported the whole table into:
def mapHol(HTP, PACKAGE):
    # row(s) for the provider/package the user asked about
    mfb = (fb['HolidayTourProvider'] == HTP) & (fb['Packages'] == PACKAGE)
    B = fb[mfb]
    ref = B.iloc[0]                      # the reference package row
    count = 0
    # compare every column of every package against the reference package
    for i, row in fb.iterrows():
        for col in fb.columns:
            if row[col] == ref[col]:
                count += 1
    # 'count' is a single total here; what I actually need is a count per package
I don't know how to proceed. Please help me; this is my first major project, which I started on my own.
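For reference, here is a minimal pandas sketch of the kind of matching described above; the feature list, the match_count column and the threshold parameter are illustrative names, not part of the original data:

# fb is assumed to be the dataframe described above
# Feature columns to compare (taken from the column list in the question)
FEATURES = ['Meals', 'Accommodation', 'LocalTravelVehicle', 'Cancellationfee']

def match(provider, package, threshold=3):
    # the reference package chosen by the user
    ref = fb[(fb['HolidayTourProvider'] == provider) &
             (fb['Packages'] == package)].iloc[0]
    # packages offered by every other provider
    others = fb[fb['HolidayTourProvider'] != provider].copy()
    # count how many feature columns agree with the reference package
    others['match_count'] = (others[FEATURES] == ref[FEATURES]).sum(axis=1)
    # keep only packages sharing at least `threshold` features
    return others[others['match_count'] >= threshold]

# e.g. match('AdventureLife', 'Luxury')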
The project I am working on involves static offline GTFS data in a mobile app. All the GTFS data is available inside realm-objects (or SQLite if needed).
Now, I would like to establish all train- or bus-connections from A to B (starting after a certain departure-time).
How do I query the GTFS data in order to get a connection from A to B?
I managed to get all trips leaving from A.
I managed to get all station names along those trips, including times.
But I find it very hard to get the connection information between two locations A and B. What SQL queries do I have to set up in order to get that information?
Any help appreciated!
If you just want to dynamically calculate the shortest travel routes between static hubs in an offline application, you can use Dijkstra's algorithm:
(Source)
Here's the pseudo code:
function Dijkstra(Graph, source):

    create vertex set Q

    for each vertex v in Graph:             // Initialization
        dist[v] ← INFINITY                  // Unknown distance from source to v
        prev[v] ← UNDEFINED                 // Previous node in optimal path from source
        add v to Q                          // All nodes initially in Q (unvisited nodes)

    dist[source] ← 0                        // Distance from source to source

    while Q is not empty:
        u ← vertex in Q with min dist[u]    // Node with the least distance
                                            // will be selected first
        remove u from Q

        for each neighbor v of u:           // where v is still in Q
            alt ← dist[u] + length(u, v)
            if alt < dist[v]:               // A shorter path to v has been found
                dist[v] ← alt
                prev[v] ← u

    return dist[], prev[]
(Source)
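For a concrete reference, here is a small Python version of the same algorithm using a binary heap; the dict-of-dicts graph structure is purely illustrative and not tied to any GTFS schema:

import heapq

def dijkstra(graph, source):
    # graph: {node: {neighbor: edge_length}}; returns (dist, prev)
    dist = {v: float('inf') for v in graph}
    prev = {v: None for v in graph}
    dist[source] = 0
    queue = [(0, source)]                  # (distance, node) min-heap

    while queue:
        d, u = heapq.heappop(queue)
        if d > dist[u]:                    # stale entry, u already settled
            continue
        for v, length in graph[u].items():
            alt = d + length
            if alt < dist[v]:              # a shorter path to v has been found
                dist[v] = alt
                prev[v] = u
                heapq.heappush(queue, (alt, v))

    return dist, prev

# Example: travel times in minutes between three stops
graph = {'A': {'B': 5, 'C': 12}, 'B': {'C': 4}, 'C': {}}
print(dijkstra(graph, 'A'))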
Alright, I'll admit so far my answer was a tad facetious...
I suspect you don't realize just how complicated it is to do what you're asking, especially in an offline environment. Companies like Google and Microsoft have spent millions on research with huge teams of data scientists.
If this is something you are serious about, I'd encourage you to start with a 10×10 grid and work on the logic of getting from "Point A → Point B" once you start adding barriers in random places (thus simulating roads beginning and ending). I was recently surprised by how complicated a seemingly simple, somewhat related Stack Overflow "pipe sizes conversion" question that I answered turned out to be.
If you didn't have the "offline" condition, I would've suggested looking into getting a [free] API key for the Google Web Services Directions API. Basically you tell it where Points A & B are, and it gives you detailed route information, via transit or other methods.
Another of Google's many APIs that could be helpful is the Snap to Roads API, which turns partial or error-ridden paths into drivable ones.
The study of route logic and shortest-path logic is actually really fascinating stuff, and there are some amazing resources to learn about the related theories (see below), including a video from Google explaining how they went about it.
...but unfortunately this isn't something that's going to be accomplished with a simple SQL Query.
Actual Resources:
YouTube : Google Tech Talk: Fast Route Planning
Wikipedia : Shortest path problem
Wikipedia : Dijkstra's algorithm
...and some slightly Lighter Reading:
Wikipedia : Travelling Salesman Problem
Why UPS drivers don’t turn left and you probably shouldn’t either
Google Web Services : Directions API Developer's Guide
Stack Overflow : Shortest Route of converters between two different pipe sizes?
Wikipedia : Seven Bridges of Königsberg
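That said, the narrower sub-problem of listing direct (no-transfer) trips from A to B after a given departure time can be handled with a single self-join on stop_times. A rough sketch with sqlite3; the stop_times columns are from the GTFS specification, while the database path and stop IDs are placeholders:

import sqlite3

# 'gtfs.sqlite' is a placeholder path to a database loaded with the GTFS tables
conn = sqlite3.connect('gtfs.sqlite')

DIRECT_TRIPS = """
    SELECT a.trip_id, a.departure_time, b.arrival_time
    FROM stop_times AS a
    JOIN stop_times AS b ON a.trip_id = b.trip_id
    WHERE a.stop_id = ?                        -- origin stop A
      AND b.stop_id = ?                        -- destination stop B
      AND a.stop_sequence < b.stop_sequence    -- A is served before B on the trip
      AND a.departure_time >= ?                -- leave after the requested time
    ORDER BY a.departure_time
"""

for trip_id, dep, arr in conn.execute(DIRECT_TRIPS, ('STOP_A', 'STOP_B', '08:00:00')):
    print(trip_id, dep, arr)

Anything involving transfers, however, quickly turns into the graph-search problem described above.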
I want to match my user to a different user in his/her community every day. Currently, I use code like this:
@matched_user = User.near(@user).order("RANDOM()").first
But I want to have a different @matched_user on a daily basis. I haven't been able to find anything on Stack Overflow or in the APIs that has given me insight into how to do it. I feel it should be simpler than having to resort to a rake task with cron. (I'm on Postgres.)
Whenever I find myself hankering for shared 'memory' or transient state, I think to myself "this is what (distributed) caches were invented for".
@matched_user = Rails.cache.fetch(@user.cache_key + '/daily_match', expires_in: 1.day) {
  User.near(@user).order("RANDOM()").first
}
NOTE: While specifying a TTL for a cache entry tells Rails/the cache system to try to keep that value for the given timeframe, there's NO guarantee that it will. In particular, a cache that aggressively tries to reclaim memory may expire an entry well before its desired expires_in time.
For this particular use case that shouldn't be a big deal, but in cases where the business/domain logic demands periodically generated values that are durable, you really have to persist them in your database.
How about using PostgreSQL's SETSEED function? I used the date as the seed so that the seed changes every day but stays consistent within a day:
User.connection.execute "SELECT SETSEED(#{Date.today.strftime("%y%d%m").to_i/1000000.0})"
@matched_user = User.near(@user).order("RANDOM()").first
You may want to re-seed with a random value after using this so that any future calls to RANDOM() aren't biased:
random = User.connection.execute("SELECT RANDOM()").to_a.first["random"]
# Same code as above:
User.connection.execute "SELECT SETSEED(#{Date.today.strftime("%y%d%m").to_i/1000000.0})"
@matched_user = User.near(@user).order("RANDOM()").first
# Use random value before seed to make new seed:
User.connection.execute "SELECT SETSEED(#{random})"
I have split these steps into different sections just for readability; you can optimise the query later.
1) Find all user records created before the start of today, so that the count stays frozen for the day.
usrs_till_today_morning = User.where("created_at <?", DateTime.now.in_time_zone(Time.zone).beginning_of_day)
2) Pluck all IDs.
user_ids = usrs_till_today_morning.pluck(:id)
3) Today's day of the month: it will be in the range (1..31) but will remain constant throughout the day.
day_today = Time.now.day
4) Select the same ID for the day
todays_user_id = user_ids[day_today % user_ids.count]
@matched_user = User.find(todays_user_id)
So it will give you a different user record each day while keeping the same record throughout the day!