Solving VRPTW using OptaPlanner

I have tried to run the example https://github.com/droolsjbpm/optaplanner/tree/master/optaplanner-examples/src/main/java/org/optaplanner/examples/vehiclerouting
as described here:
http://docs.jboss.org/optaplanner/release/6.3.0.Final/optaplanner-docs/html_single/index.html#downloadAndRunTheExamples with the data set cvrptw-25customers.xml. When I changed readyTime and dueTime for some customers, it didn't result in any change in the score. It looks like the program doesn't take time windows into account at all. Should I change something in the Java classes? My goal is to get the time needed to drive to all customers, taking all time windows into account.

This should work; I've done it several times myself. Potential causes:
Did you reload the XML file after editing it? Try changing the VehicleRoutingPanel code so it's obvious which customer you changed and whether the values actually changed. Compare screenshots from before and after your change (or duplicate the XML file so you keep the original).
Changing some readyTime and dueTime values doesn't necessarily change the score. Try making very small time windows at inconvenient times; that should definitely impact the score and make it worse.
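For reference, the time-window rule amounts to a check like the one below; a minimal sketch in plain Java, where this simplified Customer stands in for the example's TimeWindowedCustomer (the real example evaluates this through its score rules, not a hand-rolled loop):

import java.util.List;

// Simplified stand-in for the example's TimeWindowedCustomer.
class Customer {
    Long arrivalTime; // computed by the solver; null while unassigned
    long readyTime;   // window opens
    long dueTime;     // window closes
}

public class TimeWindowCheck {
    // Sums the hard penalty over every time-window violation.
    static long hardScore(List<Customer> customers) {
        long hard = 0;
        for (Customer c : customers) {
            if (c.arrivalTime != null && c.arrivalTime > c.dueTime) {
                // Arriving after dueTime violates the window: hard penalty.
                hard -= (c.arrivalTime - c.dueTime);
            }
        }
        return hard;
    }
}

If tightening a window never moves the hard score at all, that suggests the time-windowed data wasn't actually (re)loaded, which points back to the first cause above.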

Related

Optaplanner: How to calculate score delta for move

I'm using Optaplanner to automatically solve school timetables. After a timetable has been solved the user will manually change some lessons and will get feedback on how this affects the score via the following call:
scoreManager.updateScore(timetable);
This call takes some 200ms and will, I assume, do a complete evaluation. I'm trying to optimize this and want to pass in only a Move object so that OptaPlanner only has to recalculate the delta, like:
scoreManager.updateScore(previousTimetable,changeMove);
Is there a way to do this?
There really is no way to do just a single move. You don't do moves; the solver does moves. You can only make external problem changes to the solution. Look into the ProblemChange interface and its use in the SolverManager.
However, the problem change will likely reset the entire working solution anyway. And after the external change is done, you're not guaranteed that the solution will still make sense. (What if it breaks hard constraints now?) You simply need to expect and account for the fact that, after users submit their changes, the solver will need to run; possibly even for a prolonged period of time.
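A minimal sketch of that approach, assuming a recent OptaPlanner with SolverManager, a timetable model with Lesson and Timeslot classes, and a planning variable named "timeslot" (all model names here are placeholders for your own):

import java.util.UUID;
import org.optaplanner.core.api.solver.SolverManager;

public class TimetableChanges {
    // Submits the user's manual edit as a ProblemChange; the solver applies it
    // to its working solution and continues solving from there.
    static void moveLesson(SolverManager<Timetable, UUID> solverManager,
                           UUID problemId, Long lessonId, Timeslot newTimeslot) {
        solverManager.addProblemChange(problemId, (workingTimetable, director) -> {
            Lesson lesson = workingTimetable.getLessonList().stream()
                    .filter(l -> l.getId().equals(lessonId))
                    .findFirst().orElseThrow();
            // Route the change through the director so the solver is notified
            // and can keep its score calculation incremental internally.
            director.changeVariable(lesson, "timeslot",
                    l -> l.setTimeslot(newTimeslot));
        });
    }
}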

Is there a way to save all feasible scores found?

I'm building a student schedule generator and I need a way of producing more than one solution. Is there some way to save off feasible scores or scores of Xhard/Ysoft?
I need to be able to output more than one potential schedule, so that the student can choose one schedule over another if, for whatever reason, they don't want the "best" schedule (maybe they don't like one of the professors, maybe they don't want an 8 am class, whatever).
My original idea was to save off all feasible solutions using the bestSolutionChanged event listener. The problem with this is that once it finds a 0hard/0soft score, it ignores all scores after that, including scores that are equal.
Ideally I'd like to save off all scores of 0hard/-3soft or better, but just being able to save any feasible scores or force optaplanner to look for a new best score would be useful as well.
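For reference, the listener idea looks roughly like this; a sketch assuming a recent OptaPlanner where Solver is generic and the Schedule solution uses a score type with isFeasible(), such as HardSoftScore (Schedule and its getters are placeholders):

import java.util.ArrayList;
import java.util.List;
import org.optaplanner.core.api.solver.Solver;

public class FeasibleSolutionCollector {
    // Collects every new best solution whose score is feasible. As noted above,
    // a solution that merely *equals* the current best never fires the event.
    static List<Schedule> collect(Solver<Schedule> solver, Schedule problem) {
        List<Schedule> feasibleSolutions = new ArrayList<>();
        solver.addEventListener(event -> {
            Schedule best = event.getNewBestSolution();
            if (best.getScore() != null && best.getScore().isFeasible()) {
                feasibleSolutions.add(best); // safe to keep: it's a planning clone
            }
        });
        solver.solve(problem);
        return feasibleSolutions;
    }
}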
This is not a solution, but an analysis of the problem:
Hacking the BestSolutionRecaller is obviously not just a big pain, it's also behaviour we don't want to encourage, as it makes upgrading to newer versions an even bigger pain. So don't expect us to solve this by adding an easy way to configure it in the solver config any time soon. That being said, a solution for this common problem is clearly needed.
When a new best solution is found, it is planning cloned (see the docs for a definition) from the working solution (the internal solution in OptaPlanner). This allows us to remember that new best solution as the working solution changes. It also means each BestSolutionChangedEvent gets a planning clone and can safely ship it to another thread, for example to marshal it to a client (presuming any ProblemFactChanges you create make copies instead of altering the originals), without it being corrupted by the solver thread that modifies the working solution.
A new best solution implies that workingScore > bestScore. The moment we instead use workingScore >= bestScore, we need far more planning clones (which are a bit CPU expensive), but we could then send out BestSolutionChangedEvents for those too, if and only if a flag is enabled of course, because most users (unlike yourself) don't want this behaviour.
One proposal is to create a separate BestSolutionChangedOrSameEvent, next to the BestSolutionChangedEvent. This might not be ideal, because we need to be able to detect whether or not someone needs those extra planning clones.
Another proposal is to just have a flag in the <solver> config that switches from > to >= behavior for BestSolutionChangedEvent.
Please create a jira (see "get help" on the webpage) and link it here, or create a support ticket (also see "get help" on the webpage).

How bad is it to have tonnes of unused variables

I am using VBA and have downloaded a tool called MZ-Tools, which helps me find all the unused variables in the code. I have almost 300 objects with roughly 500 lines in each.
Overall it has found almost 500 unused variables/procedures.
Would removing these variables speed up the program a lot, or would it just be a waste of time to clean up code when it doesn't have much effect on the program?
Short answer: It is never a waste of time to clean up code. You or someone else will be very happy when you have to revise it a year later.
Longer answer: The application probably won't speed up a lot; at least you probably will not feel a change. It depends on how heavy the application already is, and on the kind of objects that are created and how big and complex they are. If some of those objects run methods every couple of seconds, for example in a loop, they will affect the performance of the application considerably.
More: Cleaning up your application will give you better performance; whether it is perceptible or not depends on a variety of things. The bigger problem is not knowing whether the unused objects will cause errors in the future. Maybe some of them will be discontinued at some point, or they could cause other kinds of unexpected exceptions. That, I think, is the biggest threat.
Have fun going through the code sooner or later!
Based on your question and comments, my impression is your focus is exclusively on execution speed. If that's all you and the team care about for that project, don't invest any time cleaning up those items because I doubt you will notice any runtime performance improvement.
However, I suggest you look beyond only execution speed. How challenging is this project to debug/troubleshoot for the current maintainer(s)? How difficult to add new features, if needed? How about if someone new has to take over responsibility? How much easier would those tasks be without the distractions of unused variables and procedures?
A related consideration is just how much time we are talking about for that cleanup effort. I wonder whether someone has overestimated the workload.
Make a copy of the db file. From the MZ-Tools code review panel, choose "export" and save the analysis report as a text file. Print the text file. Then move through that printed list, fix each item, and cross it off. If you're really slow, you may only average 2 items per minute, and for 500 items that means 250 minutes. But realistically, the task should take less than 4 hours. Running the MZ-Tools code review again will show you whether you missed anything, and compiling will tell you whether you removed something by mistake.

Cutscene system ruled just by time?

This is the first time I have needed to create a cutscene system. I have read a lot on the web about different ways to accomplish this task and have mixed them with my own ideas. Now it is implementation time, but I need some input from people with more experience than me in this field. Here we go:
1) Some years ago, I implemented a system where actions could be queued serially or in parallel, building a tree of actions that, when executed, created the final result. This could certainly be used as the cutscene director, but wouldn't it be much simpler to just have a timeline with actions run at certain times? An example:
playMp3(0, "Song.mp3)
createObject(0, "Ogre", "Ogre1")
moveObject(1, "Ogre1", Vector3(100,100,1))
This way everything would be really simple to script. Serial actions are supported by spacing them correctly in time, and parallel actions just need to share time ranges.
One problem I have seen is that an action like sync() (which just waits for all previous actions to finish before starting the ones that come afterwards) can't be used, because we're using absolute time to trigger our actions. A solution could be to layer our actions based on the last sync(). I mean something like this:
playMp3(0, "Song.mp3)
createObject(0, "Ogre", "Ogre1")
moveObject(1, "Ogre1", Vector3(100,100,1))
sync()
moveObject(0,....);
createObject(1,....);
As you may notice, times after sync() start again from 0: when a sync() runs and determines that all actions since the last sync() have finished, the timeline's elapsed time resets to 0. This can be seen as little cutscene action groups; a sketch of this execution model follows below.
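To make that concrete, here is a sketch of the execution model I have in mind (plain Java, invented names): each sync-delimited group runs on its own local clock, and the timeline only advances to the next group once every action in the current one has finished:

import java.util.List;

interface Action {
    double startTime();               // seconds, relative to the group's start
    boolean update(double localTime); // returns true once the action has finished
}

public class Timeline {
    private final List<List<Action>> groups; // each inner list ends at a sync()
    private int currentGroup = 0;
    private double elapsed = 0; // resets to 0 whenever a group completes

    Timeline(List<List<Action>> groups) { this.groups = groups; }

    void update(double dt) {
        if (currentGroup >= groups.size()) return; // cutscene finished
        elapsed += dt;
        boolean allDone = true;
        for (Action action : groups.get(currentGroup)) {
            if (elapsed >= action.startTime()) {
                // Started actions report whether they are done yet.
                allDone &= action.update(elapsed - action.startTime());
            } else {
                allDone = false; // this action hasn't even started
            }
        }
        if (allDone) { // implicit sync(): advance and restart the local clock
            currentGroup++;
            elapsed = 0;
        }
    }
}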
2) The previous approach requires all actions to be added at the start of cutscene playback. Is this how it is usually done, or do you usually add actions to the timeline as they are needed?
Well, I could be wrong here, but I think this could be a nice and simple way to lay out the actions for a cutscene. What do you think?
Thanks in advance.
I've done a few of these systems; I'll tell you what I like to use and I hope this will answer your questions.
One of the first cutscene systems I did used a LISP dialect, because it takes just a couple of hours of work to get a parser working. It looked something like this:
(play "song1.mp3")
(set "ogre1" (create "ogre"))
(moveTo "ogre1" '(100, 100, 100))
(wait 1)
I created something like a virtual machine (VM) that processed my scripts. The VM didn't use a separate thread; instead it had an update function that executed X instructions per call, or ran until it hit a synchronization instruction like "wait for 1 sec" (sketched below).
At that time the system had to work on a J2ME device which didn't have an XML parser, and adding one was too much code. These days I use XML for everything except sounds and textures.
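A sketch of that update function (plain Java, invented names): each call executes up to a budget of instructions, and a wait-style instruction blocks further execution until its time has elapsed:

import java.util.List;

interface Instruction {
    // Returns the number of seconds to block for, or 0 to continue immediately.
    double execute();
}

public class ScriptVM {
    private final List<Instruction> program;
    private int pc = 0;            // program counter
    private double blockedFor = 0; // remaining wait time in seconds

    ScriptVM(List<Instruction> program) { this.program = program; }

    // Called once per frame from the game loop; no separate thread required.
    void update(double dt, int instructionBudget) {
        blockedFor -= dt;
        if (blockedFor > 0) return; // still inside a (wait n)
        int executed = 0;
        while (pc < program.size() && executed < instructionBudget) {
            double wait = program.get(pc++).execute();
            executed++;
            if (wait > 0) { // hit a synchronization instruction like (wait 1)
                blockedFor = wait;
                return;
            }
        }
    }
}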
These days I use keyframe systems, as BRPocock suggested. The problem is that these are harder to manage without proper tools. If you are already using some 3D software for your models, I suggest investigating whether you can use that product. I use Blender for cutscenes in personal projects since it's free; at my workplace we use Maya and 3ds Max, but the idea is the same. I export to COLLADA and then I have my animation tracks with keyframes. The catch is that the COLLADA format is not the simplest; it is made to be flexible and requires a decent amount of work to extract what you need from it.
The problem you will have with your format is describing the interpolation: you want to move the ogre from one position to another... how long should that take? The advantage of a keyframe system is that you can specify the position of the ogre over time, but scripting for such a system without a tool will be difficult. Still, here is a simple suggestion for a format:
(play 'song1.mp3')
(entity 'ogre'
(position (
0 (100 100 100)
2 (100 200 100)
5 (100 300 100)
7 (100 300 400)
8 (100 300 500))
(mood (
0 'happy'
7 'angry'))
(... more tracks for that entity if you need ...))
(entity 'human'
(position (.....)))
Now with a format like this you can see where the ogre has to be at any given time. If you are 1.5 seconds into the cutscene, you interpolate between the keyframes at 0 and 2 seconds. Mood is something you don't interpolate; it just switches when the right time comes. I think this format is more flexible and will solve your sync issues, but I wouldn't suggest writing it by hand without tools for big scripts. If your cutscenes are just a few seconds long with a few entities, it may work for you.
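The sampling step for a track like the one above could look like this (plain Java sketch, names invented): find the two surrounding keyframes and linearly interpolate between them; a stepped track like mood would simply return the earlier keyframe's value instead:

public class PositionTrack {
    private final double[] times;       // keyframe times in seconds, ascending
    private final double[][] positions; // one (x, y, z) triple per keyframe

    PositionTrack(double[] times, double[][] positions) {
        this.times = times;
        this.positions = positions;
    }

    // Sample the track at time t: clamp outside the range, lerp inside it.
    double[] sample(double t) {
        if (t <= times[0]) return positions[0];
        int last = times.length - 1;
        if (t >= times[last]) return positions[last];
        int i = 1;
        while (times[i] < t) i++; // first keyframe at or after t
        double alpha = (t - times[i - 1]) / (times[i] - times[i - 1]);
        double[] a = positions[i - 1], b = positions[i], out = new double[3];
        for (int k = 0; k < 3; k++) {
            out[k] = a[k] + alpha * (b[k] - a[k]); // linear interpolation
        }
        return out;
    }
}

For the ogre track above, sample(1.5) falls between the keyframes at 0 and 2 seconds and yields (100, 175, 100).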

Finding unused columns

I'm working with a legacy database which, due to poor management and design, has had a wild growth of columns that either never have been used or are no longer being used.
Is it possible to somehow query for column usage? As in, how often a column is being selected (either specifically or with *) or joined on?
It seems to me like this is something we should be able to retrieve somehow, but I have been unable to find anything like it.
Unfortunately, this analysis on the DB side isn't really going to be a full answer. I've seen a LOT of instances where application code only needed 3 columns of a 10+ column table, but selected them all anyway.
Your column would still show up on a usage report in any sort of trace or profiling you did, but it still may not ACTUALLY be in use.
You might have to either a) analyze the entire collection of apps that use this database or b) start drafting a return-on-investment-style doc on whether it's worth rebuilding.
This article will give you a good idea of how to search all fixed code (procedures, views, functions and triggers) for the columns that are used. The code in the article searches for a specific table/column combination, but you could easily adapt it to run for all columns. For anything dynamically executed, you'd probably have to set up a Profiler trace.
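If this is SQL Server (as the Profiler suggestion implies), the search boils down to scanning module definitions in sys.sql_modules. A rough JDBC sketch; the connection string and column name are placeholders:

import java.sql.*;

public class ColumnUsageSearch {
    public static void main(String[] args) throws SQLException {
        String columnName = "SuspectColumn"; // placeholder
        String sql = "SELECT OBJECT_NAME(object_id) AS module_name"
                   + " FROM sys.sql_modules WHERE definition LIKE ?";
        try (Connection con = DriverManager.getConnection(
                     "jdbc:sqlserver://localhost;databaseName=LegacyDb"); // placeholder
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, "%" + columnName + "%");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // Each hit is a procedure/view/function/trigger whose text
                    // mentions the name (expect false positives from comments
                    // and similarly named columns).
                    System.out.println(rs.getString("module_name"));
                }
            }
        }
    }
}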
Even if you could determine whether a column had been used in the past X period of time, would that be good enough? There may be some obscure program out there that populates a column once a week, a month, a year; or once every time they click the mystery button that no one ever clicks, or to log the report that only Fred in accounting ever runs (he quit two years ago), or that gets logged to if that one rare bug happens (during daylight savings time, perhaps?)
My point is, the only way you can truly be certain that a column is absolutely not used by anything is to review everything -- every call, every line of code, every ad hoc Excel data dump, every possible contingency -- everything that references the database. As this may be all but unachievable, try to get a formally defined group of programs and procedures that must be supported, bend over backwards to make sure they are supported, and be prepared to fix things when some overlooked or forgotten piece of functionality turns up.