Is there a way to access planning variable's assignment during planning?
In my use case, I want a planning variable with a certain status to be assigned only once during planning. After it has been used, I don't want to assign that planning variable again.
I know that in OptaPlanner a planning variable/problem fact cannot change during solving, so I cannot change its status.
Is there a way to get the list of planning variable assignments during planning so that, in Java code or in a Drools file, I can avoid re-assigning a value once it has been used?
Thanks!
Use a hard constraint to enforce that.
Yes, you could use MoveFilters too, but that's not a good idea because sometimes you need to break hard constraints to escape local optima.
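For illustration, with the Constraint Streams API in recent OptaPlanner 8 versions such a hard constraint could look roughly like the sketch below. Assignment, getValue() and isSingleUseStatus() are made-up names standing in for your own model, and a HardSoftScore is assumed; the same rule can equally be written in Drools.

import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;
import org.optaplanner.core.api.score.stream.Constraint;
import org.optaplanner.core.api.score.stream.ConstraintCollectors;
import org.optaplanner.core.api.score.stream.ConstraintFactory;
import org.optaplanner.core.api.score.stream.ConstraintProvider;

public class StatusUsageConstraintProvider implements ConstraintProvider {

    @Override
    public Constraint[] defineConstraints(ConstraintFactory factory) {
        return new Constraint[] { usedAtMostOnce(factory) };
    }

    // Penalize every extra assignment of a value whose status means it may only be used once.
    Constraint usedAtMostOnce(ConstraintFactory factory) {
        return factory.forEach(Assignment.class)
                .filter(a -> a.getValue() != null && a.getValue().isSingleUseStatus())
                .groupBy(Assignment::getValue, ConstraintCollectors.count())
                .filter((value, count) -> count > 1)
                .penalize(HardSoftScore.ONE_HARD, (value, count) -> count - 1)
                .asConstraint("Single-use value assigned more than once");
    }
}

The solver can still break this constraint temporarily while escaping local optima, but any solution that violates it ends up with a negative hard score, so it will never be preferred over a feasible one.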
Related
I'm using OptaPlanner to automatically solve school timetables. After a timetable has been solved, the user will manually change some lessons and will get feedback on how this affects the score via the following call:
scoreManager.updateScore(timetable);
This call takes some 200ms and will, I assume, do a complete evaluation. I'm trying to optimize this and want to pass in only a Move object so that OptaPlanner only has to recalculate the delta, like:
scoreManager.updateScore(previousTimetable,changeMove);
Is there a way to do this?
There really is no way to do just a single move. You don't do moves; the solver does moves. You can only make external problem changes to the solution. You should look into the ProblemChange interface and its use with the SolverManager.
However, the problem change will likely reset the entire working solution anyway. And after the external change is done, you're not guaranteed that the solution will still make sense. (What if it breaks hard constraints now?) You simply need to expect and account for the fact that, after users submit their changes, the solver will need to run; possibly even for a prolonged period of time.
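For example, a user's manual edit can be expressed roughly like the sketch below. Timetable, Lesson, Timeslot and their getters/setters are assumed domain classes in the style of the school-timetabling quickstart; adapt the names to your own model.

import org.optaplanner.core.api.solver.change.ProblemChange;
import org.optaplanner.core.api.solver.change.ProblemChangeDirector;

// Moves one lesson to another timeslot while the solver keeps running.
public class MoveLessonChange implements ProblemChange<Timetable> {

    private final Lesson lesson;        // the lesson the user dragged
    private final Timeslot newTimeslot; // the timeslot the user dropped it on

    public MoveLessonChange(Lesson lesson, Timeslot newTimeslot) {
        this.lesson = lesson;
        this.newTimeslot = newTimeslot;
    }

    @Override
    public void doChange(Timetable workingSolution, ProblemChangeDirector director) {
        // Translate the external objects to their working-solution counterparts first.
        Lesson workingLesson = director.lookUpWorkingObjectOrFail(lesson);
        Timeslot workingTimeslot = director.lookUpWorkingObjectOrFail(newTimeslot);
        director.changeVariable(workingLesson, "timeslot",
                l -> l.setTimeslot(workingTimeslot));
    }
}

// Submitted through the SolverManager; the solver then continues from the changed solution:
// solverManager.addProblemChange(timetableId, new MoveLessonChange(lesson, timeslot));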
So I am looking at modelling an overconstrained routing problem, where not all tasks have to be picked up in a specific planning run. Rather, the objective is to maximise the number of tasks that do get picked up.
I was thinking this should be easy to achieve by allowing the planning variable to be nullable, but it seems that OptaPlanner does not allow this on chained planning variables.
So the workaround I am considering is a dummy/ghost vehicle, with the objective of minimising the tasks assigned to that vehicle. This approach seems to echo what has been said here.
Alternatively, I think I could put the value null in the valueRangeProvider, but I am not sure whether this would work as intended.
Is this a reasonable approach, or are there caveats to it?
null in ValueRangeProvider doesn't work.
The Dummy workaround is very, very common - I did it a few times myself (including for the RH summit demo). But once PLANNER-226 is fixed, we can get rid of that dummy workaround.
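For reference, a rough sketch of that dummy-vehicle idea, expressed with the newer Constraint Streams API: Vehicle, Task, getVehicle(), isVirtual() and the HardSoftScore type are all placeholder assumptions, and in the 6.x/7.x era of this thread the same penalty would typically be written as a Drools rule instead. One extra Vehicle marked as virtual is added to the problem facts; tasks the solver cannot place elsewhere end up chained to it, and a soft penalty keeps that number as low as possible.

import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;
import org.optaplanner.core.api.score.stream.Constraint;
import org.optaplanner.core.api.score.stream.ConstraintFactory;
import org.optaplanner.core.api.score.stream.ConstraintProvider;

public class UnassignedTaskConstraintProvider implements ConstraintProvider {

    @Override
    public Constraint[] defineConstraints(ConstraintFactory factory) {
        return new Constraint[] { minimizeTasksOnDummyVehicle(factory) };
    }

    Constraint minimizeTasksOnDummyVehicle(ConstraintFactory factory) {
        return factory.forEach(Task.class)
                // Task.getVehicle() is the anchor of the chain; isVirtual() marks the dummy vehicle.
                .filter(task -> task.getVehicle() != null && task.getVehicle().isVirtual())
                .penalize(HardSoftScore.ONE_SOFT)
                .asConstraint("Task left on the dummy vehicle");
    }
}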
I'm currently using OptaPlanner 6.2.0 but can update if it can help.
I need to keep some data about the chaining between variables and entities outside the Planner, as well as some precalculated data, and I need a way to update it after each move but before the rules are fired. I use that data when checking the rules. I know it's a hacky approach, but there are complex rules which need separately stored data and I can't think of a way to avoid it. Also, with this approach the Planner has shown good performance.
So I need a way to trigger some code on each move which would update that separately stored data with the changes from the current move, before the rules are applied for this move and the decision about how good the move is gets taken (because the rules read that data), i.e. before the score for this move is recalculated. In the Local Search phase it's pretty simple because all of my moves are custom, so I can do it inside doMove. But I really need to do the same thing in the Construction Heuristic (CH) phase.
I'm aware that we can't change the move type to a custom one in the CH phase. I also find it very difficult to implement a full analogue of the CH phase in a CustomPhaseCommand just for this small change. Removing the CH phase and replacing it with a simple custom phase significantly reduces the quality of the allocation for this specific task, even though I had an example where it worked better without CH at all. I've also tried adding a PhaseLifecycleListener to my Solver, but it only listens to steps and phases, not to individual moves. The closest effort so far was implementing a VariableListener, which indeed fires beforeVariableChanged and afterVariableChanged on each move, even in the CH phase. But in beforeVariableChanged I can only see the old variable value, not the new one (I need both to update my data), and afterVariableChanged seems to fire AFTER the rules were fired and checked, so it's useless for updating the data there. (By the way, how do shadow variables get updated in the CH phase then? I thought the listener should fire afterwards, update the shadow variables there, and then the rules would be re-checked with the updated shadow variables?)
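For reference, my VariableListener attempt looks roughly like the sketch below (OptaPlanner 6.x interfaces and package locations, which differ in later versions; Allocation and ExternalDataStore are placeholders for my own classes).

import org.optaplanner.core.impl.domain.variable.listener.VariableListener;
import org.optaplanner.core.impl.score.director.ScoreDirector;

public class ExternalDataUpdatingVariableListener implements VariableListener<Allocation> {

    // Hypothetical singleton holding the separately stored, precalculated data.
    private final ExternalDataStore externalData = ExternalDataStore.getInstance();

    @Override
    public void beforeVariableChanged(ScoreDirector scoreDirector, Allocation allocation) {
        // The planning variable still holds its previous value at this point.
        externalData.retract(allocation, allocation.getPreviousStandstill());
    }

    @Override
    public void afterVariableChanged(ScoreDirector scoreDirector, Allocation allocation) {
        // The planning variable now holds its new value.
        externalData.insert(allocation, allocation.getPreviousStandstill());
    }

    @Override public void beforeEntityAdded(ScoreDirector scoreDirector, Allocation allocation) {}
    @Override public void afterEntityAdded(ScoreDirector scoreDirector, Allocation allocation) {}
    @Override public void beforeEntityRemoved(ScoreDirector scoreDirector, Allocation allocation) {}
    @Override public void afterEntityRemoved(ScoreDirector scoreDirector, Allocation allocation) {}
}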
I also tried to look at the chaining but I'm not sure it can fully cover my complex case with storing data separately and using it in rules...
So is there a way to correctly update my structure at each CH move? Or is there a simple way to handle the complex data dependencies other than moving it to separate structure outside of Planner?
Thanks in advance!
Lately I have needed to do an impact analysis for changing a DB column definition on a widely used table (like PRODUCT, USER, etc.). I find it a very time-consuming, boring and difficult task. I would like to ask if there is any known methodology for doing this?
The question also applies to changes to the application, file system, search engine, etc. At first I thought this kind of functional relationship should be pre-documented or somehow kept track of, but then I realized that everything can change, so it would be impossible to do so.
I don't even know what tags this question should have, please help.
Sorry for my poor English.
Sure. One can, technically at least, know what code touches the DB column (reads or writes it) by determining program slices.
Methodology:
1. Find all SQL code elements in your sources.
2. Determine which ones touch the column in question. (Careful: a SELECT * may touch your column, so you need to know the schema.)
3. Determine which variables read or write that column.
4. Follow those variables wherever they go, and determine the code and variables they affect; follow all those variables too. (This amounts to computing a forward slice.)
5. Likewise, find the sources of the variables used to fill the column; follow them back to their code and sources, and follow those variables too. (This amounts to computing a backward slice.)
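As an illustration of the first two steps only, here is a deliberately naive text-scan sketch: no SQL parsing and no schema awareness, so it only narrows down where the real slicing work has to start. The class name, arguments and regex heuristic are all made up for the example.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Pattern;
import java.util.stream.Stream;

public class SqlColumnUsageScan {

    public static void main(String[] args) throws IOException {
        Path sourceRoot = Path.of(args[0]); // e.g. src/main/java
        String column = args[1];            // e.g. UNIT_PRICE
        // Very loose heuristic: a SELECT/INSERT/UPDATE/DELETE keyword followed, within the
        // same statement-ish region, by the column name.
        Pattern sql = Pattern.compile(
                "(?is)\\b(select|insert|update|delete)\\b[^;\"]*\\b" + Pattern.quote(column) + "\\b");

        try (Stream<Path> files = Files.walk(sourceRoot)) {
            files.filter(p -> p.toString().endsWith(".java") || p.toString().endsWith(".sql"))
                 .forEach(p -> {
                     try {
                         if (sql.matcher(Files.readString(p)).find()) {
                             System.out.println("Candidate: " + p);
                         }
                     } catch (IOException e) {
                         System.err.println("Could not read " + p + ": " + e.getMessage());
                     }
                 });
        }
    }
}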
All the elements of the slice are potentially affecting/affected by a change. There may be conditions in the slice-selected code that are clearly outside the conditions expected by your new use case, and you can eliminate that code from consideration. Everything else in the slices you may have to inspect/modify to make your change.
Now, your change may affect some other code (e.g., a new place that uses the DB column, or code that combines the value from the DB column with some other value). You'll want to inspect upstream and downstream slices of the code you change, too.
You can apply this process for any change you might make to the code base, not just DB columns.
Manually this is not easy to do in a big code base, and it certainly isn't quick. There is some automation for doing this for C and C++ code, but not much for other languages.
You can get a rough approximation by running test cases that involve your desired variable or action and inspecting the test coverage. (Your approximation gets better if you also run test cases you are sure do NOT cover your desired variable or action, and eliminate all the code they cover.)
Ultimately this task cannot be fully automated or reduced to an algorithm; otherwise there would be a tool to preview refactored changes. The better the code was written in the first place, the easier the task.
Let me explain how to reach the answer: isolation is the key. Mapping everything to object properties can help you automate your review.
I can give you an example. If you can manage to map your specific case to the below, it will save your life.
The OR/M change pattern
Like Hibernate or Entity Framework...
A change to a database column can be previewed simply by analysing what code uses a certain object's property. Since all DB columns are mapped to object properties, and assuming no code uses raw SQL, you are good to go for your estimations.
This is a very simple pattern for change management.
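For instance, with a JPA/Hibernate mapping the pattern looks roughly like the sketch below. The Product entity and UNIT_PRICE column are made-up examples, and newer stacks use jakarta.persistence instead of javax.persistence.

import java.math.BigDecimal;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Product {

    @Id
    private Long id;

    // The DB column is only ever accessed through this property, so an IDE
    // "find usages" on getUnitPrice()/setUnitPrice() enumerates the affected code.
    @Column(name = "UNIT_PRICE", precision = 10, scale = 2)
    private BigDecimal unitPrice;

    public BigDecimal getUnitPrice() { return unitPrice; }
    public void setUnitPrice(BigDecimal unitPrice) { this.unitPrice = unitPrice; }
}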
In order to reduce a file system, network or data file issue to the above pattern, you need other software patterns in place. I mean, if you can reduce a complex scenario to a change in your objects' properties, you can leverage your IDE to detect the changes for you, including code that needs a slight modification to compile or needs to be rewritten entirely.
If you want to manage a change in a remote service when you initially write your software, wrap that service in an interface, so you will only have to modify its implementation.
If you want to manage a possible change in a data file format (e.g. length of a field in a positional format, column reordering), write a service that maps that file to an object (for example using the BeanIO parser).
If you want to manage a possible change in file system paths, design your application to use more runtime variables.
If you want to manage a possible change in cryptography algorithms, wrap them in services (e.g. HashService, CryptoService, SignService); a sketch of this follows below.
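For the cryptography case, a minimal sketch of such a wrapper might look like this; the HashService name comes from the list above, while the SHA-256 choice and the implementation class are just illustrative.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public interface HashService {
    byte[] hash(String input);
}

// If the algorithm ever has to change, only this implementation class is touched;
// the rest of the code base keeps depending on HashService.
class Sha256HashService implements HashService {
    @Override
    public byte[] hash(String input) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            return digest.digest(input.getBytes(StandardCharsets.UTF_8));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 not available", e);
        }
    }
}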
If you do the above, your manual requirements review will be easier. The overall task remains manual, but it can be aided by automated tools: you can, for instance, change the name of a class property and see its side effects in the compiler.
Worst case
Obviously, if you need to change the name, type and length of a specific column in a database, in software with plain SQL hardcoded and scattered in multiple places around the code, where many tables have similar column names, and with no project documentation (I did write worst case, right?) across a total of 10000+ classes, you have no other way than to explore your project manually, using find tools but not relying on them.
And if you don't have a test plan, the document from which you can hope to derive a software test suite, it will be time to make one.
Just adding my 2 cents. I'm assuming you're working in a production environment so there's got to be some form of unit tests, integration tests and system tests already written.
If yes, then a good way to validate your changes is to run all these tests again and create any new tests which might be necessary.
And to state the obvious, do not integrate your code changes into the main production code base without running these tests.
Then again, changes which worked fine in a test environment may not work in a production environment.
Have some form of source code management system like Subversion, Git (e.g. GitHub), CVS, etc.
This enables you to roll back your changes.
I am using HP Exstream (formerly Dialogue from Exstream Software) version 5.0.x. It has a feature to define and save boolean expressions as "Rules".
It has been about 6 years since I used this, but does anybody know if you can define a rule in terms of another rule? There is a "VB-like" language in a popup window, so you are not forced to use the and/or, variable-relational expression form, but I don't have documentation handy. :-(
I would like to define a rule, "NotFoo", in terms of "Foo", instead of repeating the inverse of the whole thing. (Yes, that would be ridiculous, but that's probably what I will be forced to do, as in other examples of what I am maintaining.) Actually, nested rules would have many uses, if I can figure out how to do it.
I later found that what one needs to do in this case is create user-defined "functions", which can reference each other (as long as you avoid indirect recursion). Then use the functions to define the "rules" (and don't even bother with "library" rules instead of "inline" rules, most of the time).
I'm late to the question, but since you had to answer it yourself: there is a better way to handle it.
The issue with using functions and testing the result is that there's a good chance that you're going to be adding unnecessary processing because the engine will run through the function every time it's called. Not a big issue with a simple function but it can easily become a problem if the function is complex, especially if it's called in several places.
Depending on the timing of the function (you didn't say whether it is run level, customer level, or specific to particular documents), it's often better to have the function set a User Boolean variable to store the result; then, in your library rules, you can just check the value of the variable without having to run through the function every time.