In my OptaPlanner project I have periods with a fixed duration.
For some of them there is a medium constraint that they should be scheduled in a row, occupying, for example, 5 directly adjacent timeslots.
I want to use Java Constraint Streams but don't manage to define this constraint using the timeslot pattern.
I know that this constraint can be defined using the time-grain pattern, as suggested by https://stackoverflow.com/a/30702865. I have done this and it works. But I want to compare the timeslot pattern to the time-grain pattern because they behave differently when it comes to escaping local maxima. The problem with the time-grain pattern is that those 5 periods could also be scheduled in every possible partition of 5 (e.g. as 2 + 2 + 1).
Does anyone have a hint on how to define the constraint using the timeslot pattern?
You may want to try using the newly added ifExists() building block. Without knowing your actual domain model, I imagine the constraint to look like this:
private Constraint twoConsecutivePeriods(ConstraintFactory constraintFactory) {
    return constraintFactory.from(Period.class)
            // Match when another period exists on the previous day (its day + 1 == this day).
            .ifExists(Period.class, equal(Period::getDay, period -> period.getDay() + 1))
            .penalize("2 consecutive periods", period -> ...);
}
Conversely, ifNotExists() may be used to achieve the opposite. We have examples of both in the Traveling Tournament OptaPlanner example.
Please note that this API is only available from OptaPlanner 7.33.0.Final onward.
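For the opposite direction, here is a minimal hedged sketch of an ifNotExists() constraint (the Period.getDay() accessor and the medium score level are assumptions about your domain, not known API; also note that the last period of a block would legitimately have no successor, so a real constraint would need to account for that):

// Penalize a period that has no period scheduled on the following day.
private Constraint periodWithoutSuccessor(ConstraintFactory constraintFactory) {
    return constraintFactory.from(Period.class)
            .ifNotExists(Period.class,
                    Joiners.equal((Period period) -> period.getDay() + 1, Period::getDay))
            .penalize("Period without successor", HardMediumSoftScore.ONE_MEDIUM);
}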
I solved the problem using ConstraintCollectors.toList() as follows:
factory.from(Period.class)
       .groupBy(Period::getCourse, ConstraintCollectors.toList())
       .penalize(id, score, (course, list) -> dayDistributionPenalize(course, list));
and
public int dayDistributionPenalize(Course course, List<Period> list) {
    var penalize = 0;
    var dayCodes = dayCodesFromPeriodList(list);
    for (var dayCode : dayCodes) {
        if (!getAllowedDayCodes(course).contains(dayCode)) {
            penalize++;
        }
    }
    return penalize;
}
Here getAllowedDayCodes() returns, for example, 0111110000, 0230000000 or 0005000000 etc. from a hash map if a course has to have 5 consecutive periods.
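The two helpers are not shown in the answer; here is a minimal sketch of what dayCodesFromPeriodList() could look like, assuming a 10-day horizon encoded as one digit per day and a zero-based Period.getDay() index (the encoding and all names are assumptions, not the original code):

// Digit i of the code is the number of the course's periods scheduled on
// day i, e.g. "0005000000" = 5 periods on day 3.
private List<String> dayCodesFromPeriodList(List<Period> periods) {
    int[] periodsPerDay = new int[10]; // assumed 10-day horizon, matching the codes above
    for (Period period : periods) {
        periodsPerDay[period.getDay()]++; // assumed zero-based day index
    }
    StringBuilder code = new StringBuilder();
    for (int count : periodsPerDay) {
        code.append(count); // assumes fewer than 10 periods per day
    }
    return List.of(code.toString());
}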
We have a use case where we want to present the user with a human-readable message explaining why an "assignment" was rejected, based on the score of the constraints.
For example, in the CloudBalancing problem with 3 computers (Computer-1, 2, 3) and 1 process (Process-1) we ended up with the below result:
Computer-1 broke a hard constraint (requiredCpu)
Computer-2 lost due to a soft constraint (min cost)
Computer-3 assigned to Process-1 --> (Optimal solution)
We implemented a BestSolutionChanged listener where we used solution.explainScore() to get some info, and enabled DEBUG logging, which gave us OptaPlanner's internal logs for intermediate moves and their scores. But the requirement is to provide some custom human-readable information on why all the non-optimal solutions (Computer-1, Computer-2) were rejected, even if they were infeasible (basically an explanation of the scores of these two solutions).
So we wanted to know how we can achieve the above.
We did not want to rely on listening to the BestSolutionChanged event, as it might not get triggered for other solutions if the LS/CH phase starts with a solution which is already a "best solution" (Computer-3). Is this a valid assumption?
DEBUG logs do provide us with the information, but building a custom message from this log does not seem like a good idea, so we were wondering if there is another listener/OptaPlanner concept which can be used to achieve this.
By "all the non-optimal solutions", do you instead mean a particular non-optimal solution? The search space can get very large very quickly, and OptaPlanner itself probably won't evaluate the majority of those solutions (simply because the search space is so large).
You are correct that the BestSolutionChanged event will not fire again if the problem/solution given to the Solver is already the optimal solution (since, by definition, there are no solutions better than it).
Of particular interest is ScoreManager, which allows you to calculate and explain the score of any problem/solution:
(Examples taken from https://www.optaplanner.org/docs/optaplanner/latest/score-calculation/score-calculation.html#usingScoreCalculationOutsideTheSolver)
To create it and get a ScoreExplanation do:
ScoreManager<CloudBalance, HardSoftScore> scoreManager = ScoreManager.create(solverFactory);
ScoreExplanation<CloudBalance, HardSoftScore> scoreExplanation = scoreManager.explainScore(cloudBalance);
where cloudBalance is the problem/solution you want to explain. With the score explanation you can:
Get the score
HardSoftScore score = scoreExplanation.getScore();
Break down the score by constraint
Collection<ConstraintMatchTotal<HardSoftScore>> constraintMatchTotals =
        scoreExplanation.getConstraintMatchTotalMap().values();
for (ConstraintMatchTotal<HardSoftScore> constraintMatchTotal : constraintMatchTotals) {
    String constraintName = constraintMatchTotal.getConstraintName();
    // The score impact of that constraint
    HardSoftScore totalScore = constraintMatchTotal.getScore();
    for (ConstraintMatch<HardSoftScore> constraintMatch : constraintMatchTotal.getConstraintMatchSet()) {
        List<Object> justificationList = constraintMatch.getJustificationList();
        HardSoftScore score = constraintMatch.getScore();
        ...
    }
}
and get the impact of individual entities and problem facts:
Map<Object, Indictment<HardSoftScore>> indictmentMap = scoreExplanation.getIndictmentMap();
for (CloudProcess process : cloudBalance.getProcessList()) {
    Indictment<HardSoftScore> indictment = indictmentMap.get(process);
    if (indictment == null) {
        continue;
    }
    // The score impact of that planning entity
    HardSoftScore totalScore = indictment.getScore();
    for (ConstraintMatch<HardSoftScore> constraintMatch : indictment.getConstraintMatchSet()) {
        String constraintName = constraintMatch.getConstraintName();
        HardSoftScore score = constraintMatch.getScore();
        ...
    }
}
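Applied to your CloudBalancing example: to explain why a rejected assignment (e.g. Process-1 on Computer-1) scores badly, you could score a modified copy of the solution and build your message from its explanation. A hedged sketch; the cloning step and the setComputer() setter are assumptions about your domain:

// Force Process-1 onto Computer-1 in a copy of the solution, then explain it.
CloudBalance whatIf = ...; // a copy/clone of the current solution
whatIf.getProcessList().get(0).setComputer(computer1);
ScoreExplanation<CloudBalance, HardSoftScore> rejectedExplanation =
        scoreManager.explainScore(whatIf);
// Build the human-readable message from the constraint match totals, as above.
rejectedExplanation.getConstraintMatchTotalMap().values().forEach(total ->
        System.out.println(total.getConstraintName() + ": " + total.getScore()));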
We are trying to put together a proof-of-concept planning constraint solver using OptaPlanner. However, the construction phase seems slow for even a trivial set of constraints, i.e. assign each Task to one User with no overlapping Tasks for that User.
Problem overview:
We are assigning Tasks to Users
A Task can be assigned to only one User
Tasks have a variable length: 1-16 hours
A User can only do one Task at a time
Users have 8 hours per day
We are using the Time Grain pattern - 1 grain = 1 hour.
See constraints configuration below.
This works fine (returns in about 20 seconds) for a small number of Users and Tasks, e.g. 30 Users / 1000 Tasks, but when we start scaling up, the performance rapidly drops off. Simply increasing the number of Users without increasing the number of Tasks (300 Users / 1000 Tasks) increases the solve time to 120 seconds.
But we hope to scale up to 300 Users / 10000 Tasks and incorporate much more elaborate constraints.
Is there a way to optimise the constraints/configuration?
Constraint constraint1 = constraintFactory.forEach(Task.class)
        .filter(st -> st.getUser() == null)
        .penalize("Assign Task", HardSoftLongScore.ONE_HARD);

Constraint constraint2 = constraintFactory.forEach(Task.class)
        .filter(st -> st.getStartDate() == null)
        .penalize("Assign Start Date", HardSoftLongScore.ONE_HARD);

Constraint constraint3 = constraintFactory
        .forEachUniquePair(Task.class,
                equal(Task::getUser),
                overlapping(st -> st.getStartDate().getId(),
                        st -> st.getStartDate().getId() + st.getDurationInHours()))
        .penalizeLong("Crew conflict", HardSoftLongScore.ONE_HARD,
                (st1, st2) -> {
                    // Overlap length = min(end1, end2) - max(start1, start2)
                    int x1 = Math.max(st1.getStartDate().getId(), st2.getStartDate().getId());
                    int x2 = Math.min(st1.getStartDate().getId() + st1.getDurationInHours(),
                            st2.getStartDate().getId() + st2.getDurationInHours());
                    return Math.abs(x2 - x1);
                });
constraint1 and constraint2 seem redundant to me. The Construction Heuristic phase will initialize all planning variables (automatically, without being penalized for not doing so) and Local Search will never set a planning variable to null (unless you're optimizing an over-constrained problem).
You should be able to remove constraint1 and constraint2 without impact on the solution quality.
Other than that, it seems you have two planning variables (Task.user and Task.startDate). By default, in each CH step, both variables of a selected entity are initialized "together". That means OptaPlanner looks for the best initial pair of values for that entity in the Cartesian product of all users and all time grains. This scales poorly.
See the Scaling construction heuristics chapter to learn how to change that default behavior, and for other ways to make Construction Heuristic algorithms scale better.
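For illustration, the relevant solver config could look roughly like this; a hedged sketch based on the queued entity placer example in that chapter, assuming your planning variables are named user and startDate:

<constructionHeuristic>
  <queuedEntityPlacer>
    <entitySelector id="placerEntitySelector">
      <cacheType>PHASE</cacheType>
    </entitySelector>
    <!-- Initialize Task.user first... -->
    <changeMoveSelector>
      <entitySelector mimicSelectorRef="placerEntitySelector"/>
      <valueSelector variableName="user"/>
    </changeMoveSelector>
    <!-- ...then Task.startDate, avoiding the Cartesian product of users and time grains. -->
    <changeMoveSelector>
      <entitySelector mimicSelectorRef="placerEntitySelector"/>
      <valueSelector variableName="startDate"/>
    </changeMoveSelector>
  </queuedEntityPlacer>
</constructionHeuristic>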
Here is a description of the optimization problem I need to solve, but with a small twist. I need to add two constraints:
The first constraint: from each group we want to choose only one product, which means that we can't allow two products from the same group to be in the same basket (i.e. Product11 and Product12 should never be in the same basket).
The second constraint: in the user's basket we only want products from the categories that the user is interested in, i.e. if the user is interested in the category 'Protein', he should never find in his basket a product from the categories 'Carbs' or 'Fat'.
Accordingly, I have changed the OPL code in products.mod:
{string} categories = ...;
{string} groups[categories] = ...;
{string} allGroups = union(c in categories) groups[c];
{string} products[allGroups] = ...;
{string} allProducts = union(g in allGroups) products[g];
float prices[allProducts] = ...;
int Uc[categories] = ...;
float Ug[allGroups] = ...;
float budget = ...;

dvar boolean z[allProducts]; // product out or in?

dexpr int xg[g in allGroups] = sum(p in products[g]) z[p];
dexpr int xc[c in categories] = (1 <= sum(g in groups[c]) xg[g]);

maximize
  sum(c in categories) Uc[c]*xc[c] +
  sum(c in categories) sum(g in groups[c]) Uc[c]*Ug[g]*xg[g];
subject to {
  ctBudget: // first constraint
    sum(p in allProducts) z[p]*prices[p] <= budget;
  ctGroups: // second constraint
    forall(g in allGroups)
      xg[g] == 1;
  ctCategories: // third constraint
    forall(c in categories)
      Uc[c] == xc[c];
}

{string} solution = {p | p in allProducts : z[p] == 1};

execute {
  writeln("xc=", xc);
  writeln("xg=", xg);
  writeln("Solution=", solution);
}
Here is the code for products.dat:
categories={"Carbs","Protein","Fat"};
groups=[{"Meat","Milk"},{"Pasta","Bread"},{"Oil","Butter"}];
products=[
{"Product11","Product12"},{"Product21","Product22","Product23"},
{"Product31","Product32"},{"Product41","Product42"},
{"Product51"},{"Product61","Product62"}];
prices=[1,1,3,3,2,1,2,1,3,1,2,1];
Uc=[1,0,0];
Ug=[0.8,0.2,0.1,1,0.01,0.6];
budget=2;
The result given by IBM Studio is {Product12,Product31}, while the result I want is either {Product11} or {Product12}.
I have also noticed conflicts reported in the Conflicts tab, and relaxations in the Relaxations tab.
So I have five questions:
1. I don't see any conflict between the constraints, because if we choose the product "Product12" (or "Product11") we respect all the constraints, and the budget would be <= 2 because price["Product12"] == 1.
2. I don't understand why the optimizer chose not to respect the last constraint, and to maximize the objective function instead.
3. If the optimizer did not use any relaxation, would this lead to an infeasible model (no solution to the problem)? I don't understand why: for me, choosing only "Product12" (or "Product11") is a perfect solution with no need for any relaxation.
4. How can I oblige the optimizer not to relax the last constraint? (Note that changing the settings file products.ops to relax only labeled constraints, as in the documentation, did not help, as I want to relax only one constraint.)
5. In the documentation about relaxing infeasible models I found this: "Be aware, however, that infeasibility may be the consequence of an error in the modeling of another constraint." Is this my case?
Thank you in advance for the help.
On no. 1+2: you have some things which are not defined in the model... can you say whether allGroups and groups exist separately or are the same, and what the data is for those? You also use products and allProducts; the same question as for groups. Would you paste here the full .mod and .dat which you have run and which produced the relaxed result you showed? Once I can at least reproduce the problem you show, I can start looking at the "why's". :-)
On no. 3: yes, it is supposed to.
On no. 4: the way you can arrive at a non-relaxed model is to remove the naming of the constraints. I.e. EVERY constraint which is named is considered eligible to be relaxed if without relaxation there could be no solution. Every non-named constraint is "hard", i.e. it HAS to be respected and cannot be relaxed. Simply remove or comment out these lines:
ctBudget: // first constraint
ctGroups: // second constraint
ctCategories: // third constraint
if you want all constraints to be respected as they are with the given data...
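The resulting subject to block would then look like this (the same constraints as in the question, only with the labels dropped):

subject to {
  // no labels: all three constraints are now hard and cannot be relaxed
  sum(p in allProducts) z[p]*prices[p] <= budget;
  forall(g in allGroups)
    xg[g] == 1;
  forall(c in categories)
    Uc[c] == xc[c];
}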
I wonder whether it's a good or bad idea to use deepstream record.getList for storing a lot of unique values, for example emails or any other unique identifiers. The main purpose is to be able to answer quickly whether we already have, say, a user with a given email (email in use), or to find a record by a specific unique field.
I made a few experiments today and ran into two problems:
1) When I tried to populate the list with a few thousand values I got

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory

and my deepstream server went down. I was able to fix it by giving the server's Node process more memory with this flag:

--max-old-space-size=5120

It doesn't look fine, but it allowed me to make a list with more than 5000 items.
2) That wasn't enough for my tests, so I pre-created a list with 50000 items, put the data directly into the RethinkDB table, and got another issue when getting or modifying the list:

RangeError: Maximum call stack size exceeded

I was able to fix it with another flag:

--stack-size=20000

It helps, but I believe it's only a matter of time before one of those errors appears in production, once the list reaches the corresponding size. I don't really know whether it's a Node.js, JavaScript, deepstream or RethinkDB issue. All of this made me think that I'm using deepstream Lists the wrong way. Please let me know. Thank you in advance!
Whilst you can use lists to store arrays of strings, they are actually intended as collections of record names: the actual data would be stored in the records themselves, and the list would only manage the order of those records.
Having said that, there are two open GitHub issues to improve performance for very long lists, by sending more efficient deltas and by introducing a pagination option.
Interesting results in regards to memory though; definitely something that needs to be handled more gracefully. In the meantime, you could drastically improve performance by combining updates into one:
var myList = ds.record.getList( 'super-long-list' );

// Sends 10,000 messages
for( var i = 0; i < 10000; i++ ) {
    myList.addEntry( 'something-' + i );
}

// Sends 1 message
var entries = [];
for( var i = 0; i < 10000; i++ ) {
    entries.push( 'something-' + i );
}
myList.setEntries( entries );
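For completeness, the record-name pattern described in the first paragraph could look roughly like this; a hedged sketch where the 'users/...' record path and its fields are made-up examples, not a deepstream requirement:

// Store the actual data in one record per user...
var userRecord = ds.record.getRecord( 'users/john@example.com' );
userRecord.set( { email: 'john@example.com', name: 'John' } );

// ...and keep only the record names in the list.
var userList = ds.record.getList( 'users' );
userList.addEntry( 'users/john@example.com' );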
I have an hstore column that I'm using to build a table in Prawn (PDF builder). The data will consist of records for a given month. Since it is hstore, the keys used will likely change from day to day, so this needs to be dynamic.
I need to determine what unique keys are used that month.
I created a helper to find the unique keys that were used in the month. These will be used as column headers.
keys(@users_logs)
# this returns an array like: ["XC", "PIC", "Mountain"]
The table will display a user's duty-log data for the month. For testing, if I explicitly call known hstore keys, the data displays correctly. But since it's hstore, I won't know what the table columns will be in production.
For testing, I call known hstore keys; this creates the Prawn table row data per duty log:
@users_logs.map do |dutylog|
  [ dutylog.properties["XC"],
    dutylog.properties["PIC"],
    dutylog.properties["Mountain"] ]
end
But, since this is hstore, I won't know what keys to call in production. So I need to make the above iteration dynamic.
I tried, without success, to iterate over each dutylog entry, then iterate over each unique key and output one dutylog.properties[x] call for each key value, but this just outputs the array of key values. I tried using send() in the block, but that didn't help.
@users_logs.map do |dutylog|
  [ keys(@users_logs).each { |k| dutylog.properties[k] }.join(",") ]
end
Any ideas on how I could make the dutylog.properties[k] call dynamic?
Took some head scratching... but it turned out to be quite easy.
This will build the rows for the Prawn table:
def hstore_duty_log_rows
  [keys(@users_logs)] +
    @users_logs.map do |dutylog|
      keys(@users_logs).map { |key| dutylog.properties.keys.include?(key) ? "#{dutylog.properties[key]}" : "0" }
    end
end
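Feeding those rows into a Prawn table could then look roughly like this; a hedged sketch that assumes the prawn-table gem is loaded and that the helper is called somewhere @users_logs is in scope:

pdf = Prawn::Document.new
pdf.table(hstore_duty_log_rows) # first row serves as the header row of unique keys
pdf.render_file("duty_logs.pdf")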