Let's assume a variation on the Nurse Rostering example in which, instead of assigning a nurse to a shift on a day, the nurse is assigned to a variable number of timeblocks on that day (a day consists of 24 timeblocks). E.g.: Nurse1 is assigned to timeblocks [8,9,10,11,12,13,14]. Let's call such a consecutive assignment a ShiftPeriod. There is a hard minimum and maximum on the length of these ShiftPeriods. However, OptaPlanner has difficulty finding a feasible solution.
Given hard consecutiveness constraints, is it better to model the planning entity as a startTimeBlock plus a duration, instead of my current approach of assigning each timeblock and day separately and then imposing min/max consecutive constraints?
Take a look at the Meeting Scheduling example on GitHub master for 6.4.0.Beta1 (the example will work perfectly with 6.3.0.Final too). Video and docs are coming soon. That example uses the TimeGrain design pattern, which I think is what you're looking for.
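A minimal sketch of that modelling idea in plain Python (hypothetical names, not the OptaPlanner API): the entity holds a start block and a duration, so consecutiveness holds by construction and the hard min/max becomes a simple bound on one variable instead of a constraint over many per-timeblock assignments.

```python
MIN_BLOCKS, MAX_BLOCKS = 4, 10  # assumed hard min/max shift length

class ShiftPeriod:
    def __init__(self, nurse, day, start_block, duration):
        self.nurse = nurse
        self.day = day
        self.start_block = start_block  # planning variable: 0..23
        self.duration = duration        # planning variable: MIN_BLOCKS..MAX_BLOCKS

    def blocks(self):
        """Derived list of covered timeblocks; always consecutive."""
        return list(range(self.start_block, self.start_block + self.duration))

    def is_feasible(self):
        """Hard min/max and day boundary become a bound check on two variables."""
        return (MIN_BLOCKS <= self.duration <= MAX_BLOCKS
                and 0 <= self.start_block
                and self.start_block + self.duration <= 24)
```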
Related
I am wondering whether it is possible to create constraints over years and hours at the same time in Pyomo.
For example, my current time variable is:
model.T = pyo.RangeSet(len(hourly_data.index))
However, this does not allow me to distinguish between hours and years. I do have a timestamp variable, that contains the date and the time. So, I thought perhaps I could do:
model.T2 = pyo.Set(initialize=hourly_data.DateTime)
Now the problem is how to manipulate this timestamp object. Consider that the parameters are given and the variables are outputs from the solver. Let's first assume that our objective function is a maximisation. We would like to create the following constraint:
Get the maximum water used. In normal circumstances, to get the maximum water usage over all hours, we can do:
model.c_maxWater = pyo.ConstraintList()
for t in model.T:
    model.c_maxWater.add(model.waterUsage[t] <= model.maxWater)
With a penalty in the objective function associated with model.maxWater. The problem becomes: what if we want to penalise every year differently, because we have different water costs per year? I imagine our constraint would look something like:
model.c_maxWater = pyo.ConstraintList()
for t in model.T2:
    model.c_maxWater.add(model.waterUsage[t] <= model.maxWater[y])
My problem is: how can I associate the t variable with a certain year y? One index is hourly (t) and the other is annual (y).
Note: a multi-index set is possible, but how would I deal with leap years etc.? Can a multi-index set have hourly dimensions of different lengths for leap years?
You could double-index your variables and data with [hour, year], but that would be redundant information, right? You should be able to calculate the year from the hour (with or without some initial offset for the yearly rollover, if that is important).
I would go about this by making subsets of your time index associated with years. Do this outside of Pyomo using set/list comprehensions, a little math, and/or some of the functionality in datetime if (as you say) you are spanning many years, leap years, etc. [Aside: an hourly model that spans years will probably collapse under its own weight, but that is a secondary issue. :) ] Then you can use these subsets to build constraints in your model without muddying things up with extra indices.
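A sketch of that subset-building step, with an assumed 96-hour horizon spanning a year rollover (the real horizon would come from your hourly_data index):

```python
from datetime import datetime, timedelta
from collections import defaultdict

# Hypothetical hourly horizon spanning a year rollover (2023-12-30 .. 2024-01-02).
start = datetime(2023, 12, 30)
timestamps = [start + timedelta(hours=h) for h in range(96)]

# Map each year to the 1-based hour indices (matching pyo.RangeSet) in that year.
# datetime handles leap years for free, so no special casing is needed.
hours_by_year = defaultdict(list)
for i, ts in enumerate(timestamps, start=1):
    hours_by_year[ts.year].append(i)

# Sketch of how these subsets would then index the constraints from the
# question (model, waterUsage, maxWater as defined there; not executed here):
# for y, hours in hours_by_year.items():
#     for t in hours:
#         model.c_maxWater.add(model.waterUsage[t] <= model.maxWater[y])
```

This keeps the model single-indexed by hour; the year only appears in which subset a constraint is built from.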
Comment back if stuck...
I am reviewing the time schedule example to assign times and a room to a specific lesson. However, the example assumes that each class is 1 hour. Where can I find more information in OptaPlanner if I decide to change the hour to a predefined duration, such as 1 hour or 2.5 hours, depending on the lesson? Thanks!
Take a look at the Conference Scheduling example in optaplanner-examples (!= optaplanner-quickstarts).
In that case a Talk (= Lesson) is assigned to a Timeslot and a Room, but each talk has a talkType (for example "conference talk", "lab", "deep dive", "lightning talk", etc.) and each timeslot has a requiredTalkType.
For performance reasons, it uses "value range from entity" (instead of "value range from solution"), so that requiredTalkType becomes a built-in hard constraint instead of an actual hard constraint.
Also read the docs section Assigning to time in the Design Patterns chapter. This is just the tip of the iceberg: there are other patterns too. Conference scheduling still uses the "timeslot pattern", because the "timegrain pattern" is often slow.
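A hedged sketch of the idea (plain Python, not the OptaPlanner API; all names assumed): predefine timeslots of each needed length and tag both slots and lessons with a type, so a lesson's value range only contains slots of a matching duration.

```python
timeslots = [
    {"start": "09:00", "hours": 1.0, "type": "lecture"},
    {"start": "10:00", "hours": 2.5, "type": "lab"},
]
lessons = [
    {"name": "Algebra", "type": "lecture"},
    {"name": "Chemistry practical", "type": "lab"},
]

def allowed_slots(lesson, slots):
    """Per-lesson value range: only slots whose type (and thus duration) matches."""
    return [s for s in slots if s["type"] == lesson["type"]]
```

In OptaPlanner terms, allowed_slots plays the role of the "value range from entity" the answer mentions.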
I am applying the OptaPlanner VRP example with time windows, and I get feasible solutions whenever I define time windows within a 24-hour range (00:00 to 23:59). But I need to:
Manage long trips, where I know that the duration between leaving the depot and the first visit, or between visits, will be more than 24 hours. Currently it does not give me workable solutions, because the time window format is 24-hour. When applying the scoring rule "arrivalAfterDueTime", the "arrivalTime" is always higher than the "dueTime", because the "dueTime" is in the range 00:00 to 23:59 while the "arrivalTime" falls on the next day.
I have thought that I should take each TW of each Customer and add more TW to it, one for each day that is planned.
Example, if I am planning a trip for 3 days, then I would have 3 time windows in each Customer. Something like this: if Customer 1 is available from [08:00-10:00], then say it will also be available from [32:00-34:00] and [56:00-58:00] which are the equivalent of the same TW for the following days.
Likewise, I handle the times as longs, converted to milliseconds.
I don't know if this is the right way; my question is more about ideas for approaching this constraint. Maybe you have faced a similar problem, and any idea would be very appreciated.
Sorry for the wording, I am a Spanish speaker. Thank you.
Without having checked the example, handling multiple days shouldn't be complicated. It all depends on how you model your time variable.
For example, you could:
model the timestamps as a long value denoting seconds since epoch. This is how most of the examples are modeled, if I remember correctly. Note that this is not very human-readable, but it is the fastest to compute with
you could use a time data type, e.g. LocalTime; this is a human-readable time format, but it only works within the 24-hour range and will be slower than a primitive data type
you could use a date-time data type, e.g. LocalDateTime; this is also human-readable and works over any time range, but will also be slower than a primitive data type.
I would strongly encourage you not to simply map the current day or current hour to a zero value and start counting from there. In your example you denote the times as [32:00-34:00], which makes it appear as if you are using the current day's midnight as the 0th hour and counting from there. While you can do this, it will hurt the debuggability and maintainability of your code. That is just my general advice; you don't have to follow it.
What I would advise is to have your own domain model and map it to OptaPlanner models, where you use a long value for any timestamp, denoted as seconds since epoch.
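A sketch of that mapping (the anchor date and 3-day horizon are assumed, mirroring the question's example): once every time lives on one absolute axis of seconds since epoch, a window on day 2 and an arrival on day 2 compare correctly, with no 24-hour wrap-around.

```python
from datetime import datetime, timedelta, timezone

# Assumed anchor: midnight of the first planned day.
anchor = datetime(2024, 1, 1, tzinfo=timezone.utc)

def to_epoch_seconds(day_offset, hour, minute=0):
    """Absolute seconds since epoch for a clock time on day `day_offset`."""
    dt = anchor.replace(hour=hour, minute=minute) + timedelta(days=day_offset)
    return int(dt.timestamp())

# Customer 1 available 08:00-10:00 on each of the 3 planned days:
windows = [(to_epoch_seconds(d, 8), to_epoch_seconds(d, 10)) for d in range(3)]

# An arrival at 09:00 on day 2 now falls inside day 2's window, instead of
# always triggering "arrivalAfterDueTime" as with wrap-around clock times.
arrival = to_epoch_seconds(1, 9)
```

The same arithmetic works in Java with long values; only the conversion helpers differ.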
I was wondering if anyone has code for a BUGS/JAGS model for a repeated measures ANOVA? Basically, I have a response (y) that I want to model against Time of day, Day, and Treatment. I would also like to include two interaction terms, Treatment x Time of Day and Treatment x Day. There are about 20 individuals in the study, who were measured 4 times per day over about 1 week. I'm not entirely sure where to start, and I'm concerned that the Time of day covariate should also be nested within the Day covariate? If anyone has code for the likelihood portion of the BUGS/JAGS model, it would be greatly appreciated. I can take care of priors. Just can't seem to get off the ground with this one.
There are a few ambiguities in your question.
Do you want Time of Day and Day to enter as continuous covariates or as discrete factors?
Do you want individual identity to enter the model as a fixed or random effect?
If either Day or Time of Day is a factor, do you want to include it as a fixed or random effect?
You ask about whether Time of Day should be nested within Day. This is impossible to answer without knowing more about your data and your aims.
Here's an example of code that assumes that you want to treat individuals as a random effect.
Also assumed: Treatment, Time.of.day, and Day have constant slopes across all individuals. It would be straightforward to extend this model to a fixed- or random-slopes model where different individuals get separate modeled slopes. For example, for a random-slopes model, you'd just modify the beta parameters below to treat them in a manner similar to the alpha parameter.
Following the OP's request, this is the likelihood portion only, and does not include the priors.
for(i in 1:n.observations){
    # individual[i] contains the numerical index of the individual that
    # corresponds to observation i.
    y[i] ~ dnorm(alpha[individual[i]] + beta1*Day[i] + beta2*Time.of.day[i] +
                 beta3*Treatment[i] + beta4*Treatment[i]*Day[i] +
                 beta5*Treatment[i]*Time.of.day[i], tau.obs)
}
for(j in 1:n.individuals){
    alpha[j] ~ dnorm(mu, tau)
}
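For completeness, a hedged sketch of the random-slopes extension mentioned above (JAGS, likelihood only; mu.beta1 and tau.beta1 are hypothetical hyperparameter names that would need priors): beta1 becomes a per-individual vector drawn from a common distribution, just like alpha.

```jags
for(i in 1:n.observations){
    y[i] ~ dnorm(alpha[individual[i]] + beta1[individual[i]]*Day[i] +
                 beta2*Time.of.day[i] + beta3*Treatment[i] +
                 beta4*Treatment[i]*Day[i] + beta5*Treatment[i]*Time.of.day[i],
                 tau.obs)
}
for(j in 1:n.individuals){
    alpha[j] ~ dnorm(mu, tau)
    beta1[j] ~ dnorm(mu.beta1, tau.beta1)  # per-individual slope for Day
}
```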
Trying to generate an employee roster, but down at the level of hours. Resource requirements per hour are like: 1 at 8am, 2 at 9am, 7 at 1pm, etc.
After assigning the first 3 resources, the solver keeps exploring solutions around them, assigning/reassigning them to slots without trying to assign other employees.
How do I troubleshoot this problem? Could it be the weights for each constraint/violation? Would it speed things up if I implemented a quick construction heuristic that fills the slots before handing over to local search?
The current configuration consists of First Fit as the construction heuristic, then Hill Climbing as the first local search phase until it gets stuck, then Tabu Search with Simulated Annealing.
Normally a CH assigns all employees before LS starts; LS then just moves them around but never leaves anyone unassigned. See the general phase sequence diagram in the "Optimization Algorithms" chapter of the docs. That's presuming you don't apply overconstrained planning (nullable=true or a null in the value range).
If you do apply overconstrained planning, then you need to make sure that the score cost of leaving an employee unassigned is worse than the score cost of any way he could be assigned.
Also set up a benchmarker config, so you have some benchmark report graphs to allow you to understand what's going on.