I have 2 constraints:
one HARD (score 100)
one SOFT (score 100)
When I run the solver, it seems to only work on the hard constraint and ignores the soft one. I get no soft score; OptaPlanner returns 0Hard/0Medium/0Soft.
My HARD constraint is: a worker can't work more than 5 days in a row.
My SOFT constraint is: schedule as many working days as possible (measured in hours of work).
My test covers two weeks. A worker needs to be above 66 hours of work; if he is under that, we penalize more.
Here are the two constraints in Java:
the SOFT one:
private Constraint averageWorkingHours(ConstraintFactory constraintFactory) {
    return constraintFactory
            .from(WorkingDay.class)
            .filter((wd) -> wd.isFreeToWork())
            .groupBy(WorkingDay::getAgent,
                    WorkingDay::hasBreakDayBySolver,
                    // hasBreakDayBySolver returns whether the solver has put my
                    // @PlanningVariable (a breakDay) in the WorkingDay
                    ConstraintCollectors.count())
            .filter((agent, hasBreakDayBySolver, count) -> {
                return !hasBreakDayBySolver;
            })
            .penalizeConfigurable(AVERAGE_HOURS_WORKING, ((agent, hasBreakDayBySolver, count) -> {
                // a worker needs to be above 66 hours of work for 2 weeks
                // we penalize more if a worker is under the average of working hours wanted for two weeks (66)
                if (count * 7 < 66) { // count * hours worked for one day
                    return (66 - count * 7) * 2;
                } else {
                    return count * 7 - 66;
                }
            }));
}
the HARD one:
private Constraint fiveConsecutiveWorkingDaysMax(ConstraintFactory constraintFactory) {
    return constraintFactory
            .from(WorkingDay.class)
            .filter(WorkingDay::hasWork)
            .join(constraintFactory.from(WorkingDay.class)
                            .filter(WorkingDay::hasWork),
                    Joiners.equal(WorkingDay::getAgent),
                    Joiners.greaterThan(wd -> wd.getDayJava()),
                    Joiners.filtering((wd1, wd2) -> {
                        LocalDate fourDaysBefore = wd1.getDayJava().minusDays(4);
                        Boolean wd2isAfterFourDaysBeforeWd1 = wd2.getDayJava().compareTo(fourDaysBefore) >= 0;
                        return wd2isAfterFourDaysBeforeWd1;
                    })
            )
            .groupBy((wd1, wd2) -> wd2, ConstraintCollectors.countBi())
            .filter((wd2, count) -> count >= 4)
            .penalizeConfigurable(FIVE_CONSECUTIVE_WORKING_DAYS_MAX, ((wd2, count) -> count - 3));
}
I hope my explanations are clear.
Thanks!
Unit test your constraints, using a ConstraintVerifier. See also this short video by Lukas.
Also verify that the @ConstraintWeight for that soft constraint in your dataset isn't zero.
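For example, a test for the soft constraint could look roughly like the sketch below. It assumes your ConstraintProvider is called MyConstraintProvider, your @PlanningSolution class is MySolution and WorkingDay is the planning entity; those names and the test data are placeholders you would need to adapt to your model.

import org.junit.jupiter.api.Test;
import org.optaplanner.test.api.score.stream.ConstraintVerifier;

class MyConstraintProviderTest {

    private final ConstraintVerifier<MyConstraintProvider, MySolution> constraintVerifier =
            ConstraintVerifier.build(new MyConstraintProvider(), MySolution.class, WorkingDay.class);

    @Test
    void averageWorkingHoursPenalizesAgentUnder66Hours() {
        // Two working days for the same agent, free to work, with no break day assigned
        // by the solver, so the soft constraint should match for that agent.
        WorkingDay day1 = ...; // build your own test data here
        WorkingDay day2 = ...;

        constraintVerifier.verifyThat(MyConstraintProvider::averageWorkingHours)
                .given(day1, day2)
                // 2 days * 7 hours = 14 < 66, so the expected match weight is (66 - 14) * 2 = 104
                .penalizesBy(104);
    }
}

If such a test reports no penalty, the constraint itself is the problem; if it passes, look at the dataset (for example a zero @ConstraintWeight) or at the solver configuration.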
I have the following timelines: two open ranges, 7 a.m.-12 p.m. and 2 p.m.-10 p.m., with closed sub-ranges 10-11 a.m. and 3-5 p.m.:

7 a.m. --------------------- 12 p.m.    2 p.m. .................. 10 p.m.
          10-------11                            3------5
           closed                                 closed

The output should be the non-intersecting time ranges:
7-10 a.m., 11 a.m.-12 p.m., 2-3 p.m., 5-10 p.m.
I tried the minus and subtract methods for ranges, but it didn't work.
A tricky part could be the following case:

7 a.m. --------------------- 12 p.m.    2 p.m. .................. 10 p.m.
          10----------------------------------------5
                           closed

The output should be the non-intersecting time ranges:
7-10 a.m., 5-10 p.m.
Any idea for a Kotlin implementation?
Sounds like a pretty common case and I suspect there are some existing algorithms for it, but nothing comes to the top of my head.
My idea is to first transform both lists of ranges into a single list of opening/closing "events", ordered by time. The start of an opening range increases the "openness" by +1 while its end decreases it (-1). The start of a closing range also decreases the "openness" while its end increases it. Then we iterate over the events in time order, keeping track of the current "openness" level. Whenever the "openness" level is 1, that means we are in the middle of an opening range, but not inside a closing range, so we are entirely open.
Assuming both lists of ranges are initially properly ordered, as in your example, I believe it should be doable in linear time and even without this intermediary list of events. However, such an implementation would be pretty complicated to cover all possible states, so I decided to go with a simpler solution which is, I believe, O(n * log(n)). Also, this implementation requires that opening ranges do not overlap with each other, and the same for closing ranges:
fun main() {
    // your first example
    println(listOf(Range(7, 12), Range(14, 22)) - listOf(Range(10, 11), Range(15, 17)))
    // second example
    println(listOf(Range(7, 12), Range(14, 22)) - listOf(Range(10, 17)))
    // two closed ranges "touch" each other
    println(listOf(Range(8, 16)) - listOf(Range(10, 11), Range(11, 13)))
    // both an open and a closed range start at the same time
    println(listOf(Range(8, 16)) - listOf(Range(8, 12)))
}

data class Range(val start: Int, val end: Int)

operator fun List<Range>.minus(other: List<Range>): List<Range> {
    // key is the time, value is the change of "openness" at this time
    val events = sortedMapOf<Int, Int>()
    forEach { (start, end) ->
        events.merge(start, 1, Int::plus)
        events.merge(end, -1, Int::plus)
    }
    other.forEach { (start, end) ->
        events.merge(start, -1, Int::plus)
        events.merge(end, 1, Int::plus)
    }

    val result = mutableListOf<Range>()
    var currOpenness = 0
    var currStart = 0
    for ((time, change) in events) {
        // we were open and are now closing
        if (currOpenness == 1 && change < 0) {
            result += Range(currStart, time)
        }
        currOpenness += change
        // we were closed and are now opening
        if (currOpenness == 1 && change > 0) {
            currStart = time
        }
    }
    return result
}
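For the first example this should print something like [Range(start=7, end=10), Range(start=11, end=12), Range(start=14, end=15), Range(start=17, end=22)], i.e. the 7-10 a.m., 11 a.m.-12 p.m., 2-3 p.m. and 5-10 p.m. ranges from the question, with hours encoded as plain 0-23 integers.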
On the Android Studio emulator, the user is required to enter a maximum of 10 numbers. When I put in the number 1, the output shows 0 instead of 1 (this is for the min number; the max works perfectly fine). Can anyone please assist me with this problem? I tried using minOf() and max(), but nothing worked. Below is a snippet of my source code:
val arrX = Array(10) { 0 }
.
.
.
.
findMinAndMaxButton.setOnClickListener {
    fun getMin(arrX: Array<Int>): Int {
        var min = Int.MAX_VALUE
        for (i in arrX) {
            min = min.coerceAtMost(i)
        }
        return min
    }

    fun getMax(arrX: Array<Int>): Int {
        var max = Int.MIN_VALUE
        for (i in arrX) {
            max = max.coerceAtLeast(i)
        }
        return max
    }

    output.text = "The Min is " + getMin(arrX) + " and the Max is " + getMax(arrX)
}
}
}
Is there anything that can be done to get this to work?
You're initialising arrX to a bunch of zeroes, and 0.coerceAtMost(someLargerNumber) will always stick at 0.
Without seeing how you set the user's numbers it's hard to say what you need to do - but since you said the user enters a maximum of 10 numbers, at a guess there are some gaps in your array, i.e. indices that are still set to 0. If so, they're going to be counted in your min calculation.
You should probably use null as your default value instead - that way you can just ignore those in your calculations:
val items = arrayOfNulls<Int?>(10)
// this results in null, because there are no values - handle that however you like
println(items.filterNotNull().minOrNull())
>> null
// set values on some of the indices
(3..5).forEach { items[it] = it }
// now this prints 3, because that's the smallest of the numbers that -do- exist
println(items.filterNotNull().minOrNull())
>> 3
I am trying to apply OptaPlanner to my project, for example picking-order path calculation. When there are many order items, the calculation is very slow and I want to know how to improve the calculation speed. When there are more than 200 order items the calculation is particularly slow, and I only add constraints. I defined a SelectionSorterWeightFactory, but when I debug it doesn't seem to be used.
private Constraint requiredNumberOfBuckets(ConstraintFactory constraintFactory) {
    return constraintFactory
            .forEach(TrolleyStep.class)
            // raw total volume per order
            .groupBy(
                    trolleyStep -> trolleyStep.getTrolley(),
                    trolleyStep -> trolleyStep.getOrderItem().getOrderCode(),
                    sum(trolleyStep -> trolleyStep.getOrderItem().getProduct().getCapacity()))
            // required buckets per order
            .groupBy(
                    (trolley, order, orderTotalVolume) -> trolley,
                    (trolley, order, orderTotalVolume) -> order,
                    sum(
                            (trolley, order, orderTotalVolume) ->
                                    calculateOrderRequiredBuckets(orderTotalVolume, trolley.getBucketCapacity())))
            // required buckets per trolley
            .groupBy(
                    (trolley, order, orderTotalBuckets) -> trolley,
                    sum((trolley, order, orderTotalBuckets) -> orderTotalBuckets))
            // penalization if the trolley doesn't have enough buckets to hold the orders
            .filter((trolley, trolleyTotalBuckets) -> trolley.getBucketNum() < trolleyTotalBuckets)
            .penalize(
                    "Required number of buckets",
                    HardSoftLongScore.ONE_HARD,
                    (trolley, trolleyTotalBuckets) -> trolleyTotalBuckets - trolley.getBucketNum());
}

private Constraint minimizeOrderSplitByTrolley(ConstraintFactory constraintFactory) {
    return constraintFactory
            .forEach(TrolleyStep.class)
            .groupBy(
                    trolleyStep -> trolleyStep.getOrderItem().getOrderCode(),
                    countDistinctLong(TrolleyStep::getTrolley))
            .penalizeLong(
                    "Minimize order split by trolley",
                    HardSoftLongScore.ONE_SOFT,
                    (order, trolleySpreadCount) -> trolleySpreadCount * 10000);
}

private Constraint distanceToEnd(ConstraintFactory constraintFactory) {
    return constraintFactory
            .forEach(TrolleyStep.class)
            .filter(ele -> ele.getNextStep() == null)
            .penalizeLong(
                    " distance to end ",
                    HardSoftLongScore.ONE_SOFT,
                    trolleyStep -> (long) trolleyStep.distanceToLocation(OrderPickingService.end));
}

private Constraint distanceToPrevious(ConstraintFactory constraintFactory) {
    return constraintFactory
            .forEach(TrolleyStep.class)
            .penalizeLong(
                    " distance to previous ",
                    HardSoftLongScore.ONE_SOFT,
                    trolleyStep ->
                            (long) trolleyStep.distanceToLocation(
                                    trolleyStep.getPreviousStandstill().getLocation()));
}
Your problem will be the requiredNumberOfBuckets constraint.
Unfortunately, in the current implementation of Constraint Streams, there is no way to make three consecutive groupBy()s fast. You will have to find a way around it. I'd experiment with writing a custom ConstraintCollector that would do the work in one groupBy(...), as opposed to doing it in three steps.
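A rough sketch of what that could look like, assuming the OptaPlanner 8 UniConstraintCollector interface (org.optaplanner.core.api.score.stream.uni) and reusing the TrolleyStep getters and calculateOrderRequiredBuckets(...) from the question; this is untested and only illustrates the idea of grouping by trolley once and computing the bucket total in a single pass:

// Result container for the custom collector: per-order volume plus the trolley's
// bucket capacity (all steps in one group belong to the same trolley).
// Needs java.util.Map, java.util.HashMap and java.util.function.* imports.
private static final class BucketTally {
    final Map<String, Integer> volumePerOrder = new HashMap<>();
    int bucketCapacity;
}

private UniConstraintCollector<TrolleyStep, BucketTally, Integer> requiredBucketsCollector() {
    return new UniConstraintCollector<>() {

        @Override
        public Supplier<BucketTally> supplier() {
            return BucketTally::new;
        }

        @Override
        public BiFunction<BucketTally, TrolleyStep, Runnable> accumulator() {
            return (tally, step) -> {
                String order = step.getOrderItem().getOrderCode();
                int volume = step.getOrderItem().getProduct().getCapacity();
                tally.bucketCapacity = step.getTrolley().getBucketCapacity();
                tally.volumePerOrder.merge(order, volume, Integer::sum);
                // undo function: subtract this step's volume again when it is retracted
                return () -> tally.volumePerOrder.compute(order,
                        (key, total) -> total - volume == 0 ? null : total - volume);
            };
        }

        @Override
        public Function<BucketTally, Integer> finisher() {
            return tally -> tally.volumePerOrder.values().stream()
                    .mapToInt(volume -> calculateOrderRequiredBuckets(volume, tally.bucketCapacity))
                    .sum();
        }
    };
}

// The constraint then needs only a single groupBy():
private Constraint requiredNumberOfBuckets(ConstraintFactory constraintFactory) {
    return constraintFactory
            .forEach(TrolleyStep.class)
            .groupBy(TrolleyStep::getTrolley, requiredBucketsCollector())
            .filter((trolley, trolleyTotalBuckets) -> trolley.getBucketNum() < trolleyTotalBuckets)
            .penalize(
                    "Required number of buckets",
                    HardSoftLongScore.ONE_HARD,
                    (trolley, trolleyTotalBuckets) -> trolleyTotalBuckets - trolley.getBucketNum());
}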
We have 490 students.
We have a capacity of 20 students per classroom.
That is, 490/20 = 24.5, i.e. 25 sections.
First I want to arrange the students in alphabetical order.
Secondly I want to make a table containing the
id_classroom and id_student attributes.
My problem is with id_classroom and how to populate it.
How can I automatically assign it like this:
students 1 to 20 === id_classroom = 1
students 21 to 40 === id_classroom = 2
and so on
Thank you in advance
In your SQL query, the FLOOR function is your friend.
What it does is reduce your number to the largest integer less than or equal to the expression.
Let's say you have a student id 46.
The operation would look like this:
46-1= 45 //index starts at 1 instead of 0.
45/20=2.25
Floor(2.25)=2
2+1=3
So your SQL would have something as such in it:
id_classroom=FLOOR((id_student-1)/20)+1
Haven't tested this, I just thought something up :)
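A quick way to sanity-check the formula outside SQL (my own illustration in Java; integer division on non-negative numbers plays the role of FLOOR here):

public class ClassroomAssignment {
    public static void main(String[] args) {
        // id_classroom = FLOOR((id_student - 1) / 20) + 1, checked for a few student ids
        for (int idStudent : new int[] {1, 20, 21, 40, 46, 490}) {
            int idClassroom = (idStudent - 1) / 20 + 1;
            System.out.println("student " + idStudent + " -> classroom " + idClassroom);
        }
        // prints classrooms 1, 1, 2, 2, 3 and 25 respectively
    }
}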
EDIT: Just noticed this was in SQL not C# forum. Not sure if still useful.
// key = room id, value = number of students currently assigned to that room
int TotalStudents = 490;
int MaxCap = 20;
int RoomID = 1;
Dictionary<int, int> StudentsAndRooms = new Dictionary<int, int>();

void PopulateRoom(Dictionary<int, int> myD)
{
    for (int i = 0; i < TotalStudents; i++)
    {
        if (myD.TryGetValue(RoomID, out int count) && count >= MaxCap)
        {
            // current room is full, open the next one
            RoomID += 1;
            Debug.Log("Added Room");
        }

        if (myD.ContainsKey(RoomID))
        {
            myD[RoomID] += 1;
        }
        else
        {
            myD[RoomID] = 1;
        }
        Debug.Log("Added Student");
    }
}

void Start()
{
    PopulateRoom(StudentsAndRooms);
}
I'm looking for a way to generate combinations of objects ordered by a single attribute. I don't think lexicographical order is what I'm looking for... I'll try to give an example. Let's say I have a list of objects A, B, C, D with the attribute values I want to order by being 3, 3, 2, 1. This gives the objects A3, B3, C2, D1. Now I want to generate combinations of 2 objects, but they need to be ordered in a descending way:
A3 B3
A3 C2
B3 C2
A3 D1
B3 D1
C2 D1
Generating all combinations and sorting them is not acceptable because the real-world scenario involves large sets and millions of combinations (a set of 40, order of 8), and I need only the combinations above a certain threshold.
Actually I need the count of combinations above a threshold, grouped by the sum of a given attribute, but I think that is far more difficult to do - so I'd settle for generating all combinations above a threshold and counting them, if that's possible at all.
EDIT - My original question wasn't very precise... I don't actually need these combinations ordered; I just thought it would help to isolate the combinations above a threshold. To be more precise, in the above example, given a threshold of 5, I'm looking for the information that the given set produces 1 combination with a sum of 6 (A3 B3) and 2 with a sum of 5 (A3 C2, B3 C2). I don't actually need the combinations themselves.
I was looking into the subset-sum problem, but if I understood the given dynamic-programming solution correctly, it only tells you whether a given sum exists, not the count of such sums.
Thanks
Actually, I think you do want lexicographic order, but descending rather than ascending. In addition:
It's not clear to me from your description that A, B, ... D play any role in your answer (except possibly as the container for the values).
I think your question example is simply "For each integer at least 5, up to the maximum possible total of two values, how many distinct pairs from the set {3, 3, 2, 1} have sums of that integer?"
The interesting part is the early bailout, once no possible solution can be reached (remaining achievable sums are too small).
I'll post sample code later.
Here's the sample code I promised, with a few remarks following:
public class Combos {
    /* permanent state for instance */
    private int values[];
    private int length;

    /* transient state during single "count" computation */
    private int n;
    private int limit;
    private Tally<Integer> tally;
    private int best[][]; // used for early-bail-out

    private void initializeForCount(int n, int limit) {
        this.n = n;
        this.limit = limit;
        best = new int[n+1][length+1];
        for (int i = 1; i <= n; ++i) {
            for (int j = 0; j <= length - i; ++j) {
                best[i][j] = values[j] + best[i-1][j+1];
            }
        }
    }

    private void countAt(int left, int start, int sum) {
        if (left == 0) {
            tally.inc(sum);
        } else {
            for (
                int i = start;
                i <= length - left
                    && limit <= sum + best[left][i]; // bail-out-check
                ++i
            ) {
                countAt(left - 1, i + 1, sum + values[i]);
            }
        }
    }

    public Tally<Integer> count(int n, int limit) {
        tally = new Tally<Integer>();
        if (n <= length) {
            initializeForCount(n, limit);
            countAt(n, 0, 0);
        }
        return tally;
    }

    public Combos(int[] values) {
        this.values = values;
        this.length = values.length;
    }
}
Preface remarks:
This uses a little helper class called Tally, that just isolates the tabulation (including initialization for never-before-seen keys). I'll put it at the end.
To keep this concise, I've taken some shortcuts that aren't good practice for "real" code:
This doesn't check for a null value array, etc.
I assume that the value array is already sorted into descending order, required for the early-bail-out technique. (Good production code would include the sorting.)
I put transient data into instance variables instead of passing them as arguments among the private methods that support count. That makes this class non-thread-safe.
Explanation:
An instance of Combos is created with the (descending ordered) array of integers to combine. The value array is set up once per instance, but multiple calls to count can be made with varying population sizes and limits.
The count method triggers a (mostly) standard recursive traversal of unique combinations of n integers from values. The limit argument gives the lower bound on sums of interest.
The countAt method examines combinations of integers from values. The left argument is how many integers remain to make up n integers in a sum, start is the position in values from which to search, and sum is the partial sum.
The early-bail-out mechanism is based on computing best, a two-dimensional array that specifies the "best" sum reachable from a given state. The value in best[n][p] is the largest sum of n values beginning in position p of the original values.
The recursion of countAt bottoms out when the correct population has been accumulated; this adds the current sum (of n values) to the tally. If countAt has not bottomed out, it sweeps the values from the start-ing position to increase the current partial sum, as long as:
enough positions remain in values to achieve the specified population, and
the best (largest) subtotal remaining is big enough to make the limit.
A sample run with your question's data:
int[] values = {3, 3, 2, 1};
Combos mine = new Combos(values);
Tally<Integer> tally = mine.count(2, 5);
for (int i = 5; i < 9; ++i) {
    int n = tally.get(i);
    if (0 < n) {
        System.out.println("found " + tally.get(i) + " sums of " + i);
    }
}
produces the results you specified:
found 2 sums of 5
found 1 sums of 6
Here's the Tally code:
public static class Tally<T> {
    private Map<T,Integer> tally = new HashMap<T,Integer>();

    public Tally() {/* nothing */}

    public void inc(T key) {
        Integer value = tally.get(key);
        if (value == null) {
            value = Integer.valueOf(0);
        }
        tally.put(key, (value + 1));
    }

    public int get(T key) {
        Integer result = tally.get(key);
        return result == null ? 0 : result;
    }

    public Collection<T> keys() {
        return tally.keySet();
    }
}
I have written a class to handle common functions for working with the binomial coefficient, which is the type of problem that your problem falls under. It performs the following tasks:
Outputs all the K-indexes in a nice format for any N choose K to a file. The K-indexes can be substituted with more descriptive strings or letters. This method makes solving this type of problem quite trivial.
Converts the K-indexes to the proper index of an entry in the sorted binomial coefficient table. This technique is much faster than older published techniques that rely on iteration. It does this by using a mathematical property inherent in Pascal's Triangle. My paper talks about this. I believe I am the first to discover and publish this technique, but I could be wrong.
Converts the index in a sorted binomial coefficient table to the corresponding K-indexes.
Uses Mark Dominus's method to calculate the binomial coefficient, which is much less likely to overflow and works with larger numbers.
The class is written in .NET C# and provides a way to manage the objects related to the problem (if any) by using a generic list. The constructor of this class takes a bool value called InitTable that when true will create a generic list to hold the objects to be managed. If this value is false, then it will not create the table. The table does not need to be created in order to perform the 4 above methods. Accessor methods are provided to access the table.
There is an associated test class which shows how to use the class and its methods. It has been extensively tested with 2 cases and there are no known bugs.
To read about this class and download the code, see Tablizing The Binomial Coefficient.
Check out this question in stackoverflow: Algorithm to return all combinations
I also just used the Java code below to generate all permutations, but it could easily be used to generate unique combinations given an index.
public static <E> E[] permutation(E[] s, int num) { // s is the input elements array and num is the number which represents the permutation
    int factorial = 1;
    for (int i = 2; i < s.length; i++)
        factorial *= i; // calculates the factorial of (s.length - 1)
    if (num / s.length >= factorial) // Optional. if the number is not in the range of [0, s.length! - 1]
        return null;
    for (int i = 0; i < s.length - 1; i++) { // go over the array
        int tempi = (num / factorial) % (s.length - i); // calculates the next cell from the cells left (the cells in the range [i, s.length - 1])
        E temp = s[i + tempi]; // temporarily saves the value of the cell needed to add to the permutation this time
        for (int j = i + tempi; j > i; j--) // shift all elements to "cover" the "missing" cell
            s[j] = s[j-1];
        s[i] = temp; // put the chosen cell in the correct spot
        factorial /= (s.length - (i + 1)); // updates the factorial
    }
    return s;
}
I am extremely sorry (after all those clarifications in the comments) to say that I could not find an efficient solution to this problem. I tried for the past hour with no results.
The reason (I think) is that this problem is very similar to problems like the traveling salesman problem. Unless you try all the combinations, there is no way to know which attributes will add up to the threshold.
There seems to be no clever trick that can solve this class of problems.
Still there are many optimizations that you can do to the actual code.
Try sorting the data according to the attributes. You may be able to avoid processing some values from the list when you find that a higher value cannot satisfy the threshold (so all lower values can be eliminated).
If you're using C# there is a fairly good generics library here. Note though that the generation of some permutations is not in lexicographic order
Here's a recursive approach to count the number of these subsets: We define a function count(minIndex,numElements,minSum) that returns the number of subsets of size numElements whose sum is at least minSum, containing elements with indices minIndex or greater.
As in the problem statement, we sort our elements in descending order, e.g. [3,3,2,1], and call the first index zero, and the total number of elements N. We assume all elements are nonnegative. To find all 2-subsets whose sum is at least 5, we call count(0,2,5).
Sample Code (Java):
int count(int minIndex, int numElements, int minSum)
{
    int total = 0;
    if (numElements == 1)
    {
        // just count number of elements >= minSum
        for (int i = minIndex; i <= N-1; i++)
            if (a[i] >= minSum) total++; else break;
    }
    else
    {
        if (minSum <= 0)
        {
            // any subset will do (n-choose-k of them)
            if (numElements <= (N-minIndex))
                total = nchoosek(N-minIndex, numElements);
        }
        else
        {
            // add element a[i] to the set, and then consider the count
            // for all elements to its right
            for (int i = minIndex; i <= (N-numElements); i++)
                total += count(i+1, numElements-1, minSum-a[i]);
        }
    }
    return total;
}
Btw, I've run the above with an array of 40 elements, and size-8 subsets and consistently got back results in less than a second.
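For completeness, here is a small harness I'd use to run the method above on the question's data. The array a, its length N and the nchoosek helper are not shown in the answer, so the ones below are my own assumptions (the values must already be sorted in descending order):

class SubsetCounter {
    // 'a' holds the attribute values sorted in descending order, N is its length
    final int[] a;
    final int N;

    SubsetCounter(int[] sortedDescending) {
        this.a = sortedDescending;
        this.N = sortedDescending.length;
    }

    // simple binomial coefficient, fine for small n and k
    int nchoosek(int n, int k) {
        long result = 1;
        for (int i = 1; i <= k; i++) {
            result = result * (n - k + i) / i;
        }
        return (int) result;
    }

    // the count(minIndex, numElements, minSum) method from the answer goes here, unchanged

    public static void main(String[] args) {
        SubsetCounter counter = new SubsetCounter(new int[] {3, 3, 2, 1});
        // 2-element subsets with sum >= 5: {3,3}=6, {3,2}=5, {3,2}=5, so this should print 3
        System.out.println(counter.count(0, 2, 5));
    }
}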