OptaPlanner optimization

I am trying to apply OptaPlanner to my project, for example to calculate order-picking paths. When there are many order items, solving is very slow; with more than 200 order items it is particularly slow, even though I only added constraints. I defined a SelectionSorterWeightFactory, but when debugging it doesn't seem to take effect.
private Constraint requiredNumberOfBuckets(ConstraintFactory constraintFactory) {
    return constraintFactory
            .forEach(TrolleyStep.class)
            // raw total volume per order
            .groupBy(
                    trolleyStep -> trolleyStep.getTrolley(),
                    trolleyStep -> trolleyStep.getOrderItem().getOrderCode(),
                    sum(trolleyStep -> trolleyStep.getOrderItem().getProduct().getCapacity()))
            // required buckets per order
            .groupBy(
                    (trolley, order, orderTotalVolume) -> trolley,
                    (trolley, order, orderTotalVolume) -> order,
                    sum((trolley, order, orderTotalVolume) ->
                            calculateOrderRequiredBuckets(orderTotalVolume, trolley.getBucketCapacity())))
            // required buckets per trolley
            .groupBy(
                    (trolley, order, orderTotalBuckets) -> trolley,
                    sum((trolley, order, orderTotalBuckets) -> orderTotalBuckets))
            // penalize if the trolley doesn't have enough buckets to hold the orders
            .filter((trolley, trolleyTotalBuckets) -> trolley.getBucketNum() < trolleyTotalBuckets)
            .penalize(
                    "Required number of buckets",
                    HardSoftLongScore.ONE_HARD,
                    (trolley, trolleyTotalBuckets) -> trolleyTotalBuckets - trolley.getBucketNum());
}
private Constraint minimizeOrderSplitByTrolley(ConstraintFactory constraintFactory) {
    return constraintFactory
            .forEach(TrolleyStep.class)
            .groupBy(
                    trolleyStep -> trolleyStep.getOrderItem().getOrderCode(),
                    countDistinctLong(TrolleyStep::getTrolley))
            .penalizeLong(
                    "Minimize order split by trolley",
                    HardSoftLongScore.ONE_SOFT,
                    (order, trolleySpreadCount) -> trolleySpreadCount * 10000);
}
private Constraint distanceToEnd(ConstraintFactory constraintFactory) {
    return constraintFactory
            .forEach(TrolleyStep.class)
            .filter(trolleyStep -> trolleyStep.getNextStep() == null)
            .penalizeLong(
                    "Distance to end",
                    HardSoftLongScore.ONE_SOFT,
                    trolleyStep -> (long) trolleyStep.distanceToLocation(OrderPickingService.end));
}
private Constraint distanceToPrevious(ConstraintFactory constraintFactory) {
    return constraintFactory
            .forEach(TrolleyStep.class)
            .penalizeLong(
                    "Distance to previous",
                    HardSoftLongScore.ONE_SOFT,
                    trolleyStep -> (long) trolleyStep.distanceToLocation(
                            trolleyStep.getPreviousStandstill().getLocation()));
}

Your problem will be the requiredNumberOfBuckets constraint.
Unfortunately, in the current implementation of Constraint Streams, there is no way to make three consecutive groupBy()s fast. You will have to find a way around it. I'd experiment with writing a custom ConstraintCollector that does the work in one groupBy(...), as opposed to doing it in three steps.
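As a first, smaller experiment (an untested sketch, not a drop-in fix): the middle groupBy() can be folded away, because after the first grouping each (trolley, order) pair is already unique, so the per-order bucket count can be summed per trolley directly:

private Constraint requiredNumberOfBuckets(ConstraintFactory constraintFactory) {
    return constraintFactory
            .forEach(TrolleyStep.class)
            // raw total volume per (trolley, order) pair
            .groupBy(
                    trolleyStep -> trolleyStep.getTrolley(),
                    trolleyStep -> trolleyStep.getOrderItem().getOrderCode(),
                    sum(trolleyStep -> trolleyStep.getOrderItem().getProduct().getCapacity()))
            // (trolley, order) is unique here, so the required buckets per order
            // can be summed per trolley in one step, saving one groupBy() level
            .groupBy(
                    (trolley, order, orderTotalVolume) -> trolley,
                    sum((trolley, order, orderTotalVolume) ->
                            calculateOrderRequiredBuckets(orderTotalVolume, trolley.getBucketCapacity())))
            .filter((trolley, trolleyTotalBuckets) -> trolley.getBucketNum() < trolleyTotalBuckets)
            .penalize(
                    "Required number of buckets",
                    HardSoftLongScore.ONE_HARD,
                    (trolley, trolleyTotalBuckets) -> trolleyTotalBuckets - trolley.getBucketNum());
}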


Solver doesn't try to optimize solution (one hard constraint, one soft constraint)

I have 2 constraints:
one HARD (100 score)
one SOFT (100 score)
When I run the solver, it looks like it only tries to satisfy the hard one and never works on the soft one. I get no soft score; OptaPlanner returns 0hard/0medium/0soft.
My HARD constraint is: a worker can't work more than 5 days in a row.
My SOFT constraint is: schedule as many working days as possible (calculated in hours of work).
My test covers two weeks. A worker needs to be above 66 hours of work; if they are under, we penalize more.
Here are the two constraints in Java:
The SOFT one:
private Constraint averageWorkingHours(ConstraintFactory constraintFactory) {
    return constraintFactory
            .from(WorkingDay.class)
            .filter(wd -> wd.isFreeToWork())
            .groupBy(WorkingDay::getAgent,
                    // hasBreakDayBySolver returns whether the solver has put my
                    // @PlanningVariable (a break day) in the WorkingDay
                    WorkingDay::hasBreakDayBySolver,
                    ConstraintCollectors.count())
            .filter((agent, hasBreakDayBySolver, count) -> !hasBreakDayBySolver)
            .penalizeConfigurable(AVERAGE_HOURS_WORKING, (agent, hasBreakDayBySolver, count) -> {
                // a worker needs to be above 66 hours of work for two weeks;
                // we penalize more if a worker is under that average
                if (count * 7 < 66) { // count * hours worked for one day
                    return (66 - count * 7) * 2;
                } else {
                    return count * 7 - 66;
                }
            });
}
The HARD one:
private Constraint fiveConsecutiveWorkingDaysMax(ConstraintFactory constraintFactory) {
    return constraintFactory
            .from(WorkingDay.class)
            .filter(WorkingDay::hasWork)
            .join(constraintFactory.from(WorkingDay.class)
                            .filter(WorkingDay::hasWork),
                    Joiners.equal(WorkingDay::getAgent),
                    Joiners.greaterThan(wd -> wd.getDayJava()),
                    Joiners.filtering((wd1, wd2) -> {
                        LocalDate fourDaysBefore = wd1.getDayJava().minusDays(4);
                        return wd2.getDayJava().compareTo(fourDaysBefore) >= 0;
                    }))
            .groupBy((wd1, wd2) -> wd2, ConstraintCollectors.countBi())
            .filter((wd2, count) -> count >= 4)
            .penalizeConfigurable(FIVE_CONSECUTIVE_WORKING_DAYS_MAX, (wd2, count) -> count - 3);
}
I hope my explanations are clear.
Thanks!
Unit test your constraints using a ConstraintVerifier. See also this short video by Lukas.
Verify that the @ConstraintWeight for that soft constraint in your dataset isn't zero.
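A minimal ConstraintVerifier sketch (requires the optaplanner-test dependency; MySolution and MyConstraintProvider are placeholders for your own solution and provider classes):

import org.junit.jupiter.api.Test;
import org.optaplanner.test.api.score.stream.ConstraintVerifier;

class MyConstraintProviderTest {

    // The constraint method must be visible to the test
    // (e.g. package-private instead of private).
    private final ConstraintVerifier<MyConstraintProvider, MySolution> constraintVerifier =
            ConstraintVerifier.build(new MyConstraintProvider(), MySolution.class, WorkingDay.class);

    @Test
    void penalizesSixConsecutiveWorkingDays() {
        constraintVerifier.verifyThat(MyConstraintProvider::fiveConsecutiveWorkingDaysMax)
                .given(/* WorkingDay instances forming six consecutive working days for one agent */)
                .penalizesBy(1);
    }
}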

Kotlin: is filtering a list before getting the max value the best approach?

If you have a list of objects object(category, value) and want to get the max value excluding some categories, you can use something like this:
val max = objects.filter { it.category.name != xy }.maxByOrNull { it.value }
But this uses 2 iterators if I understand it correctly, so is there a more performant version of this call using only one iterator?
That's correct. This code will first iterate over the whole list to filter the result, and then again to find the max.
I'll detail some alternatives below, but, overall, I would recommend against any of them without a very good justification. The performance benefits are usually marginal - spending time making sure the code is clear and well tested will be a better investment. I'd recommend sticking to your existing code.
Example
Here's an executable version of your code.
data class MyData(val categoryName: String, val value: Int)

fun main() {
    val items = listOf(
        MyData("shoes", 1),
        MyData("shoes", 22),
        MyData("shoes", 33),
        MyData("car", 555),
    )
    val max = items
        .filter {
            println(" filter $it")
            it.categoryName == "shoes"
        }.maxByOrNull {
            println(" maxByOrNull $it")
            it.value
        }
    println("\nresult: $max")
}
There are two separate iterations: filter traverses the whole list, and then maxByOrNull traverses the filtered result.
filter MyData(categoryName=shoes, value=1)
filter MyData(categoryName=shoes, value=22)
filter MyData(categoryName=shoes, value=33)
filter MyData(categoryName=car, value=555)
maxByOrNull MyData(categoryName=shoes, value=1)
maxByOrNull MyData(categoryName=shoes, value=22)
maxByOrNull MyData(categoryName=shoes, value=33)
result: MyData(categoryName=shoes, value=33)
Sequences
In some situations you can use sequences to reduce the number of operations.
val max2 = items
    .asSequence()
    .filter {
        println(" filter $it")
        it.categoryName == "shoes"
    }.maxByOrNull {
        println(" maxByOrNull $it")
        it.value
    }
println("\nresult: $max2")
As you can see, the order of operations is different:
filter MyData(categoryName=shoes, value=1)
filter MyData(categoryName=shoes, value=22)
maxByOrNull MyData(categoryName=shoes, value=1)
maxByOrNull MyData(categoryName=shoes, value=22)
filter MyData(categoryName=shoes, value=33)
maxByOrNull MyData(categoryName=shoes, value=33)
filter MyData(categoryName=car, value=555)
result: MyData(categoryName=shoes, value=33)
[S]equences let you avoid building results of intermediate steps, therefore improving the performance of the whole collection processing chain.
Note that in this small example, the benefits aren't worth it.
However, the lazy nature of sequences adds some overhead which may be significant when processing smaller collections or doing simpler computations.
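Single-pass fold
If a single traversal really matters, a fold can filter and track the running maximum at the same time. A sketch, reusing the MyData class from the example above:

val maxFold = items.fold(null as MyData?) { best, item ->
    // keep the best matching element seen so far; the list is traversed exactly once
    if (item.categoryName == "shoes" && (best == null || item.value > best.value)) item else best
}
println("\nresult: $maxFold")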
Combining operations
In your small example, you could combine the filter and maxByOrNull operations:
val max3 = items.maxByOrNull { data ->
    when (data.categoryName) {
        "shoes" -> data.value
        "car" -> -1
        else -> -1
    }
}
println("\nresult: $max3")
result: MyData(categoryName=shoes, value=33)
I hope it's clear that this solution isn't immediately understandable, and it has some nasty edge cases that would be a prime source of bugs. I won't detail the issues, but instead reiterate that ease of use, adaptability, and simple code are usually much more valuable than optimised code!

Generating unique random values with SecureRandom

I'm currently implementing a secret sharing scheme (Shamir's).
In order to generate some secret shares, I need to generate some random numbers within a range. For this purpose, I have this very simple code:
val sharesPRG = SecureRandom()

// return type simplified here; the part that turns these values into
// coordinate pairs is left out (see below)
fun generateShares(k: Int): IntArray {
    return IntArray(k) { sharesPRG.nextInt(5) }
}
I have left out the part that actually creates the shares as coordinates, just to make it reproducible, and picked an arbitrarily small bound of 5.
My problem is that I of course need these shares to be unique; it doesn't make sense to have shares that are the same.
So would it be possible for SecureRandom.nextInt to not return a value that it has already returned?
Of course I could add some logic to check for duplicates, but I really thought there should be something more elegant.
If your k is not too large, you can keep adding random values to a set until it reaches size k:
fun generateMaterial(k: Int): Set<Int> = mutableSetOf<Int>().also {
    while (it.size < k) {
        it.add(sharesPRG.nextInt(50))
    }
}
You can then use the set as the source material for your list of pairs (k needs to be even):
fun main() {
    val pairList = generateMaterial(10).windowed(2, 2).map { it[0] to it[1] }
    println(pairList)
}
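If the candidate range is small anyway, another sketch (assuming k is at most the size of the range) is to shuffle the range once and take the first k values; uniqueness then holds by construction and no retry loop is needed:

import java.security.SecureRandom
import kotlin.random.asKotlinRandom

val sharesPRG = SecureRandom()

// every value 0..49 appears at most once, so the first k values are unique
fun generateMaterial(k: Int): List<Int> =
    (0 until 50).shuffled(sharesPRG.asKotlinRandom()).take(k)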

How to calculate totals for each column in a table (rows*columns) structure in Kotlin?

I have a (simplified) table structure that is defined like this:
data class Column<T>(val name: String, val value: T)
data class Row(val data: List<Column<*>>)
data class Grid(val rows: List<Row>)
I now want to calculate the totals for each column in that grid, i.e. the ith element of each row needs to be accumulated.
My solution looks like this. I simply flatMap the data and group the column values by the column's name, which I then fold to the corresponding sums.
private fun calculateTotals(data: Grid) = data.rows
    .flatMap(Row::data)
    .groupingBy(Column<*>::name)
    .fold(0.0) { accumulator, (_, value) ->
        accumulator + when (value) {
            is Number -> value.toDouble()
            else -> 0.0
        }
    }
I could not come up with a better solution.
I think yours is really good, but I would suggest some syntactic improvements:
1. Use function references
2. Use destructuring syntax
3. Don't use when if you only test for one specific type; use the safe cast operator (as?) together with the safe call operator (?.) and the Elvis operator (?:)
private fun calculateTotals(data: Grid) = data.rows
    .flatMap(Row::data)                     // 1
    .groupingBy(Column<*>::name)            // 1
    .fold(0.0) { accumulator, (_, value) -> // 2
        accumulator + ((value as? Number)?.toDouble() ?: 0.0) // 3
    }
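A quick sanity check with invented values, using the data classes from the question:

fun main() {
    val grid = Grid(listOf(
        Row(listOf(Column("a", 1.0), Column("b", 2.0))),
        Row(listOf(Column("a", 3.0), Column("b", "n/a"))),
    ))
    // non-numeric values count as 0.0
    println(calculateTotals(grid)) // {a=4.0, b=2.0}
}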

Kotlin summing with groupingBy and aggregate

tl;dr: How would I use Kotlin's groupingBy and aggregate to sum a Sequence of (key, number) pairs into a map of counts?
I have 30 GB of CSV files which are a breeze to read and parse.
File("data").walk().filter { it.isFile }.flatMap { file ->
println(file.toString())
file.inputStream().bufferedReader().lineSequence()
}. // now I have lines
Each line is "key,extraStuff,matchCount"
.map { line ->
    val (key, stuff, matchCount) = line.split(",")
    Triple(key, stuff, matchCount.toInt())
}
and I can filter on the "stuff", which is good because a lot gets dropped -- yay lazy Sequences. (code omitted)
But then I need a lazy way to get a final Map<String, Int> of key to count.
I think I should be using groupingBy and aggregate, because eachCount() would just count rows rather than sum up matchCount, and groupingBy is lazy whereas groupBy isn't, but this is where my knowledge ends.
.groupingBy { (key, _, _) ->
    key
}.aggregate { (key, _, matchCount) ->
    ??? something with matchCount ???
}
You can use Grouping.fold extension instead of Grouping.aggregate. It would be more suitable for summing grouped entries by a particular property:
triples
    .groupingBy { (key, _, _) -> key }
    .fold(0) { acc, (_, _, matchCount) -> acc + matchCount }
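For illustration, a tiny self-contained run with invented data, showing the shape of the result:

fun main() {
    val triples = sequenceOf(
        Triple("a", "x", 1),
        Triple("a", "y", 2),
        Triple("b", "z", 5),
    )
    val counts = triples
        .groupingBy { (key, _, _) -> key }
        .fold(0) { acc, (_, _, matchCount) -> acc + matchCount }
    println(counts) // {a=3, b=5}
}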
You need to pass a function with four parameters to aggregate:
@param operation: function that is invoked on each element with the following parameters:
key: the key of the group this element belongs to;
accumulator: the current value of the accumulator of the group, can be null if it's the first element encountered in the group;
element: the element from the source being aggregated;
first: indicates whether it's the first element encountered in the group.
Of them, you need accumulator and element (which you can destructure). The code would be:
.groupingBy { (key, _, _) -> key }
.aggregate { _, acc: Int?, (_, _, matchCount), _ ->
    (acc ?: 0) + matchCount
}
I ran into a similar problem today and finally solved it following one of Kotlin's hands-on tutorials. The code is as follows:
(Kotlin sample: https://play.kotlinlang.org/hands-on/Introduction%20to%20Coroutines%20and%20Channels/02_BlockingRequest)
/*
 * In the initial list each user is present several times, once for each
 * repository he or she contributed to.
 * Merge duplications: each user should be present only once in the resulting list,
 * with the total value of contributions for all the repositories.
 * Users should be sorted in descending order by their contributions.
 * The corresponding test can be found in test/tasks/AggregationKtTest.kt.
 * You can use the 'Navigate | Test' menu action (note the shortcut) to navigate to the test.
 */
fun List<User>.aggregate(): List<User> =
    groupBy { it.login }
        .map { (login, group) -> User(login, group.sumOf { it.contributions }) }
        .sortedByDescending { it.contributions }

// or

fun List<User>.aggregate2(): List<User> =
    groupingBy { it.login }
        .aggregate<User, String, User> { login, accumulator, element, _ ->
            User(login, (accumulator?.contributions ?: 0) + element.contributions)
        }
        .map { (_, user) -> user }
        .sortedByDescending { it.contributions }