Error occurs when max is used in groupBy of BiFunction in constraint streams - OptaPlanner

An error occurs when max is used in the groupBy of a constraint stream. I want the maximum value per group. How do I get it?
Constraint checkScrubNurseSkill(ConstraintFactory constraintFactory) {
    return constraintFactory.forEach(NurseAssignment.class)
            .join(NurseOpeSkill.class,
                    Joiners.filtering((nurseAssignment, nurseOpeSkill) ->
                            nurseAssignment.getOpeID() == nurseOpeSkill.getOpeID() &&
                            nurseAssignment.getNurseID() == nurseOpeSkill.getNurseID()))
            .groupBy((nurseAssignment, nurseOpeSkill) -> nurseAssignment.getOpe(),
                    (nurseAssignment, nurseOpeSkill) -> nurseAssignment.getDutyID(),
                    max((nurseAssignment, nurseOpeSkill) -> nurseOpeSkill.getScurbOpeLevel()))
            .filter((ope, dutyid, max) -> max < 3)
            .penalize("Check Scrub Nurse Skill", HardSoftScore.ofHard(10));
}
error1: The method max(Comparator) is ambiguous for the type ○○ConstraintProvider
error2: The method getScurbOpeLevel() is undefined for the type Object

The issue here is an ambiguous parameter to the max() method. Both of the following overloads fit:
max(BiFunction<A, B, Mapped> groupValueMapping)
max(Comparator<A> comparator)
and the compiler cannot choose between them from the lambda alone.
The issue can be worked around by casting the lambda explicitly. Have a look at the MachineReassignmentConstraintProvider, which does exactly that.
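The ambiguity is not OptaPlanner-specific: any pair of overloads that both accept a two-argument lambda triggers it. A minimal self-contained sketch (the Overloads class and its method bodies are invented for illustration, not the real ConstraintCollectors API):

```java
import java.util.Comparator;
import java.util.function.BiFunction;

// Two overloads that both accept a two-argument lambda, mimicking the
// ConstraintCollectors.max(...) signatures from the error message.
class Overloads {
    static <A, B, M> String max(BiFunction<A, B, M> groupValueMapping) {
        return "bifunction";
    }

    static <A> String max(Comparator<A> comparator) {
        return "comparator";
    }
}

public class CastDemo {
    public static void main(String[] args) {
        // Overloads.max((a, b) -> 0);  // does not compile: ambiguous
        // The explicit cast tells the compiler which overload to pick:
        String picked = Overloads.max(
                (BiFunction<String, String, Integer>) (a, b) -> a.length() + b.length());
        System.out.println(picked); // prints "bifunction"
    }
}
```

In the constraint above, casting the group-value lambda to BiFunction<NurseAssignment, NurseOpeSkill, Integer> should resolve error 1 the same way, which in turn fixes error 2 (the parameter types are no longer inferred as Object).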

Scan unstructured JSON BYTEA into map[string]string

This seems like a common problem and may have been asked before, but I can't find any threads about it, so here is the problem:
I have a Postgres table storing a column of type BYTEA.
CREATE TABLE foo (
    id        VARCHAR PRIMARY KEY,
    json_data BYTEA
)
The column json_data is really just JSON stored as BYTEA (it's not ideal, I know). It is unstructured, but guaranteed to be flat string-to-string JSON.
When I query this table, I need to scan the query SELECT * FROM foo WHERE id = $1 into the following struct:
type JSONData map[string]string

type Foo struct {
    ID   string   `db:"id"`
    Data JSONData `db:"json_data"`
}
I'm using sqlx's Get method. When I execute a query I'm getting the error message sql: Scan error on column index 1, name "json_data": unsupported Scan, storing driver.Value type []uint8 into type *foo.JSONData.
Obviously, the scanner is having trouble scanning the JSON BYTEA into a map. I could implement my own scanner and call it on the json_data column, but I'm wondering if there is a better way. Could my JSONData type implement an existing interface to get this behavior automatically?
As suggested by @iLoveReflection, implementing the sql.Scanner interface on *JSONData worked. Here is the actual implementation:
func (j *JSONData) Scan(src interface{}) error {
    // The driver hands BYTEA values to Scan as []byte ([]uint8).
    b, ok := src.([]byte)
    if !ok {
        return errors.New("invalid data type")
    }
    return json.Unmarshal(b, j)
}
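The Scan method can be sanity-checked without a database by calling it directly with a raw []byte payload, the same shape the driver would pass. A minimal, self-contained sketch of that check:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// JSONData mirrors the type from the question: flat string-to-string JSON.
type JSONData map[string]string

// Scan implements sql.Scanner, so sqlx can decode the BYTEA column into the map.
func (j *JSONData) Scan(src interface{}) error {
	b, ok := src.([]byte)
	if !ok {
		return errors.New("invalid data type")
	}
	return json.Unmarshal(b, j)
}

func main() {
	var d JSONData
	// Simulate the driver handing us the raw BYTEA ([]uint8) value.
	if err := d.Scan([]byte(`{"a":"1","b":"2"}`)); err != nil {
		panic(err)
	}
	fmt.Println(d["a"], d["b"]) // prints: 1 2
}
```

Note that json.Unmarshal allocates the map itself, so a nil JSONData value is fine as the target.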

Use an associative array to total value counts in Lua

I want to count the data type of each Redis key. I wrote the following code, but it fails at run time. How do I fix it?
local detail = {}
detail.hash = 0
detail.set = 0
detail.string = 0
local match = redis.call('KEYS','*')
for i,v in ipairs(match) do
    local val = redis.call('TYPE',v)
    detail.val = detail.val + 1
end
return detail
(error) ERR Error running script (call to f_29ae9e57b4b82e2ae1d5020e418f04fcc98ebef4): #user_script:10: user_script:10: attempt to perform arithmetic on field 'val' (a nil value)
The error tells you that detail.val is nil: there is no table value for the key "val", so you are not allowed to do arithmetic on it.
Problem a)
detail.val is syntactic sugar for detail["val"], so the literal string "val" is used as the key on every iteration. If you want the contents of the variable val to be the key, write detail[val], as in detail[val] = detail[val] + 1.
Possible problem b)
Doing some quick research, I found that this Redis call might return a table, not a string (the Lua scripting API exposes status replies as a table like {ok = "hash"}). So if detail[val] doesn't work, check val's type; you may need detail[val.ok].
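Putting both fixes together, a corrected version of the script might look like the sketch below. It needs a Redis server to run (e.g. via redis-cli --eval), and it assumes the cjson library that Redis's Lua environment provides:

```lua
local detail = {}
local keys = redis.call('KEYS', '*')
for _, v in ipairs(keys) do
    -- TYPE returns a status reply, which Lua sees as a table {ok = "hash"}
    local t = redis.call('TYPE', v).ok
    -- use the type name itself as the key; default missing counters to 0
    detail[t] = (detail[t] or 0) + 1
end
-- Redis only converts the array part of a Lua table back into a reply,
-- so a map with string keys would come back empty; return JSON instead.
return cjson.encode(detail)
```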

Handling Null DataType

I'm using the Over function from Piggybank to get the lag of a row:
res = foreach (group table by fieldA) {
    Aord = order table by fieldB;
    generate flatten(Stitch(Aord, Over(Aord.fieldB, 'lag'))) as (fieldA, fieldB, lag_fieldB);
}
This works correctly, and when I do a dump I get the expected result. The problem is that when I want to use lag_fieldB for any comparison or transformation, I get datatype issues.
If I do a describe, it returns fieldA: long,fieldB: chararray,lag_fieldB: NULL.
I'm new to Pig, but I already tried casting to chararray and using ToString(), and I keep getting errors like these:
ERROR 1052: Cannot cast bytearray to chararray
ERROR 1051: Cannot cast to bytearray
Thanks for your help.
OK, after some digging into the code of the Over function, I found that you can instantiate the Over class with an argument that sets the return type. What worked for me was:
DEFINE ChOver org.apache.pig.piggybank.evaluation.Over('chararray');

res = foreach (group table by fieldA) {
    Aord = order table by fieldB;
    generate flatten(Stitch(Aord, ChOver(Aord.fieldB, 'lag'))) as (fieldA, fieldB, lag_fieldB);
}
Now describe tells me:
fieldA: long,fieldB: chararray,lag_fieldB: chararray
And I'm able to use the columns as expected. Hope this saves someone else some time.

How can I use arrayExists function when the array contains a null value?

I have a nullable array column in my table: Array(Nullable(UInt16)). I want to be able to query this column using arrayExists (or arrayAll) to check if it contains a value above a certain threshold but I'm getting an exception when the array contains a null value:
Exception: Expression for function arrayExists must return UInt8, found Nullable(UInt8)
My query is below where distance is the array column:
SELECT * from TracabEvents_ArrayTest
where arrayExists(x -> x > 9, distance);
I've tried updating the comparison in the lambda to (isNotNull(x) and x > 9), but I'm still getting the error. Is there any way to handle NULLs in these expressions, or are they not supported yet?
Filter out rows with an empty array using notEmpty, and unwrap the Nullable values with assumeNotNull on x inside arrayExists, so the lambda returns a plain UInt8:
SELECT * FROM TracabEvents_ArrayTest WHERE notEmpty(distance) AND arrayExists(x -> assumeNotNull(x) > 9, distance)

How can I select the maximum value in NHibernate?

I need to get the maximum page order from the database:
int maxOrder = GetSession.Query<Page>().Max(x => x.PageOrder);
The above works if there are rows in the database table, but when the table is empty I get:
Value cannot be null.
Parameter name: item
Getting an exception the way you are doing it is normal, as the enumerable that GetSession.Query<Page>() returns is empty (because the table is empty, as you mentioned).
The exception you should be getting, though, is: Sequence contains no elements.
The exception you mention in your question occurs because the item variable (which is irrelevant to the NHibernate query you list above) is null (line 54 assigns the item property to null).
A safer way to get the max from a property in a table would be the following:
var max = GetSession.CreateCriteria<Page>()
    .SetProjection(Projections.Max("PageOrder"))
    .UniqueResult();
or using QueryOver with NHibernate 3.0:
var max = GetSession.QueryOver<Page>()
    .Select(Projections
        .ProjectionList()
        .Add(Projections.Max<Page>(x => x.PageOrder)))
    .List<int>().First();
If the table is empty you will get max = 0
Session.Query<Page>().Max(x => (int?)x.PageOrder)
Note the cast to a nullable int (I'm assuming PageOrder is an int): it makes Max return null instead of throwing when the table is empty.
If you are having problems with the QueryOver example by tolism7 (an InvalidCastException), here's how I got it working:
var max = session.QueryOver<Page>()
    .Select(Projections.Max<Page>(x => x.PageOrder))
    .SingleOrDefault<object>();
return max == null ? 0 : Convert.ToInt32(max);