OptaPlanner implements multiple constraint configurations

OptaPlanner cannot solve different problems according to different constraint configurations in the same project. If it is the same solution class, but the solution types differ, can the same constraint configuration be used? For example, in the code below, different types of solutions are constrained by different pickingType values. Or is there a better way to achieve this?
return constraintFactory.forEach(TrolleyStep.class)
        .filter(ele -> ele.getPickingType() == 0) // only constrain pickingType == 0
        .groupBy(trolleyStep -> trolleyStep.getOrderNumber(),
                countDistinctLong(TrolleyStep::getTrolley))
        .penalizeLong("Minimize order split by trolley",
                HardSoftLongScore.ONE_SOFT,
                (order, trolleySpreadCount) -> trolleySpreadCount * 10000);

See penalizeConfigurable() and constraint configuration in the OptaPlanner documentation. That feature is there for what you seem to be trying to do.
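As a rough sketch (the class and field names below are invented for illustration), the weight moves out of the constraint stream into a @ConstraintConfiguration class that is attached to the planning solution through a @ConstraintConfigurationProvider field, so each solution instance can carry its own weights, including a zero weight that effectively switches a constraint off for that solution type:

@ConstraintConfiguration
public class TrolleyConstraintConfiguration {

    // Picked up by penalizeConfigurableLong("Minimize order split by trolley", ...).
    // Set it per solution instance, e.g. HardSoftLongScore.ZERO to disable the constraint.
    @ConstraintWeight("Minimize order split by trolley")
    private HardSoftLongScore orderSplitByTrolley = HardSoftLongScore.ofSoft(10000L);

    // getter and setter omitted
}

The constraint then calls penalizeConfigurableLong() and only supplies the match weight; the score weight comes from the configuration:

return constraintFactory.forEach(TrolleyStep.class)
        .filter(ele -> ele.getPickingType() == 0)
        .groupBy(trolleyStep -> trolleyStep.getOrderNumber(),
                countDistinctLong(TrolleyStep::getTrolley))
        .penalizeConfigurableLong("Minimize order split by trolley",
                (order, trolleySpreadCount) -> trolleySpreadCount);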

Related

Exception when using a HardMediumSoftScore in a constraint in OptaPlanner

I am trying to use a HardMediumSoftScore in a constraint but I get the following exception:
java.lang.IllegalArgumentException: The constraintWeight (1hard/0medium/0soft) of class (class org.optaplanner.core.api.score.buildin.hardmediumsoft.HardMediumSoftScore) for constraintPackage (xxx) and constraintName (xxx) must be of the scoreClass (class org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore).
I cannot see anything in the documentation as to why I cannot use a medium score, or that I need to do anything different over using a hard or soft score.
I have the same problem using v8.9.1-FINAL and v8.10.0-FINAL.
Any ideas? Thanks in advance.
Some part of your planning domain will contain a reference to HardSoftScore. From this exception message, which is coming from constraints, I'm guessing that your planning solution is using HardSoftScore and not HardMediumSoftScore.
You are free to use either, but you need to consistently use one or the other.
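For example (a minimal sketch, with a placeholder entity name), switching the whole model to HardMediumSoftScore means the @PlanningScore field on the @PlanningSolution class and every constraint weight use that same class:

// in the @PlanningSolution class
@PlanningScore
private HardMediumSoftScore score;

// in the ConstraintProvider, weights then use the same score class
return constraintFactory.forEach(SomePlanningEntity.class)
        .penalize("Some hard constraint", HardMediumSoftScore.ONE_HARD);

Mixing the two, e.g. a HardSoftScore solution with HardMediumSoftScore constraint weights, produces exactly the IllegalArgumentException quoted above.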

Intershop EDL modelling - How to add a dependency with on delete cascade

We have some custom objects modelled through EDL which have foreign keys to system Intershop objects (ISPRODUCT and ISORDER). We need our objects to be deleted when the referenced order or product is deleted.
This is the extract from the EDL file:
/**
 * Relation to product PO (tariff item)
 */
dependency tariff: ProductPO
{
    foreign key(tariffID);
}

/**
 * Order relation
 */
dependency order: OrderPO
{
    foreign key(orderID);
}
As far as I can see, it is possible to add delete actions on EDL relations, but it is not possible to add delete actions on dependencies.
What we are doing at the moment is modifying the statements in the generated dbconstraints.oracle.ddl files like this:
EXEC staging_ddl.add_constraint('A1APPLICATIONFORM', 'A1APPLICATIONFORM_CO_003', 'FOREIGN KEY (TARIFFID) REFERENCES PRODUCT (UUID) ON DELETE SET NULL INITIALLY DEFERRED DEFERRABLE DISABLE NOVALIDATE');
EXEC staging_ddl.add_constraint('A1APPLICATIONFORM', 'A1APPLICATIONFORM_CO_004', 'FOREIGN KEY (ORDERID) REFERENCES ISORDER (UUID) ON DELETE CASCADE INITIALLY DEFERRED DEFERRABLE DISABLE NOVALIDATE');
But this is only a temporary workaround, because these files will be overwritten each time we rerun the code generator on the EDL.
On a relation it is possible to define the on-delete action like this:
relation promotionBenefitPOs : A1PromotionBenefitPO[0..n] inverse promotionPO implements promotionBenefits delete default;
Is it possible to achieve the same thing on the dependency with the system objects?
I didn't know that was possible with EDL, good to know. My problem with this approach is that the ORM cache does not know that these objects are being removed by Oracle, so it might have phantom objects floating around in the ORM cache.
I would use this register listener solution to remove these objects so that everything is updated and flushed out of the cache.
I do wonder how the code generator deals with this delete property on the relation.
I'm afraid you need to do that by hand. Meaning: once an instance of the types involved is removed, you need to query for your custom glue object and remove it in a subsequent action of your own. A dependency is merely a weak (unidirectional) relation that the ORM cannot automatically remove.
See here for documentation about EDL-dependency: https://support.intershop.com/kb/index.php/Display/247P28
For example, I checked the ProcessPagelet-Delete pipeline. In there we first unassign (i.e. remove the assignment of) Label objects from the Pagelet to be deleted. The PageletLabelAssignmentPO contains a dependency to Pagelet, as you can see here:
orm class PageletLabelAssignmentPO extends LabelAssignmentPO
{
    attribute pageletUUID : uuid;

    dependency pagelet : PageletPO
    {
        foreign key(pageletUUID);
    }
}

Neo4j APOC trigger and Manual Index on Relationship Properties

I'd like to setup Neo4j APOC trigger that will add all relationship properties to manual index, something like the following:
CALL apoc.trigger.add('HAS_VALUE_ON_INDEX',"UNWIND {createdRelationships} AS r MATCH (Decision)-[r:HAS_VALUE_ON]->(Characteristic) CALL apoc.index.addRelationship(r,['property_1','property_2']) RETURN count(*)", {phase:'after'})
The issue is that I don't know the exact set of HAS_VALUE_ON relationship properties, because I use the dynamic-properties approach with Spring Data Neo4j 5.
Is it possible to change this trigger declaration so that all of the HAS_VALUE_ON relationship properties (existing ones and ones that will be created in the future) are added to the manual index, instead of the preconfigured ones (like ['property_1','property_2'] in the example above)?
If you do not know the set of properties in advance, then you can use the keys function to add all properties of the created relationships to the index:
CALL apoc.trigger.add(
  'HAS_VALUE_ON_INDEX',
  'UNWIND {createdRelationships} AS r MATCH (Decision)-[r:HAS_VALUE_ON]->(Characteristic)
   CALL apoc.index.addRelationship(r, keys(r)) RETURN count(*)',
  {phase:'after'}
)

What is the best way to find out the candidate group of a task?

After I have loaded a List of tasks with a TaskQuery
taskService.createTaskQuery()
    .processDefinitionKey(PROCESSKEY)
    .taskCandidateGroupIn(list)
    .initializeFormKeys()
    .list()
What is the best way to find out the candidate group of every task?
I want to display it in a JSF view, but the class Task has no corresponding field.
You can get a task's identity links using the task service. Among other relations, the candidate group relation is expressed as an identity link. The following code filters a task's identity links for those that represent candidate groups:
List<IdentityLink> identityLinks = taskService.getIdentityLinksForTask(task.getId());
for (IdentityLink identityLink : identityLinks) {
    String type = identityLink.getType();
    /* type corresponds to the constants defined in IdentityLinkType;
       "candidate" identifies a candidate relation */
    String groupId = identityLink.getGroupId();
    if (IdentityLinkType.CANDIDATE.equals(type) && groupId != null) {
        // we have found a candidate group; do something
    }
}
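If you need the candidate groups for every task in the list (e.g. to show them in the JSF view), you can collect them up front. The sketch below only uses the task service calls shown above; the map and variable names are made up, and note that it still issues one getIdentityLinksForTask() call per task, which is the looping the next answer warns about:

Map<String, List<String>> candidateGroupsByTaskId = new HashMap<>();
for (Task task : tasks) {
    List<String> groupIds = new ArrayList<>();
    for (IdentityLink identityLink : taskService.getIdentityLinksForTask(task.getId())) {
        if (IdentityLinkType.CANDIDATE.equals(identityLink.getType())
                && identityLink.getGroupId() != null) {
            groupIds.add(identityLink.getGroupId());
        }
    }
    candidateGroupsByTaskId.put(task.getId(), groupIds);
}
// candidateGroupsByTaskId can then be exposed to the JSF view.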
The best approach is to write a custom query that gives you all task information the way you need it with one select. You do not want to start looping over result lists and sending one or more queries per item, especially not in a high-performance application such as your task list.
Check the custom query documentation for details.
Only select tasks which have candidate groups:
List<Task> candidateGroupList = taskService.createTaskQuery().withCandidateGroups().list();

How to check unique constraint violation in NHibernate and DDD before saving?

I've got an Account model object and a UNIQUE constraint on the account's Name. In Domain Driven Design, using NHibernate, how should I check for the name's unicity before inserting or updating an entity?
I don't want to rely on an NHibernate exception to catch the error. I'd like to return a prettier error message to my user than the obscure "could not execute batch command. [SQL: SQL not available]".
In the question Where should I put a unique check in DDD?, someone suggested using a Specification like so.
Account accountA = _accountRepository.Get(123);
Account accountB = _accountRepository.Get(456);
accountA.Name = accountB.Name;

ISpecification<Account> spec = new Domain.Specifications.UniqueNameSpecification(_accountRepository);
if (spec.IsSatisfiedBy(accountA) == false) {
    throw new Domain.UnicityException("A duplicate Account name was found");
}
with the Specification code as:
public bool IsSatisfiedBy(Account obj)
{
    Account other = _accountRepository.GetAccountByName(obj.Name);
    return (other == null);
}
This works for inserts, but not when doing an update, so I tried changing the code to:
public bool IsSatisfiedBy(Account obj)
{
    Account other = _accountRepository.GetAccountByName(obj.Name);
    if (other == null) { // nothing in DB
        return true;
    }
    else { // must be the same object
        return other.Equals(obj);
    }
}
The problem is that NHibernate will issue an update to the database when it executes GetAccountByName() to retrieve a possible duplicate...
return session.QueryOver<Account>().Where(x => x.Name == accntName).SingleOrDefault();
So, what should I do? Is the Specification not the right way to do it?
Thanks for your thoughts!
I'm not a fan of the specification pattern for data access; it always seems like jumping through hoops to get anything done.
However, what you've suggested, which really just boils down to:
Check if it already exists.
Add if it doesn't; show a user-friendly message if it does.
... is pretty much the easiest way to get it done.
Relying on database exceptions is the other way of doing it, if your database and its .NET client gracefully propagate the table & column(s) that violated the unique constraint. I believe most drivers don't do so (??), as they just throw a generic ConstraintException that says "Constraint XYZ was violated on table ABC". You can of course have a convention on your unique constraint naming, say something like UK_MyTable_MyColumn, and do string magic to pull the table & column names out.
NHibernate has an ISQLExceptionConverter that you can plug into the Configuration object when you set NHibernate up. Inside this, you get exposed to the exception from the .NET data client. You can use that exception to extract the table & columns (using the constraint name perhaps?) and throw a new Exception with a user-friendly message.
Using the database-exception approach is more performant, and you can push a lot of the unique-constraint-violation detection code into the infrastructure layer, as opposed to handling each case one by one.
Another thing worth pointing out with the query-first-then-add method is that, to be completely transaction safe, you need to escalate the transaction isolation level to serializable (which gives the worst concurrency) to be totally bulletproof. Whether you need to be totally bulletproof or not depends on your application's needs.
You need to handle it by setting Session.FlushMode to FlushMode.Commit, and use a transaction to roll back if an update gets fired at all.