Constraint rule not working as expected when there is only one planning entity - optaplanner

I have a simple field mapping use case where I need to intelligently find the target field for an input source field based on multiple constraints.
To make it clear, there is only one source Field and there are, say, 100 target fields. The goal is to find the best-matching target field for the input source field based on constraints.
@PlanningEntity
public class FieldMapping {

    @PlanningId
    private Long id;

    public FieldMapping() {
    }

    protected Field inputField;

    @PlanningVariable(valueRangeProviderRefs = { "targetFieldRange" })
    protected Field targetField;
}
@PlanningSolution
public class FieldMappingSolution {

    @ValueRangeProvider(id = "targetFieldRange")
    @ProblemFactCollectionProperty
    private List<PlanningRecommField> targetFields;

    @PlanningScore
    private HardSoftScore score;

    private SolverStatus solverStatus;
}
The challenge is in writing the constraint rules. Since there is only one source field, there will be only one instance of the FieldMapping planning entity. One constraint rule I attempted is given below:
public Constraint requiredLeafNode(ConstraintFactory constraintFactory) {
    return constraintFactory
            .forEachUniquePair(FieldMapping.class, Joiners.equal(FieldMapping::getTargetField))
            .filter((mapping1, mapping2) -> !mapping2.getTargetField().isLeafNode())
            .penalize("Not leaf node", HardSoftScore.ONE_HARD);
}
But since there is only one FieldMapping instance, the constraint is not working. Am I missing something?

Usually, forEachUniquePair is used in constraints that affect the score based on a relation between pairs of planning entity instances.
I think that what you are trying to achieve could be expressed by a simpler constraint as follows:
Constraint requiredLeafNode(ConstraintFactory constraintFactory) {
    return constraintFactory
            .forEach(FieldMapping.class)
            .filter(mapping -> !mapping.getTargetField().isLeafNode())
            .penalize("Not leaf node", HardSoftScore.ONE_HARD);
}
That said, if you always deal with just a single planning entity instance, OptaPlanner is probably overkill. A rule engine, or just a couple of hard-coded rules, might be the way to go.
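For the single-entity case, a plain helper method may be all you need. Below is a minimal sketch of that hard-coded approach; it assumes Field exposes isLeafNode() (as in the constraint above) plus a hypothetical getName() used for a simple soft-scoring rule:

public Field findBestTargetField(Field inputField, List<Field> targetFields) {
    Field best = null;
    int bestScore = Integer.MIN_VALUE;
    for (Field candidate : targetFields) {
        if (!candidate.isLeafNode()) {
            continue; // hard rule from the question: the target must be a leaf node
        }
        int score = 0;
        // hypothetical soft rule: reward a case-insensitive name match
        if (candidate.getName().equalsIgnoreCase(inputField.getName())) {
            score += 10;
        }
        if (score > bestScore) {
            bestScore = score;
            best = candidate;
        }
    }
    return best;
}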

Related

Hibernate manual and auto-generated primary key

I have a requirement where, if the user enters a value for the primary key, I need to use that value when creating the entity, and if the user does not provide a value, the primary key needs to be auto-generated like R00001, R00002, etc. I would like to know how I could achieve this; any guidance would be appreciated.
Try to take advantage of the IdentifierGenerator interface and define an implementation of your own.
public class MyEntityIdGenerator implements IdentifierGenerator {

    public Serializable generate(SessionImplementor session, Object object)
            throws HibernateException {
        MyEntity entity = (MyEntity) object;
        if (entity.getId() == null) {
            // No id supplied by the user: generate one here,
            // e.g. retrieve the next sequence value from the database
            // and format it as "R00001".
            Connection con = session.connection();
            return nextSeqValue;
        }
        // The user supplied an id, so use it as-is.
        return entity.getId();
    }
}
Then add appropriate annotations on the id field in your entity:
@Id
@GenericGenerator(name = "myCustomGen", strategy = "com.example.MyEntityIdGenerator")
@GeneratedValue(generator = "myCustomGen")

OO software design handling constraints - which design pattern to use?

I'm looking at a well-known problem and therefore there has to be a design pattern or a mix of patterns to solve it.
With the following classes and properties:
CTask
    Name
    Duration
    TaskArea
CTaskArea
    Name
CPerson
    Name
    Abilities
CAbility
    Name
CTool
    Name
    CleaningTime
CConstraint
    Name
    Constraint
CTask, CPerson, and CTool could have constraints, e.g. task A could only be done by persons with ability X, or person A could not do tasks of TaskArea X, and so on.
For example, when I create a new CTask, CPerson or CTool I could imagine a constraint config dialog with dropdowns like:
Class | Operator | Class | Property | Value
CPerson | NOT | CTool | Name | Hammer
What design pattern provides the opportunity to dynamically configure constraints for all the classes, without forcing the classes to know additional information or take additional dependencies on each other?
Can I use an interface for objects to express that they accept constraints being applied somehow, or to discover classes which should be configurable with constraints?
Why not have a constraints_for_xxx property on each object that has a constraint for a particular xxx property?
When some child property is to be set or added into a collection, it is first run through the constraints collection. If any constraint item returns false, an exception is thrown, heaven thunders, etc.
Constraints can be filled in the object's constructor or later via some setupConstraints() call.
CPerson can look like (PHP example):
class Person
{
    protected $abc = null;
    protected $constraintsAbc = null;

    public function setConstraintsAbc(array $constraints)
    {
        $this->constraintsAbc = $constraints;
    }

    public function setAbc($value)
    {
        foreach ($this->constraintsAbc as $constraint) {
            if (!$constraint->isValid($value)) {
                throw new Exception("Constraint {$constraint->getName()} is not happy with value $value");
            }
        }
        $this->abc = $value;
    }
}

class PersonSetup
{
    public function setupPerson(Person $person)
    {
        $constraints[] = new PersonAbcConstraint("Value > 5");
        $person->setConstraintsAbc($constraints);
    }
}
This is, of course, a fictitious example. There is a problem here with some code duplication, since you have constraintsAbc, setConstraintsAbc, and setAbc as different hard-coded fields. But you can abstract this into some virtual "constraintable" field collection if you like.
This is the solution I'm OK with:
abstract class CCouldHaveConstraints_Base
{
    public abstract CCouldHaveConstraints_Base GetInstance();
    public abstract string GetClassName();
    public abstract List<string> GetPropertyListThatCouldHaveConstraints();
}

class CPerson : CCouldHaveConstraints_Base
{
    private String m_PersonName;
    private String m_PersonAge;

    public String PersonName
    {
        get { return this.m_PersonName; }
        set { this.m_PersonName = value; }
    }

    public String PersonAge
    {
        get { return this.m_PersonAge; }
        set { this.m_PersonAge = value; }
    }

    public override CCouldHaveConstraints_Base GetInstance()
    {
        return new CPerson();
    }

    public override string GetClassName()
    {
        return "Person";
    }

    public override List<string> GetPropertyListThatCouldHaveConstraints()
    {
        List<string> constraintPropsList = new List<string>();
        constraintPropsList.Add("PersonName");
        return constraintPropsList;
    }
}

// class contains a list of all objects that could have constraints
class CConstraint_Lst
{
    private List<CConstraint> m_ListOfConstraints;
    private List<CCouldHaveConstraints_Base> m_ListOfObjectsThatCouldHaveConstraints;
}

// e.g. Person | Person.Name | Tim | NOT | Tool | Tool.Name | "Hammer"
class CConstraint
{
    private String m_ClassName_A;
    private String m_ClassProperty_A;
    private String m_ClassProperty_A_Value;
    private String m_Operator;
    private String m_ClassName_B;
    private String m_ClassProperty_B;
    private String m_ClassProperty_B_Value;
}
Is that enough code to figure out how I'm thinking?
Regards,
Tim
You've already made a great conceptual leap to model the constraints as CConstraint objects. The remaining core of the question seems to be "How do I then organize the execution of the constraints, provide them with the right inputs, and collect their outputs? (the outputs are constraint violations, validation errors, or warnings)"
CConstraints obviously can't be evaluated without any input, but you have some choices on how exactly to provide them with input, which we can explore with questions:
Do they get given a 'global state' which they can explore and look for violations in?
Or do they get given a tuple of objects, or object graph, which they return a success or failure result for?
How do they signal constraint violations? Is it by throwing exceptions, returning results, adding them to a collection of violations, or removing violating objects from the world, or triggering repair rules?
Do they provide an "explanation" output that helpfully explains which object or combination of objects is the offending combination, and what rule it violates?
Compilers might be an interesting place to look for inspiration. We know a good compiler processes some complicated input, and produces one or more easy-to-understand error messages allowing the programmer to fix any problem in their program.
Compilers often have to choose some pattern for organizing the work they're doing, like recursion (recursive descent), a visitor pattern (visit a tree of objects in some arrangement), stateful pattern matching on a stream of input (syntax token recognition by regex matching, or processing a stream of characters), or a chain of responsibility (one processor validates and processes input, then passes it to the next processor in the chain). That is actually a whole family of design patterns you can choose from.
Probably one of the most flexible patterns to look at which is useful for your case is the visitor pattern, because you can extend your domain model with additional classes, all of which know how to do a 'visiting' phase, which is basically what 'validation' often entails - someone visits all the objects in a scenario, and inspects their properties, with an easily extensible set of logics (the validation rules) specific to those types of objects, without needing to worry about the mechanics of the visiting procedure (how you traverse the object graph) in each validation rule.
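To make the visitor idea concrete, here is a minimal Java sketch (the class names, fields, and the rule itself are placeholders, not taken from your model):

import java.util.ArrayList;
import java.util.List;

// One validation rule = one visitor; domain objects only know how to accept visitors.
interface ConstraintVisitor {
    void visit(Person person);
    void visit(Task task);
}

interface Constrainable {
    void accept(ConstraintVisitor visitor);
}

class Person implements Constrainable {
    String name;
    public void accept(ConstraintVisitor visitor) { visitor.visit(this); }
}

class Task implements Constrainable {
    String name;
    String requiredAbility;
    public void accept(ConstraintVisitor visitor) { visitor.visit(this); }
}

// A hypothetical rule that records violations instead of throwing,
// so the caller can report all of them at once.
class HammerRule implements ConstraintVisitor {
    final List<String> violations = new ArrayList<>();

    public void visit(Person person) {
        if ("Tim".equals(person.name)) {
            violations.add("Person " + person.name + " must not use the tool 'Hammer'");
        }
    }

    public void visit(Task task) {
        // this particular rule does not inspect tasks
    }
}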

How to easily access widely different subsets of fields of related objects/DB tables?

Imagine we have a number of related objects (equivalently DB tables), for example:
public class Person {
    private String name;
    private Date birthday;
    private int height;
    private Job job;
    private House house;
    ..
}

public class Job {
    private String company;
    private int salary;
    ..
}

public class House {
    private Address address;
    private int age;
    private int numRooms;
    ..
}

public class Address {
    private String town;
    private String street;
    ..
}
How to best design a system for easily defining and accessing widely varying subsets of data on these objects/tables? Design patterns, pros and cons, are very welcome. I'm using Java, but this is a more general problem.
For example, I want to easily say:
I'd like some object with (Person.name, Person.height, Job.company, Address.street)
I'd like some object with (Job.company, House.numRooms, Address.town)
Etc.
Other assumptions:
We can assume that we're always getting a known structure of objects on the input, e.g. a Person with its Job, House, and Address.
The resulting object doesn't necessarily need to know the names of the fields it was constructed from, i.e. for subset defined as (Person.name, Person.height, Job.company, Address.street) it can be the array of Objects {"Joe Doe", 180, "ACompany Inc.", "Main Street"}.
The object/table hierarchy is complex, so there are hundreds of data fields.
There may be hundreds of subsets that need to be defined.
A minority of fields to obtain may be computed from actual fields, e.g. I may want to get a person's age, computed as (now().getYear() - Person.birthday.getYear()).
Here are some options I see:
A SQL view for each subset.
Minuses:
They will be almost the same for similar subsets. This is OK just for field names, but not great for the joins part, which could ideally be refactored out to a common place.
Less testable than a solution in code.
Using a DTO assembler, e.g. http://www.genericdtoassembler.org/
This could be used to flatten the complex structure of input objects into a single DTO.
Minuses:
I'm not sure how I'd then proceed to easily define subsets of fields on this DTO. Perhaps if I could somehow set the ones irrelevant to the current subset to null? Not sure how.
Not sure if I can do computed fields easily in this way.
A custom mapper I came up with.
Relevant code:
// The enum has a value for each field in the Person objects hierarchy
// that we may be interested in.
public enum DataField {
    PERSON_NAME(new PersonNameExtractor()),
    ..
    PERSON_AGE(new PersonAgeExtractor()),
    ..
    COMPANY(new CompanyExtractor()),
    ..
}

// This is the container for field-value pairs from a given instance of
// the object hierarchy.
public class Vector {
    private Map<DataField, Object> fields;
    ..
}

// Extractors know how to get the value for a given DataField
// from the object hierarchy. There's one extractor per field.
public interface Extractor<T> {
    public T extract(Person person);
}

public class PersonNameExtractor implements Extractor<String> {
    public String extract(Person person) {
        return person.getName();
    }
}

public class PersonAgeExtractor implements Extractor<Integer> {
    public Integer extract(Person person) {
        return now().getYear() - person.getBirthday().getYear();
    }
}

public class CompanyExtractor implements Extractor<String> {
    public String extract(Person person) {
        return person.getJob().getCompany();
    }
}

// Building the Vector using all the fields from the DataField enum
// and the extractors.
public class FullVectorBuilder {
    public Vector buildVector(Person person) {
        Vector vector = new Vector();
        for (DataField field : DataField.values()) {
            vector.addField(field, field.getExtractor().extract(person));
        }
        return vector;
    }
}

// Definition of a subset of fields on the Vector.
public interface Selector {
    public List<DataField> getFields();
}

public class SampleSubsetSelector implements Selector {
    private List<DataField> fields = ImmutableList.of(PERSON_NAME, COMPANY);
    ...
}

// Finally, a builder for the subset Vector, choosing only
// fields pointed to by the selector.
public class SubsetVectorBuilder {
    public Vector buildSubsetVector(Vector fullVector, Selector selector) {
        Vector subsetVector = new Vector();
        for (DataField field : selector.getFields()) {
            subsetVector.addField(field, fullVector.getValue(field));
        }
        return subsetVector;
    }
}
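For completeness, this is roughly how those builders are wired together (assuming the getExtractor() and getValue() accessors implied above exist):

// Hypothetical wiring of the builders above (person is an already-populated object hierarchy).
Vector full = new FullVectorBuilder().buildVector(person);
Vector subset = new SubsetVectorBuilder().buildSubsetVector(full, new SampleSubsetSelector());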
Minuses:
Need to create a tiny Extractor class for each of hundreds of data fields.
This is a custom solution that I came up with; it seems to work and I like it, but I feel this problem must have been encountered and solved before, likely in a better way. Has it?
Edit
Each object knows how to turn itself into a Map of fields, keyed on an enum of all fields.
E.g.
public enum DataField {
    PERSON_NAME,
    ..
    PERSON_AGE,
    ..
    COMPANY,
    ..
}

public class Person {
    private String name;
    private Date birthday;
    private int height;
    private Job job;
    private House house;
    ..

    public Map<DataField, Object> toMap() {
        return ImmutableMap.<DataField, Object>builder()
                .put(DataField.PERSON_NAME, name)
                .put(DataField.BIRTHDAY, birthday)
                .put(DataField.HEIGHT, height)
                .put(DataField.AGE, now().getYear() - birthday.getYear())
                .build();
    }
}
Then, I could build a Vector combining all the Maps, and select subsets from it like in 3.
Minuses:
Enum name clashes, e.g. if Job has an Address and House has an Address, then I want to be able to specify a subset taking street name of both. But how do I then define the toMap() method in the Address class?
No obvious place to put code doing computed fields requiring data from more than one object, e.g. physical distance from Address of House to Address of Company.
Many thanks!
Over in-memory object mapping in the application, I would favor processing the data in the database for better performance. Views, or more elaborate OLAP/data warehouse tooling, could do the trick. If the calculated fields remain basic, as in "age = now - birthday", I see nothing wrong with having that logic in the DB.
On the code side, given the large number of DTOs you have to deal with, you could use classless dynamic objects (available in some JVM languages) or JSON objects. The idea is that when a data structure changes, you only need to modify the DB and the UI, saving you the cost of changing a whole bunch of classes in between.
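As a rough sketch of that schemaless idea in plain Java (the keys are illustrative; a JSON library or a dynamic JVM language would play the same role):

import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: represent each subset as a schemaless map instead of a dedicated DTO class,
// so adding or removing a field does not require touching a class hierarchy.
// The keys and values reuse the example data from the question.
public class SchemalessSubsetExample {
    public static void main(String[] args) {
        Map<String, Object> subset = new LinkedHashMap<>();
        subset.put("person.name", "Joe Doe");
        subset.put("person.height", 180);
        subset.put("job.company", "ACompany Inc.");
        subset.put("house.address.street", "Main Street");
        System.out.println(subset);
    }
}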

How can I stop Fluent NHibernate automapping from creating foreign keys across the database?

I am using the latest version of Fluent NHibernate automapping. Is there any convention or property I can set to stop creating the foreign key constraints across all the tables? I have nearly 200 classes, so I cannot go to each individual class and property and set
ForeignKeyConstraintNames("none", "none")
How can we add ForeignKeyConstraintNames("none", "none") in automapping? I don't want to hardcode the table name or column name. I would like the automapping to create all the mappings without foreign keys. Basically, don't create any foreign keys across the database. How can we do this?
There is a similar post HERE, but the answer was not clear to me.
A simple convention:
public class NoForeignKeys : IReferenceConvention, IHasManyConvention
{
    public void Apply(IManyToOneInstance instance)
    {
        instance.ForeignKey("none");
    }

    public void Apply(IOneToManyCollectionInstance instance)
    {
        instance.Key.ForeignKey("none");
    }
}
// use it (MyEntity stands in for any type from your mapped assembly)
AutoMap.AssemblyOf<MyEntity>()
    .Conventions.Add(typeof(NoForeignKeys));
// or pick the convention up from its assembly instead:
// AutoMap.AssemblyOf<MyEntity>().Conventions.AddFromAssemblyOf<NoForeignKeys>();

FluentNhibernate and References

I was trying to change a convention so that my IDs follow this simple rule: ProductCode, CustomerCode, OrderCode, etc.
I've found a simple way to do that adding a convention:
public class PrimaryKeyNameConvention : IIdConvention
{
    public void Apply(FluentNHibernate.Conventions.Instances.IIdentityInstance instance)
    {
        instance.Column(instance.EntityType.Name + "Code");
    }
}
Now I've got what I wanted, but it seems that FluentNHibernate refuses to apply the same rule to columns referencing my primary keys.
For example, my table Customer will have a PK called CustomerCode, but my table Order will have a reference column called Customer_Id.
I've tried different ways to rename the column Customer_Id to CustomerCode (in table Order), but it seems that nothing works properly.
The only solution which seems to work is adding a convention like this:
public class ReferenceConvention : IReferenceConvention
{
    public void Apply(FluentNHibernate.Conventions.Instances.IManyToOneInstance instance)
    {
        instance.Column(instance.Property.PropertyType.Name + "Code");
    }
}
but now FluentNHibernate creates two columns which reference my primary key: CustomerCode and Customer_Id.
I can't figure out what I am doing wrong.
Any help would be appreciated.
Regards,
Alberto
Take a look at the ForeignKeyConvention base-class.
The ForeignKeyConvention is an amalgamation of several other conventions, providing an easy way to specify the naming scheme for all foreign keys in your domain. This is particularly useful because not all foreign keys are accessible in the same way, depending on where they are; this convention removes the need to know about the underlying structure.
As James suggested I've now applied these two conventions:
public class PrimaryKeyNameConvention : IIdConvention
{
    public void Apply(FluentNHibernate.Conventions.Instances.IIdentityInstance instance)
    {
        instance.Column(instance.EntityType.Name + "Code");
    }
}

public class CustomForeignKeyConvention : ForeignKeyConvention
{
    protected override string GetKeyName(Member property, Type type)
    {
        if (property == null)
            return type.Name + "Code"; // many-to-many, one-to-many, join
        return property.Name + "Code"; // many-to-one
    }
}
and everything works fine.