I am trying to implement a custom phase in order to clean up the solution provided by the CH phase. This is an overconstrained TWVRP problem with a lot of extra constraints on top, so I understand why the CH is struggling. My custom phase will just take all stops breaking a hard constraint and assign them to the dummy vehicle, thereby getting me up to a hard score of 0.
However, the scoreDirector passed to the custom phase command does not allow me to access scoreDirector.getIndictmentMap()
My phase so far:
public class CleanUpPhase implements CustomPhaseCommand<Schedule> {

    private static final Logger LOG = Logger.getLogger(CleanUpPhase.class);

    // Clean up the solution from the construction heuristic phase.
    @Override
    public void changeWorkingSolution(ScoreDirector<Schedule> scoreDirector) {
        ConstraintStreamScoreDirector<Schedule, HardMediumSoftScore> constraintStreamScoreDirector =
                (ConstraintStreamScoreDirector<Schedule, HardMediumSoftScore>) scoreDirector;
        constraintStreamScoreDirector.getIndictmentMap();
    }
}
I tried to trick Optaplanner into giving me access to the indictment map with a cast but no luck:
java.lang.IllegalStateException: When constraintMatchEnabled (false) is disabled in the constructor, this method should not be called.
Is there a way to easily locate the entities breaking the hard constraints some other way, or can I instruct the CH phase to assign offending entities to the dummy vehicle through configuration somehow? All I need is a feasible solution when entering the local search phase.
UPDATE:
It seems that if I implement my own phase completely, I get access to a score director which can give me the indictment map. However, I get stuck on
java.lang.IllegalArgumentException: Unknown PhaseConfig type: (org.acme.CleanUpPhaseConfig).
at org.optaplanner.core.impl.phase.PhaseFactory.create(PhaseFactory.java:52)
How can I get the phase factory to recognize my newly created phase?
If you use a CustomPhase, there is no need for a new Config. The existing CustomPhaseConfig accepts the CustomPhaseCommand implementation as part of its configuration.
Please refer to this section of the documentation.
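For illustration, a minimal sketch of how that looks in the solver config XML, assuming the CleanUpPhase class from your question (the org.acme package is taken from your stack trace):

<solver>
  ...
  <constructionHeuristic/>
  <customPhase>
    <!-- the existing config element that accepts your CustomPhaseCommand -->
    <customPhaseCommandClass>org.acme.CleanUpPhase</customPhaseCommandClass>
  </customPhase>
  <localSearch/>
</solver>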
However, the CustomPhase might not be a solution to your problem. You may run into the same issue in Local Search after your CustomPhase unless you make sure your constraints take the dummy vehicle into account. There is a chapter about overconstrained planning which describes two approaches: either making the planning variable nullable or using virtual values, such as your dummy vehicle. If you follow that chapter, you can avoid the CustomPhase altogether.
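A minimal sketch of the nullable-variable approach, assuming a Stop entity with a vehicle planning variable as in your domain (the names Stop, Vehicle and vehicleRange are assumptions):

@PlanningEntity
public class Stop {

    // null acts as the "unassigned" state, replacing the dummy vehicle
    @PlanningVariable(valueRangeProviderRefs = "vehicleRange", nullable = true)
    private Vehicle vehicle;

    ...
}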
Related
Suppose I have a slight variant of the cloud balancing problem, in which the Process has not just one weight, but a map of (positive) weights, such as
Map<Long, Long> groupMap = new HashMap<>();
where the key is specific to my domain and the value is the weight.
On the class Computer (still referring to the cloud balancing example) I have a shadow variable hist which is also a (Hash)Map<Long, Long>, and a custom listener updating hist:
public class HistListener implements VariableListener {

    @Override
    public void beforeVariableChanged(ScoreDirector scoreDirector, Object o) {
        Process p = (Process) o;
        if (p.getComputer() != null) {
            Computer kc = p.getComputer();
            // update the hist map: subtract this process's weights from its current computer
            scoreDirector.beforeVariableChanged(kc, "hist");
            for (Map.Entry<Long, Long> entry : p.getGroupMap().entrySet()) {
                kc.getHist().put(entry.getKey(), kc.getHist().get(entry.getKey()) - entry.getValue());
            }
            scoreDirector.afterVariableChanged(kc, "hist");
        }
    }
and pretty much the same for afterVariableChanged, just with reversed sign.
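For completeness, under that description the mirrored method would be something like this sketch (reversed sign, adding the process's weights to the newly assigned computer):

    @Override
    public void afterVariableChanged(ScoreDirector scoreDirector, Object o) {
        Process p = (Process) o;
        if (p.getComputer() != null) {
            Computer kc = p.getComputer();
            scoreDirector.beforeVariableChanged(kc, "hist");
            for (Map.Entry<Long, Long> entry : p.getGroupMap().entrySet()) {
                kc.getHist().put(entry.getKey(), kc.getHist().get(entry.getKey()) + entry.getValue());
            }
            scoreDirector.afterVariableChanged(kc, "hist");
        }
    }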
I annotate both Process and Computer as @PlanningEntity and register them in the solverConfig.
There are no constraints, so the solver should be able to assign the computers to the processes arbitrarily. As a result, I expect hist only to have natural numbers (incl. 0) as values.
When running it with <moveThreadCount>NONE</moveThreadCount>, this is indeed the case:
<"Computer"+computer.id: hist>
Computer0: {0=0, 1=0, 2=20, 3=0, 4=10, 5=20, 6=0, 7=10, 8=10, 9=20}
Computer1: {0=0, 1=10, 2=0, 3=0, 4=10, 5=0, 6=10, 7=0, 8=0, 9=0}
Computer2: {0=0, 1=0, 2=0, 3=0, 4=0, 5=0, 6=0, 7=0, 8=0, 9=0}
When running exactly the same code with <moveThreadCount>AUTO</moveThreadCount>, I partially get negative values in hist:
Computer0: {0=0, 1=-20, 2=30, 3=0, 4=-40, 5=50, 6=-10, 7=30, 8=40, 9=150}
Computer1: {0=0, 1=-40, 2=-20, 3=0, 4=-90, 5=-50, 6=-40, 7=-20, 8=-20, 9=-30}
Computer2: {0=0, 1=80, 2=-20, 3=0, 4=30, 5=-30, 6=50, 7=0, 8=-20, 9=-50}
This discrepancy disappears when I refactor the keys of groupMap on process and those of hist on computer as individual shadow variables.
The trace logs suggest a race condition, where several threads access hist simultaneously. (According to the Oracle docs, I only need a synchronizedMap implementation if the map is structurally changed, i.e., if keys are added or removed - I'm not doing that.)
The use of a Map as a shadow variable greatly enhances the flexibility of my solution, so it would be great if this were supported with multithreading. I know I could probably fix this very simple example with an appropriate ConstraintProvider, but my actual problem is much more complex than this and is not amenable to treatment with ConstraintProviders.
Question: Is it possible to have a Map based structure as a shadow variable in a multithreading context?
If it is not possible, I recommend adding a short note to the docs of optaplanner 8.29.0.Final (the version I'm using).
I had a look at questions regarding Lists as PlanningVariables in optaplanner, but I don't see how these questions relate to mine.
Is it possible to have a Map based structure as a shadow variable in a multithreading context?
Yes, because each move thread in a multithreading context internally has its own ScoreDirector and its own workingSolution. From the point of view of a shadow variable and that map, it's single-threaded.
What can mess this up?
Bad @PlanningIds in your dataset, so the Move.rebase() operations go wrong: duplicate IDs or missing IDs. OptaPlanner detects most of these. It is unlikely that this is your problem.
Incomplete planning cloning in your model. That's probably it. This will also cause issues you haven't seen yet in a single-threaded context, especially when the last working solution greatly differs from the last best found solution when the termination runs out. FULL_ASSERT should detect those, but they might not occur on every run...
Each move thread has its own workingSolution internally. That's not entirely true: they all have a planning clone of the original. But if the planning clone doesn't clone all of the data affected by the shadow variable, it's corrupted. In a multithreaded solving context this causes issues much faster.
Ok, this is getting complex. How do I solve this?
Experiment with adding a @DeepPlanningClone annotation on your Map field. However, making a field a shadow variable already implies deep planning cloning it automatically, IIRC. My guess is that the keys or values in that map need to be planning cloned too. Read the planning clone section in the docs.
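A minimal sketch of that experiment, assuming the Computer entity from the question (the shadow-variable wiring is omitted):

@PlanningEntity
public class Computer {

    // Experiment: force a deep planning clone so each move thread's
    // working solution gets its own copy of the map.
    @DeepPlanningClone
    private Map<Long, Long> hist = new HashMap<>();

    ...
}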
I have a @PlanningSolution class that has one field with a custom List implementation as its type.
When solving I run into the following issue (as described in the optaplanner documentation):
java.lang.IllegalStateException: The cloneCollectionClass (class java.util.ArrayList) created for originalCollectionClass (class Solution$1) is not assignable to the field's type (class CustomListImpl).
Maybe consider replacing the default SolutionCloner.
As this field has no impact on planning, can I prevent FieldAccessingSolutionCloner from trying to clone that particular field, e.g. by adding some annotation? I don't want to provide a complete custom SolutionCloner.
When inspecting the sources of FieldAccessingSolutionCloner I found out that I only needed to override the method retrieveCachedFields(...) or constructCloneCollection(...), so I tried to extend FieldAccessingSolutionCloner, but then I need a public no-args constructor, and I don't know how to initialise the field solutionDescriptor in that no-args constructor so that I can use my ExtendedFieldAccessingSolutionCloner as the solution cloner.
If the generic solution cloner decided to clone that List, there is probably a good reason for it to do so: one of the elements in that list probably has a reference to a planning entity or the planning solution - and therefore the entire list needs to be planning cloned.
If that's not the case, this is a bug in OptaPlanner. Please provide the source code of the class with that field, and of the CustomListImpl class too, so we can reproduce and fix it.
To supply a custom SolutionCloner, follow the docs, which show something like the example below (a simple case without chained variables, so it's easy to get right - solution cloning is notoriously difficult!).
@PlanningSolution(solutionCloner = VaccinationSolutionCloner.class)
public class VaccinationSolution {...}

public class VaccinationSolutionCloner implements SolutionCloner<VaccinationSolution> {

    @Override
    public VaccinationSolution cloneSolution(VaccinationSolution solution) {
        List<PersonAssignment> personAssignmentList = solution.getPersonAssignmentList();
        List<PersonAssignment> clonedPersonAssignmentList = new ArrayList<>(personAssignmentList.size());
        for (PersonAssignment personAssignment : personAssignmentList) {
            PersonAssignment clonedPersonAssignment = new PersonAssignment(personAssignment);
            clonedPersonAssignmentList.add(clonedPersonAssignment);
        }
        return new VaccinationSolution(solution.getVaccineTypeList(), solution.getVaccinationCenterList(),
                solution.getAppointmentList(), solution.getVaccinationSlotList(), clonedPersonAssignmentList,
                solution.getScore());
    }
}
I'm building an app that generates random sequences of musical notes and displays them to the user as musical notation. These sequences can be generated according to several parameters, including density and maximum consecutive notes of the same pitch.
Musical sequences are captured by a sequence object whose notes property is a simple string of notes such as "abcdaba".
My early attempts to generate random sequences involved a SequenceGenerator class that compiled random sequences using several private methods. This looks like a service to me. But I'm trying to honour the principle expressed in Domain-Driven Design (Evans 2003) to only use services where necessary and to prefer associating behaviour with domain objects.
So my question is:
Should the job of producing random sequences be taken care of by a public method on sequence itself (such as generateRandom()) or should it be kept separate?
I considered the possibility that my original design is more along the lines of a builder or factory pattern than a service, but the code for creating a random sequence is very different from the code for creating one from a supplied string of notes.
One concern I have with the method route is that generateRandom() as a method on sequence changes the content of sequence but isn't actually generating a new sequence object. This just feels wrong, but I can't express why.
I'm still getting my head around some of the core OO design principles, so any help is greatly appreciated.
Should the job of producing random sequences be taken care of by a public method on sequence itself (such as generateRandom()) or should it be kept separate?
I usually find that I get cleaner designs if I treat "random" the same way that I treat "time", or "I/O" -- as an input to the model, rather than as an aspect of the model itself.
If you don't consider time an input value, think about it until you do -- it is an important concept (John Carmack, 1998).
Within the constraints of DDD, that could either mean passing a "domain service" as an argument to your method, allowing your aggregate to invoke the service as needed, or it could mean having a method on the aggregate, so that the application can pass in random numbers when needed.
So any creation of a sequence would involve passing in some pattern or seed, but whether that is random or not is decided outside of the sequence itself?
Yes, exactly.
The creation of an object is not usually considered part of the logic for the object.
How you do that technically is a different matter. You could potentially use delegation. For example:
public interface NoteSequence {
    void play();
}

public final class LettersNoteSequence implements NoteSequence {

    public LettersNoteSequence(String letters) {
        ...
    }

    ...
}

public final class RandomNoteSequence implements NoteSequence {

    ...

    @Override
    public void play() {
        new LettersNoteSequence(generateRandomLetters()).play();
    }
}
This way you don't have to have a "service" or a "factory", but this is only one alternative; it may or may not fit your use-case.
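A hypothetical usage, to make the point concrete: whether the sequence is random is decided by which implementation the caller constructs, not inside the sequence itself.

NoteSequence sequence = new RandomNoteSequence(); // or new LettersNoteSequence("abcdaba")
sequence.play();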
TL;DR
How do you test a value object in isolation from its dependencies without stubbing or injecting them?
In Misko Hevery's blog post To “new” or not to “new”… he advocates the following (quoted from the blog post):
An Injectable class can ask for other Injectables in its constructor. (Sometimes I refer to Injectables as Service Objects, but that term is overloaded.) An Injectable can never ask for a non-Injectable (Newable) in its constructor.
Newables can ask for other Newables in their constructor, but not for Injectables. (Sometimes I refer to Newables as Value Objects, but again, the term is overloaded.)
Now if I have a Quantity value object like this:
class Quantity
{
    private $quantity = 0;

    public function __construct($quantity)
    {
        $intValidator = new Zend_Validate_Int();
        if (!$intValidator->isValid($quantity)) {
            throw new Exception("Quantity must be an integer.");
        }
        $gtValidator = new Zend_Validate_GreaterThan(0);
        if (!$gtValidator->isValid($quantity)) {
            throw new Exception("Quantity must be greater than zero.");
        }
        $this->quantity = $quantity;
    }
}
My Quantity value object depends on at least 2 validators for its proper construction. Normally I would have injected those validators through the constructor, so that I can stub them during testing.
However, according to Misko a newable shouldn't ask for injectables in its constructor. Frankly a Quantity object that looks like this
$quantity=new Quantity(1,$intValidator,$gtValidator); looks really awkward.
Using a dependency injection framework to create a value object is even more awkward. However, now my dependencies are hard-coded in the Quantity constructor, and I have no way to alter them if the business logic changes.
How do you design the value object properly for testing and adherence to the separation between injectables and newables?
Notes:
This is just a very, very simplified example. My real object may have serious logic in it that may use other dependencies as well.
I used a PHP example just for illustration. Answers in other languages are appreciated.
A Value Object should only contain primitive values (integers, strings, boolean flags, other Value Objects, etc.).
Often, it would be best to let the Value Object itself protect its invariants. In the Quantity example you supply, it could easily do that by checking the incoming value without relying on external dependencies. However, I realize that you write
This is just a very, very simplified example. My real object may have serious logic in it that may use other dependencies as well.
So, while I'm going to outline a solution based on the Quantity example, keep in mind that it looks overly complex because the validation logic is so simple here.
Since you also write
I used a PHP example just for illustration. Answers in other languages are appreciated.
I'm going to answer in F#.
If you have external validation dependencies, but still want to retain Quantity as a Value Object, you'll need to decouple the validation logic from the Value Object.
One way to do that is to define an interface for validation:
type IQuantityValidator =
    abstract Validate : decimal -> unit
In this case, I patterned the Validate method on the OP example, which throws exceptions upon validation failures. This means that if the Validate method doesn't throw an exception, all is good. This is the reason the method returns unit.
(If I hadn't decided to pattern this interface on the OP, I'd have preferred using the Specification pattern instead; if so, I'd instead have declared the Validate method as decimal -> bool.)
The IQuantityValidator interface enables you to introduce a Composite:
type CompositeQuantityValidator(validators : IQuantityValidator list) =
    interface IQuantityValidator with
        member this.Validate value =
            validators
            |> List.iter (fun validator -> validator.Validate value)
This Composite simply iterates through other IQuantityValidator instances and invokes their Validate method. This enables you to compose arbitrarily complex validator graphs.
One leaf validator could be:
open System

type IntegerValidator() =
    interface IQuantityValidator with
        member this.Validate value =
            if value % 1m <> 0m
            then
                raise (
                    ArgumentOutOfRangeException(
                        "value",
                        "Quantity must be an integer."))
Another one could be:
type GreaterThanValidator(boundary) =
    interface IQuantityValidator with
        member this.Validate value =
            if value <= boundary
            then
                raise (
                    ArgumentOutOfRangeException(
                        "value",
                        "Quantity must be greater than zero."))
Notice that the GreaterThanValidator class takes a dependency via its constructor. In this case, boundary is just a decimal, so it's a Primitive Dependency, but it could just as well have been a polymorphic dependency (a.k.a. a Service).
You can now compose your own validator from these building blocks:
let myValidator =
    CompositeQuantityValidator(
        [ IntegerValidator() :> IQuantityValidator
          GreaterThanValidator(0m) :> IQuantityValidator ])
When you invoke myValidator with e.g. 9m or 42m, it returns without errors, but if you invoke it with e.g. 9.8m, 0m or -1m it throws the appropriate exception.
If you want to build something a bit more complicated than a decimal, you can introduce a Factory, and compose the Factory with the appropriate validator.
Since Quantity is very simple here, we can just define it as a type alias on decimal:
type Quantity = decimal
A Factory might look like this:
type QuantityFactory(validator : IQuantityValidator) =
    member this.Create value : Quantity =
        validator.Validate value
        value
You can now compose a QuantityFactory instance with your validator of choice:
let factory = QuantityFactory(myValidator)
which will let you supply decimal values as input, and get (validated) Quantity values as output.
These calls succeed:
let x = factory.Create 9m
let y = factory.Create 42m
while these throw appropriate exceptions:
let a = factory.Create 9.8m
let b = factory.Create 0m
let c = factory.Create (-1m)
Now, all of this is very complex given the simple nature of the example domain, but as the problem domain grows more complex, complex is better than complicated.
Avoid value types with dependencies on non-value types. Also avoid constructors that perform validations and throw exceptions. In your example I'd have a factory type that validates and creates quantities.
Your scenario can also be applied to entities. There are cases where an entity requires some dependency in order to perform some behaviour. As far as I can tell, the most popular mechanism to use is double dispatch.
I'll use C# for my examples.
In your case you could have something like this:
public void Validate(IQuantityValidator validator)
As other answers have noted a value object is typically simple enough to perform its invariant checking in the constructor. An e-mail value object would be a good example as an e-mail has a very specific structure.
Something a bit more complex could be an OrderLine where we need to determine, totally hypothetical, whether it is, say, taxable:
public bool IsTaxable(ITaxableService service)
In the article you reference, I would assert that the 'newable' relates quite a bit to the 'transient' life cycle that we find in DI containers, as we are interested in specific instances. However, when we need to inject specific values, the transient registration does not really help. This is the case for entities, where each is a new instance but has very different state. A repository would hydrate the object, but it could just as well use a factory.
The 'true' dependencies typically have a 'singleton' life-cycle.
So for the 'newable' instances, a factory could be used if you would like to perform validation upon construction, by having the factory call the relevant validation method on your value object using the injected validator dependency, as Mark Seemann has mentioned.
This gives you the freedom to still test in isolation without coupling to a specific implementation in your constructor.
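A rough Java sketch of that arrangement (the examples above use C#; all names here are hypothetical):

public final class QuantityFactory {

    private final QuantityValidator validator; // injected once, 'singleton' life-cycle

    public QuantityFactory(QuantityValidator validator) {
        this.validator = validator;
    }

    public Quantity create(int value) {
        Quantity quantity = new Quantity(value); // newable: no injectables in its constructor
        quantity.validate(validator);            // double dispatch: the value object receives the service
        return quantity;
    }
}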
Just a slightly different angle on what has already been answered. Hope it helps :)
Does it affect the time it takes to load the application, or are there any other issues in doing so?
The question is vague on what "long" means. Here are some possible interpretations:
Interpretation #1: The constructor has many parameters
Constructors with many parameters can lead to poor readability, and better alternatives exist.
Here's a quote from Effective Java 2nd Edition, Item 2: Consider a builder pattern when faced with many constructor parameters:
Traditionally, programmers have used the telescoping constructor pattern, in which you provide a constructor with only the required parameters, another with a single optional parameter, a third with two optional parameters, and so on...
The telescoping constructor pattern is essentially something like this:
public class Telescope {
    final String name;
    final int levels;
    final boolean isAdjustable;

    public Telescope(String name) {
        this(name, 5);
    }

    public Telescope(String name, int levels) {
        this(name, levels, false);
    }

    public Telescope(String name, int levels, boolean isAdjustable) {
        this.name = name;
        this.levels = levels;
        this.isAdjustable = isAdjustable;
    }
}
And now you can do any of the following:
new Telescope("X/1999");
new Telescope("X/1999", 13);
new Telescope("X/1999", 13, true);
You can't, however, currently set only the name and isAdjustable while leaving levels at its default. You can provide more constructor overloads, but obviously their number would explode as the number of parameters grows, and you may even have multiple boolean and int arguments, which would really make a mess of things.
As you can see, this isn't a pleasant pattern to write, and even less pleasant to use (What does "true" mean here? What's 13?).
Bloch recommends using a builder pattern, which would allow you to write something like this instead:
Telescope telly = new Telescope.Builder("X/1999").setAdjustable(true).build();
Note that now the parameters are named, and you can set them in any order you want, and you can skip the ones that you want to keep at default values. This is certainly much better than telescoping constructors, especially when there's a huge number of parameters that belong to many of the same types.
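For reference, a minimal sketch of what such a Builder could look like for the Telescope above (not taken from the book; the defaults are assumptions):

public class Telescope {
    final String name;
    final int levels;
    final boolean isAdjustable;

    private Telescope(Builder builder) {
        this.name = builder.name;
        this.levels = builder.levels;
        this.isAdjustable = builder.isAdjustable;
    }

    public static class Builder {
        private final String name;        // required parameter
        private int levels = 5;           // optional, with default
        private boolean isAdjustable = false;

        public Builder(String name) {
            this.name = name;
        }

        public Builder setLevels(int levels) {
            this.levels = levels;
            return this;
        }

        public Builder setAdjustable(boolean isAdjustable) {
            this.isAdjustable = isAdjustable;
            return this;
        }

        public Telescope build() {
            return new Telescope(this);
        }
    }
}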
See also
Wikipedia/Builder pattern
Effective Java 2nd Edition, Item 2: Consider a builder pattern when faced with many constructor parameters (excerpt online)
Related questions
When would you use the Builder Pattern?
Is this a well known design pattern? What is its name?
Interpretation #2: The constructor does a lot of work that costs time
If the work must be done at construction time, then doing it in the constructor or in a helper method doesn't really make too much of a difference. When a constructor delegates work to a helper method, however, make sure that it's not overridable, because that could lead to a lot of problems.
Here's some quote from Effective Java 2nd Edition, Item 17: Design and document for inheritance, or else prohibit it:
There are a few more restrictions that a class must obey to allow inheritance. Constructors must not invoke overridable methods, directly or indirectly. If you violate this rule, program failure will result. The superclass constructor runs before the subclass constructor, so the overriding method in the subclass will be invoked before the subclass constructor has run. If the overriding method depends on any initialization performed by the subclass constructor, the method will not behave as expected.
Here's an example to illustrate:
public class ConstructorCallsOverride {

    public static void main(String[] args) {

        abstract class Base {
            Base() {
                overrideMe();
            }

            abstract void overrideMe();
        }

        class Child extends Base {
            final int x;

            Child(int x) {
                this.x = x;
            }

            @Override
            void overrideMe() {
                System.out.println(x);
            }
        }

        new Child(42); // prints "0"
    }
}
Here, when the Base constructor calls overrideMe, Child has not finished initializing the final int x, and the method prints the wrong value. This will almost certainly lead to bugs and errors.
Interpretation #3: The constructor does a lot of work that can be deferred
The construction of an object can be made faster when some work is deferred to when it's actually needed; this is called lazy initialization. As an example, when a String is constructed, it does not actually compute its hash code. It only does it when the hash code is first required, and then it will cache it (since strings are immutable, this value will not change).
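A minimal sketch of that caching idiom (a hypothetical class; java.lang.String's real implementation differs in details):

public final class Name {
    private final String value;
    private int hash; // 0 means "not computed yet", as in java.lang.String

    public Name(String value) {
        this.value = value;
    }

    @Override
    public int hashCode() {
        int h = hash;
        if (h == 0) {            // compute lazily on first use
            h = value.hashCode();
            hash = h;            // cache it; safe because value is immutable
        }
        return h;
    }
}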
However, consider Effective Java 2nd Edition, Item 71: Use lazy initialization judiciously. Lazy initialization can lead to subtle bugs, and it doesn't always yield improved performance that justifies the added complexity. Do not prematurely optimize.
Constructors are a little special in that an unhandled exception in a constructor may have weird side effects. Without seeing your code, I would assume that a long constructor increases the risk of exceptions. I would make the constructor as simple as needed and use other methods to do the rest, in order to provide better error handling.
The biggest disadvantage is probably the same as writing any other long function -- that it can get complex and difficult to understand.
The rest is going to vary. First of all, length and execution time don't necessarily correlate -- you could have a single line (e.g., function call) that took several seconds to complete (e.g., connect to a server) or lots of code that executed entirely within the CPU and finished quickly.
Startup time would (obviously) only be affected by constructors that were/are invoked during startup. I haven't had an issue with this in any code I've written (at all recently anyway), but I've seen code that did. On some types of embedded systems (for one example) you really want to avoid creating and destroying objects during normal use, so you create almost everything statically during bootup. Once it's running, you can devote all the processor time to getting the real work done.
A constructor is just another function. It takes very long functions called many times to make a program slow, so if the constructor is only called once, it usually won't matter how much code is inside.
It affects the time it takes to construct that object, naturally, but no more than having an empty constructor and calling methods to do that work instead. It has no effect on the application load time.
In the case of a copy constructor (in C++), if the parameter is not taken by reference, then passing the argument by value would itself require a call to the copy constructor. Each such call creates a new object and invokes the copy constructor again, so the recursion never terminates: it fills up memory and then an error is reported. If we pass by reference instead, no new object is created to hold the argument, and no recursion takes place.
I would avoid doing anything in your constructor that isn't absolutely necessary. Initialize your variables in there, and try not to do much else. Additional functionality should reside in separate functions that you call only if you need to.