I am thinking about the chain of responsibility pattern, and cannot understand a few things:
Can handlers be conditional? Like if (foo) { callHandlerA() } else { callHandlerB() }, or must they be inline (i.e. always call the next handler if the current one cannot handle the request)?
If a handler has handled a request, can it break the call chain? It seems like yes, because stopping is also a way of handling: instead of doing something, it just does nothing.
In general, I see it is the same thing as calling handlers directly, i.e.
function doItHandlingMaster(data) {
    if (handlerA(data)) {
        return;
    }
    if (handlerB(data)) {
        return;
    }
    if (handlerC(data)) {
        return;
    }
    // ... and so on
}
so the main ideas of the pattern are flexibility (we can define steps once and reuse them) and loose coupling (we only know about the entry point, and do not know all the handling steps), right?
I saw this pattern many times (Express, DOM, Angular, almost everywhere), and want to understand it clearly.
Thanks!
Choosing handlers inside another handler via conditions is bad practice, because it creates hard dependencies between the different handlers.
Yes, if a handler has handled a request, the chain of calls may be broken.
You are right. The main ideas of the chain of responsibility pattern are flexibility and keeping the coupling between the different classes low.
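To make this concrete, here is a minimal, hypothetical sketch of the pattern in JavaScript (the handler classes and the requests are invented for illustration): each handler either handles the request and stops the chain, or defers to the next one.

```javascript
// A base handler: links to the next handler and forwards by default.
class Handler {
  setNext(handler) {
    this.next = handler;
    return handler; // allows chaining: a.setNext(b).setNext(c)
  }
  handle(request) {
    if (this.next) return this.next.handle(request);
    return null; // end of chain: nobody handled the request
  }
}

// A concrete handler: handles even numbers, defers everything else.
class EvenHandler extends Handler {
  handle(request) {
    if (request % 2 === 0) return "even";
    return super.handle(request); // pass along the chain
  }
}

// Another concrete handler: handles negative numbers.
class NegativeHandler extends Handler {
  handle(request) {
    if (request < 0) return "negative";
    return super.handle(request);
  }
}

const chain = new EvenHandler();
chain.setNext(new NegativeHandler());

console.log(chain.handle(4));  // "even" -- the chain stops at the first handler
console.log(chain.handle(-3)); // "negative" -- first handler deferred
console.log(chain.handle(7));  // null -- fell off the end of the chain
```

Note how a handler that handles a request simply does not call `super.handle()`, which is exactly the "breaking the chain" behaviour the question asks about.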
I use the command design pattern to deal with player actions.
For example, below is the command that handles a dice roll.
interface ICommand
{
    public function execute(Game $game) : void;
}

class RollDiceCommand implements ICommand
{
    private $player;

    public function __construct(Player $player)
    {
        $this->player = $player;
    }

    public function execute(Game $game) : void
    {
        $dice = DiceFacade::roll(new NumberGenerator());

        // Currently the business logic goes here
        if ($dice->isDouble()) {
            $this->player->incrementDoubleCount();
            if ($this->player->getDoubleCount() === 3) {
                $command = new GoToJailCommand();
                $command->execute($game);
                return;
            }
        } else {
            // The next player's turn
            $game->nextPlayer();
        }

        // Pass the rolled value through the constructor so that execute()
        // keeps matching the ICommand interface
        $command = new MoveForwardCommand($this->player, $dice->getValue());
        $command->execute($game);

        // ...
    }
}
Is it a good idea to store additional business logic in the command?
Should I call another command directly from the roll command, or do I need to avoid that? The idea of throwing an event in the command seems better to me. What do you think about it?
Thank you!
The most used form of the Command pattern in DDD (the one from CQRS) is not the same as the GoF Command pattern. It is just a DTO with no execute() method.
In DDD the applicative logic is in the command handler/application service, not the command itself.
Note that a large part of the logic you currently have in execute() is domain logic and shouldn't even be in a command handler. Go to jail, Next player, Move forward - these look like domain rules that should be in the Domain layer.
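As a rough sketch of that split (JavaScript stands in for PHP here, and all class names are illustrative): the command is a plain DTO, the handler does only application-level orchestration, and the "three doubles sends you to jail" rule lives on the domain object.

```javascript
// Command: just data, no execute()
class RollDice {
  constructor(playerId) {
    this.playerId = playerId;
  }
}

// Domain object owns the domain rule ("three doubles sends you to jail")
class Player {
  constructor(id) {
    this.id = id;
    this.doubleCount = 0;
    this.inJail = false;
  }
  registerRoll(isDouble) {
    if (!isDouble) { this.doubleCount = 0; return; }
    this.doubleCount += 1;
    if (this.doubleCount === 3) this.inJail = true;
  }
}

// Handler: looks up the player, rolls, delegates the rule to the domain
class RollDiceHandler {
  constructor(players, rollFn) {
    this.players = players; // Map<playerId, Player>
    this.rollFn = rollFn;   // injected so the handler is testable
  }
  handle(command) {
    const player = this.players.get(command.playerId);
    const { isDouble } = this.rollFn();
    player.registerRoll(isDouble);
    return player;
  }
}

// Usage with a deterministic roll for demonstration
const players = new Map([[1, new Player(1)]]);
const handler = new RollDiceHandler(players, () => ({ isDouble: true }));
handler.handle(new RollDice(1));
handler.handle(new RollDice(1));
const player = handler.handle(new RollDice(1));
console.log(player.inJail); // true after three consecutive doubles
```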
Should I call another command directly from the roll command or do I need to avoid it? The idea of throwing an event in the command seems better to me. What do you think about it?
It depends if you consider the followup move to be part of the main action or an indirect consequence. Indirect commands are often executed as part of a separate transaction.
The Command pattern is useful when you want to encapsulate requests as an object. That allows you to pass parameters to them when they're instantiated, to group them together (executing them as a block), to log them, and even undo them.
I'm not seeing (yet) a reason you need this.
Is it good idea to store an additional business logic in the command?
One reason it's bad to store business logic (in the presentation layer) is that if you want to add another version of your application (say, a mobile version), you have to repeat the business logic code in the new application (in its presentation layer). Also, it's harder to maintain and test the business logic, because it's not really very well encapsulated (it's spread out).
Here, however, you've encapsulated some of it in a Command object, which may not be bad (depending on "where" you see this code). For the game of Monopoly, will you have multiple clients (different presentation layers?) or is this a pet project (one implementation)? If there are going to be different presentation layers, then it's best to keep the domain logic out of them. There's nothing in your sample code (but I'm not good with PHP) with Command that looks too tied to presentation, so it's probably OK.
Generally, if you're trying to encapsulate domain logic, the GoF Façade pattern is useful. You'd have a Façade class in the domain layer that handles the high-level operations (e.g., rollAndMove($dice)). It seems you already use a Façade for the dice roll. Player could alternatively be the class that plays the role of the Façade, since the domain logic of taking a turn would be a reasonable responsibility for that class (IMO). Of course, if Player ends up with too many methods, it's perhaps better to use a separate class for the Façade.
The idea of throwing an event in the command seems better to me. What do you think about it?
I don't see a problem with combining both patterns, but perhaps you don't really need Command for what it's intended to be?
You're right that the execute() would be very short code (just call the Facade's method). However, using a Command object allows you to save the state of the game (e.g., GoF Memento pattern) if you wanted to undo the command later, or as stated above you could log the information in a standard way, etc. If you don't need those things, I would avoid using Command as it's adding complexity without the intent of the pattern.
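For instance, here is a hedged sketch (names invented, JavaScript standing in for PHP) of what Command buys you when you actually need undo: the command snapshots the state it is about to change, which is the Memento idea mentioned above in miniature.

```javascript
// A command that saves enough state to reverse itself.
class MoveForwardCommand {
  constructor(player, steps) {
    this.player = player;
    this.steps = steps;
  }
  execute() {
    this.previousPosition = this.player.position; // memento: saved state
    this.player.position += this.steps;
  }
  undo() {
    this.player.position = this.previousPosition;
  }
}

const player = { position: 0 };
const history = []; // executed commands, so any of them can be undone later

const cmd = new MoveForwardCommand(player, 5);
cmd.execute();
history.push(cmd);
console.log(player.position); // 5

history.pop().undo();
console.log(player.position); // 0 -- the move was rolled back
```

If you never need the history, the extra object is pure overhead, which is the answer's point.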
class Person {
    private state = "normal"; // or "cripple"

    run() {
        if (this.state === "normal") {
            console.log("run");
        } else {
            console.log("hobble");
        }
    }
}

// vs

abstract class AttemptRun {
    abstract run(): void;
}

class NormalRun extends AttemptRun {
    run() {
        console.log("run");
    }
}

class CrippleRun extends AttemptRun {
    run() {
        console.log("hobble");
    }
}

class Person {
    protected runAbility: AttemptRun;

    constructor(runAbility: AttemptRun) {
        this.runAbility = runAbility;
    }

    run() {
        this.runAbility.run();
    }
}
Assuming that I understood the concept, here is my question.
Why is polymorphism better than if/else logic? Because it seems to me you will still need a factory or another method to set the type of the person's ability. So if the logic is not here, it's going to be somewhere else.
Why is this repeated so much in the books I read, like Clean Code? It's listed as a code smell.
I feel like it can make unit tests a little bit easier, because then you only test the abilities and you don't need to test the other class that is using them. Is that all it has to offer?
An if/else is traded for a whole extra class and a factory? Hardly seems fair. Perhaps it's more work, but in the long run it will be better.
What are the weaknesses of each approach? Is this something you would do even for a small class? Basically, I don't know if I understand the concept, and then, assuming I do, how practical is it to use?
A wise developer might use it only when they need something specific. I don't know any of the specifics.
There's a couple of things about your simple example that don't demonstrate the benefits of the polymorphic approach.
In your case there's just one if statement (in run), just 2 variations of person, and the behaviour in each case is very similar (they both just log some text).
Consider a larger case where you've got more functionality, though. If you add an attemptToDance, you'd introduce a new if/else block; if you add another variation of person, all your existing functions need a new if or case. As you add more functionality you end up with many cases in many if/else blocks, and there's no way the compiler can verify for you that you haven't missed one of the person types in one of those if blocks.
Catching errors with unit tests is great, but choosing a design that makes the error impossible is even better - the compiler never misses things like that, and you know the compiler ran and worked (you hope the unit tests were run successfully - but you're never quite as certain)
If you have an abstract base class defining an interface that all types of person implement, then the compiler will tell you if you fail to implement one of the methods for one of the derived person classes.
In a larger, real case, the implementation of each method on each type of person can, and probably will, vary more than just outputting different text. If these stay in if blocks, all that different functionality ends up in one place; you have code that depends on many things at once, which makes testing and maintenance harder. If the person classes have state such that the methods interact, this complicates things even more, and polymorphism allows you to wrap up that behaviour in a class without that class needing to concern itself with the other types of person.
In the simple case the if/else version works, it just doesn't scale very well in many cases.
You may still need an if/else or switch in a factory method somewhere, but one switch that's just responsible for construction is easier to maintain than many switches or if blocks.
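A minimal sketch of that idea, reusing the NormalRun/CrippleRun names from the question (return values replace console.log so the behaviour is observable; the factory is the only place that branches):

```javascript
// The two ability classes: each knows only its own behaviour.
class NormalRun { run() { return "run"; } }
class CrippleRun { run() { return "hobble"; } }

// The single switch, responsible only for construction.
function createRunAbility(state) {
  switch (state) {
    case "normal": return new NormalRun();
    case "cripple": return new CrippleRun();
    default: throw new Error(`Unknown state: ${state}`);
  }
}

class Person {
  constructor(state) {
    this.runAbility = createRunAbility(state); // branching happens once, here
  }
  run() { return this.runAbility.run(); }      // no if/else anywhere else
}

console.log(new Person("normal").run());  // "run"
console.log(new Person("cripple").run()); // "hobble"
```

Adding a new person type means adding one class and one `case` in the factory, instead of hunting down every if/else block in the codebase.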
Using polymorphism instead of if-else statements has pros and cons.
PROs
1. OOP is a way to improve and promote the reuse of code. Using if/else statements instead of an abstract class/interface AttemptRun with many implementations, you could bump into a situation in which you need to add another class, e.g. Animal, and you'll need to rewrite all the cases it has in common with Person.
2. The use of polymorphism helps maintain the code by improving its readability. Very long if-else statements are tiring to read, and you have to remember what each branch is for; a class name, instead, directly reminds you of the purpose of the specific case.
CONs
1. Pro 2 is also a con: as you said, polymorphism forces you to test all the related classes that you're using to inject responsibilities and change behaviours.
2. Polymorphism has a performance cost: the correct implementation of a method is resolved at runtime through dynamic dispatch, so a plain list of if-else checks can be faster.
In this post I want to show you a little code example with several JS classes and ask you whether this code is okay with respect to the LSP, or whether it violates encapsulation principles.
The _framesMonitor variable in this example is an instance of a third-party library vqt that we use inside the Job class. _framesMonitor.stopListen() can throw exceptions, particularly vqt.Errors.ProcessExitError.
In the example below, is it okay to expose the vqt.Errors.ProcessExitError type to the JobsManager class (and is that okay because of the LSP), or does it violate encapsulation by revealing inner implementation details?
// Job.js
class Job {
    constructor() {
        this._framesMonitor = new FramesMonitor();
    }

    async stop() {
        await this._framesMonitor.stopListen();
    }
}

// JobsManager.js
class JobsManager {
    async deleteJob(job) {
        try {
            await job.stop();
        } catch (err) {
            // vqt.Errors.ProcessExitError here
        }
    }
}
In this case Job is a concrete class and JobsManager depends on it directly, so it is allowed to know the specifics of Job as long as your intent is to have only one kind of Job.
However, if you conceptually plan to have many different Job implementations (whether the language supports explicit interfaces or not), then ideally JobsManager should treat all Jobs generically and do its best not to depend on any Job details.
Therefore, JobsManager may catch any error, but ideally shouldn't drive its workflow based on arbitrary error details. If JobsManager must know some level of detail to issue the appropriate compensating actions, then you should try to come up with a normalized JobFailureError (or even a return value) that provides the necessary information (it could also carry the low-level error cause for logging purposes).
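A possible sketch of that normalization (JobFailureError, its fields, and the fake monitor are illustrative, not part of vqt): Job translates the library-specific error at its boundary, so JobsManager never sees vqt types.

```javascript
// A normalized error the rest of the system can depend on.
class JobFailureError extends Error {
  constructor(reason, cause) {
    super(`Job failed: ${reason}`);
    this.reason = reason; // normalized code, e.g. "process-exit"
    this.cause = cause;   // low-level error kept for logging only
  }
}

class Job {
  constructor(framesMonitor) {
    this._framesMonitor = framesMonitor;
  }
  async stop() {
    try {
      await this._framesMonitor.stopListen();
    } catch (err) {
      // translate the third-party error at the boundary
      throw new JobFailureError("process-exit", err);
    }
  }
}

// The manager side only depends on JobFailureError, never on vqt
async function deleteJob(job) {
  try {
    await job.stop();
    return "deleted";
  } catch (err) {
    if (err instanceof JobFailureError) return `failed: ${err.reason}`;
    throw err; // truly unexpected errors still propagate
  }
}
```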
Finally, if you somehow need special-case handling for every kind of job, and that cannot be normalized, then it becomes a question of trade-offs.
To keep error handling out of Job's concerns you could allow the registration of special job-handling strategies with the JobsManager. Although this approach adheres to the Open-Closed Principle (OCP), it is also an implementation of the Service Locator, which some see as an anti-pattern. When adding new Job implementations you would also need to remember to add a corresponding handler.
E.g.
jobManager.registerHandler('some-job-type', function (job) {
    // Special handling code for jobs of some-job-type
});
If you do not mind some level of coupling between the concept of an error handler and a Job then you could do something like new SomeJob(errorHandler) or even having SomeJob couple to a specific handler.
I'd usually go for the Service Locator approach here, but I'm not sure we can say it's an absolute best in every similar scenario.
For instance, if you were using a statically typed language, perhaps double-dispatch techniques, or even pattern matching if available, would be better, because you could get compile-time feedback.
This is most certainly a language agnostic question and one that has bothered me for quite some time now. An example will probably help me explain the dilemma I am facing:
Let us say we have a method which is responsible for reading a file, populating a collection with some objects (which store information from the file), and then returning the collection...something like the following:
public List<SomeObject> loadConfiguration(String filename);
Let us also say that at the time of implementing this method, it would seem infeasible for the application to continue if the returned collection was empty (a size of 0). Now, the question is: should this validation (checking for an empty collection and perhaps the subsequent throwing of an exception) be done within the method? Or should this method's sole responsibility be to perform the load of the file, ignoring the task of validation and allowing validation to be done at some later stage outside of the method?
I guess the general question is: is it better to decouple the validation from the actual task being performed by a method? Will this make things, in general, easier to change or build upon at a later stage? In the case of my example above, a different strategy might later be added to recover from an empty collection being returned from the 'loadConfiguration' method... this would be difficult if the validation (and resulting exception) were done in the method.
Perhaps I am being overly pedantic in the quest for some dogmatic answer, where instead it simply just relies on the context in which a method is being used. Anyhow, I would be very interested in seeing what others have to say regarding this.
Thanks all!
My recommendation is to stick to the single responsibility principle, which says, in a nutshell, that each object should have one purpose. In this instance, your method has three purposes, and four if you count the validation aspect.
Here's my recommendation on how to handle this and how to provide a large amount of flexibility for future updates.
Keep your loadConfiguration method.
Have it call a new method for reading the file.
Pass the previous method's return value to another method that loads the data into the collection.
Pass the object collection into some validation method.
Return the collection.
That takes 1 method initially and breaks it into 4, with one calling the other 3. This should allow you to change pieces without having any impact on the others.
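As an illustration of that split (JavaScript instead of Java, and everything here is invented for the example: the key=value "config" format, the function names, and the string standing in for real file I/O so the sketch stays self-contained):

```javascript
// Step 1: read the raw content (a stand-in for real file I/O).
function readFile(contents) {
  return contents;
}

// Step 2: parse the raw text into a collection of config entries.
function parseConfiguration(text) {
  return text
    .split("\n")
    .filter(line => line.includes("="))
    .map(line => {
      const [key, value] = line.split("=");
      return { key: key.trim(), value: value.trim() };
    });
}

// Step 3: validate the collection; an empty config is considered fatal here.
function validateConfiguration(entries) {
  if (entries.length === 0) {
    throw new Error("Configuration is empty");
  }
  return entries;
}

// Step 4: the orchestrator just composes the other three.
function loadConfiguration(contents) {
  return validateConfiguration(parseConfiguration(readFile(contents)));
}

console.log(loadConfiguration("host = localhost\nport = 8080"));
// [{ key: "host", value: "localhost" }, { key: "port", value: "8080" }]
```

Because validation is its own function, swapping in a different recovery strategy later means changing one small piece, not the whole load path.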
Hope this helps
I guess the general question is: is it better to decouple the validation from the actual task being performed by a method?
Yes. (At least if you really insist on answering such a general question – it’s always quite easy to find a counter-example.) If you keep both the parts of the solution separate, you can exchange, drop or reuse any of them. That’s a clear plus. Of course you must be careful not to jeopardize your object’s invariants by exposing the non-validating API, but I think you are aware of that. You’ll have to do some little extra typing, but that won’t hurt you.
I will answer your question with a question: do you want various validation methods for the product of your method?
This is the same as the 'constructor' issue: is it better to raise an exception during construction, or to initialize a void object and then call an 'init' method... you are sure to raise a debate here!
In general, I would recommend performing the validation as soon as possible: this is known as Fail Fast, which advocates that finding problems as soon as possible is better than delaying their detection, since diagnosis is immediate, while later on you would have to revert the whole flow...
If you're not convinced, think of it this way: do you really want to write 3 lines (load, parse, validate) every time you load a file? Well, that violates the DRY principle.
So, go agile there:
write your method with validation: it is responsible for loading a valid configuration (1)
if you ever need some parametrization, add it then (like a 'check' parameter, with a default value which preserves the old behavior of course)
(1) Of course, I don't advocate a single method to do all this at once... it's an organization matter: under the covers this method should call dedicated methods to organize the code :)
To deflect the question to a more basic one, each method should do as little as possible. So in your example, there should be a method that reads in the file, a method that extracts the necessary data from the file, another method to write that data to the collection, and another method that calls these methods. The validation can go in a separate method, or in one of the others, depending on where it makes the most sense.
private string ReadFile(string fileSpec)
{
    // code to read in the file and return its contents
}

private FileData GetFileData(string fileContents)
{
    // code to create a FileData struct from the file contents
}

private class FileDataCollection : Collection<FileData> { }

public void DoItAll(string fileSpec, FileDataCollection filDtaCol)
{
    filDtaCol.Add(GetFileData(ReadFile(fileSpec)));
}
Add validation and verification to each of the methods as appropriate.
You are designing an API and should not make any unnecessary assumptions about your client. A method should take only the information that it needs, return only the information requested, and only fail when it is unable to return a meaningful value.
So, with that in mind, if the configuration is loadable but empty, then returning an empty list seems correct to me. If your client has an application specific requirement to fail when provided an empty list, then it may do so, but future clients may not have that requirement. The loadConfiguration method itself should fail when it really fails, such as when it is unable to read or parse the file.
But you can continue to decouple your interface. For example, why must the configuration be stored in a file? Why can't I provide a URL, a row in a database, or a raw string containing the configuration data? Very few methods should take a file path as an argument since it binds them tightly to the local file system and makes them responsible for opening, reading, and closing files in addition to their core logic. Consider accepting an input stream as an alternative. Or if you want to allow for elaborate alternatives -- like data from a database -- consider accepting a ConfigurationReader interface or similar.
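A small sketch of that last idea (JavaScript rather than Java, with a plain reader function standing in for the ConfigurationReader interface; the JSON format and all names are assumptions for the example):

```javascript
// loadConfiguration accepts a reader function instead of a file path,
// so the same code works for files, URLs, databases, or raw strings.
function loadConfiguration(read) {
  const text = read(); // the caller decides where the data comes from
  return JSON.parse(text);
}

// A raw-string "reader"
const fromString = () => '{"host": "localhost", "port": 8080}';
console.log(loadConfiguration(fromString).port); // 8080

// A file-based reader would just be another function with the same shape:
// const fromFile = () => require("fs").readFileSync("config.json", "utf8");
```

The binding to the local file system disappears: it lives entirely in whichever reader the client chooses to pass in.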
Methods should be highly cohesive... that is, single-minded. So my opinion would be to separate the responsibilities as you have described. I sometimes feel tempted to say it is just a short method, so it does not matter... and then I regret it 1.5 weeks later.
I think this depends on the case: if you can think of a scenario where you would use this method and an empty returned list would be okay, then I would not put the validation inside the method. But for, e.g., a method which inserts data into a database where the data has to be validated (is the email address correct, has a name been specified, ...), it should be okay to put validation code inside the function and throw an exception.
Another alternative, not mentioned above, is to support Dependency Injection and have the method client inject a validator. This would allow the preservation of the "strong" Resource Acquisition Is Initialization principle, that is to say Any Object which Loads Successfully is Ready For Business (Matthieu's mention of Fail Fast is much the same notion).
It also allows a resource implementation class to create its own low-level validators which rely on the structure of the resource without exposing clients to implementation details unnecessarily, which can be useful when dealing with multiple disparate resource providers such as Ryan listed.
I have the habit of always validating property setters against bad data, even if there is nowhere in my program that would reasonably input bad data. My QA person doesn't want me throwing exceptions unless I can explain where they would occur. Should I be validating all properties? Is there a standard on this I could point to?
Example:
public void setName(String newName) {
    if (newName == null) {
        throw new IllegalArgumentException("Name cannot be null");
    }
    name = newName;
}
...
//Only call to setName(String)
t.setName("Jim");
You're enforcing your method's preconditions, which are an important part of its contract. There's nothing wrong with doing that, and it also serves as self-documenting code (if I read your method's code, I immediately see what I shouldn't pass to it), though asserts may be preferable for that.
Personally, I prefer using asserts in these wildly improbable cases, to avoid hard-to-read code while still making it clear that assumptions are being made in the function's algorithms.
But, of course, this is very much a judgement call that has to be made on a case-by-case basis. You can see it (and I have seen it) get completely out of hand - to the point where a simple function is turned into a tangle of if statements that pretty much never evaluate to true.
You are doing OK!
Whether it's a setter or a function, always validate and throw a meaningful exception. You never know when you'll need it, and you will...
In general I don't favor this practice. It's not that performing validation is bad, but rather that on things like simple setters it tends to create more clutter than it's worth in protection from bugs. I prefer using unit tests to ensure there are no bugs.
Well, it's been awhile since this question was posted but I'd like to give a different point of view on this topic.
Using the specific example you posted, IMHO you should be doing validation, but in a different way.
The key to achieving validation lies in the question itself. Think about it: you're dealing with names, not strings.
A string is a name when it's not null. We can also think of additional characteristics that make a string a name: it cannot be empty nor contain spaces.
Suppose you need to add those validation rules: if you stick with your approach, you'll end up cluttering your setter, as @SingleShot said.
Also, what would you do if more than one domain object has a setName setter?
Even if you use helper classes, as @dave does, code will still be duplicated: the calls to the helper instances.
Now, think for a moment: what if all the arguments you could ever receive in the setName method were valid? No validation would be needed, for sure.
I might sound too optimistic, but it can be done.
Remember you're dealing with names, so why not model the concept of a name?
Here's a vanilla, dummy implementation to show the idea:
public class Name {
    private final String value;

    public static Name from(String value) {
        if (value == null || value.isEmpty()) throw new ...
        if (value.contains(" ")) throw new ...
        return new Name(value);
    }

    private Name(String value) {
        this.value = value;
    }

    // other Name stuff goes here...
}
Because validation is happening at the moment of creation, you can only get valid Name instances. There's no way to create a Name instance from an "invalid" string.
Not only has the validation code been centralized, but the exceptions are also thrown in a context that has meaning to them (the creation of a Name instance).
You can read about great design principles in Hernan Wilkinson's "Design Principles Behind Patagonia" (the name example is taken from it). Be sure to check the ESUG 2010 video and the presentation slides.
Finally, I think you might find Jim Shore's "Fail Fast" article interesting.
It's a tradeoff. It's more code to write, review and maintain, but you'll probably find problems quicker if somehow a null name gets through.
I lean towards having it because eventually you find you do need it.
I used to have utility classes to keep the code to a minimum. So instead of
if (name == null) { throw new ...
you could have
Util.assertNotNull(name)
Then Java added asserts to the language, and you could do it more directly. And turn them off if you wanted.
It's well done, in my opinion. For null values throw IllegalArgumentException. For other kinds of validation you should consider using a customized exception hierarchy related to your domain objects.
I'm not aware of any documented standard that says 'validate all user input' but it's a very good idea. In the current version of the program it may not be possible to access this particular property with invalid data but that's not going to prevent it from happening in the future. All sorts of interesting things happen during maintenance. And hey, you never know when someone will reuse the class in another application that doesn't validate data before passing it in.