Liskov substitution principle or encapsulation violation - oop

In this post I want to show you a small code example with several JS classes and ask whether this code is acceptable under the Liskov Substitution Principle (LSP) or whether it violates encapsulation.
The _framesMonitor variable in this example is an instance of a class from a third-party library, vqt, that we use inside the Job class. _framesMonitor.stopListen() can throw exceptions, in particular vqt.Errors.ProcessExitError.
In the example below, is it okay to expose the vqt.Errors.ProcessExitError type to the JobsManager class (and acceptable under the LSP), or does it violate encapsulation by revealing inner implementation details?
// Job.js
class Job {
    constructor() {
        // FramesMonitor comes from the third-party vqt library
        this._framesMonitor = new FramesMonitor();
    }

    async stop() {
        await this._framesMonitor.stopListen();
    }
}

// JobsManager.js
class JobsManager {
    async deleteJob(job) {
        try {
            await job.stop();
        } catch (err) {
            // vqt.Errors.ProcessExitError ends up here
        }
    }
}

In this case Job is a concrete class and JobsManager depends on it directly, so it is allowed to know the specifics of Job as long as your intent is to have only one kind of Job.
However, if you conceptually plan to have many different Job implementations (whether the language supports explicit interfaces or not), then ideally JobsManager should treat all Jobs generically and do its best not to depend on any Job-specific details.
Therefore, JobsManager may catch any error, but ideally shouldn't drive its workflow based on arbitrary error details. If JobsManager must know some level of detail to issue the appropriate compensating actions, then you should try to come up with a normalized JobFailureError (or even a return value) that provides the necessary information (it could also carry the low-level error cause for logging purposes).
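For illustration, here is a minimal sketch in TypeScript with hypothetical names (JobFailureError and the FramesMonitor/ProcessExitError stand-ins are assumptions for the sketch, not part of vqt):

// Stand-ins for the question's third-party pieces (assumptions for this sketch).
class ProcessExitError extends Error {}  // plays the role of vqt.Errors.ProcessExitError
class FramesMonitor {
    async stopListen(): Promise<void> {
        throw new ProcessExitError('process exited');
    }
}

// Normalized error that JobsManager is allowed to know about.
class JobFailureError extends Error {
    constructor(message: string, public readonly reason?: Error) {
        super(message);
        this.name = 'JobFailureError';
    }
}

class Job {
    private _framesMonitor = new FramesMonitor();

    async stop(): Promise<void> {
        try {
            await this._framesMonitor.stopListen();
        } catch (err) {
            // Translate the vqt-specific error; keep the original for logging.
            throw new JobFailureError('job failed to stop cleanly', err as Error);
        }
    }
}

JobsManager can then catch JobFailureError without knowing anything about vqt.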
Finally, if you somehow need special-case handling for every kind of job that cannot be normalized, then it becomes a question of trade-offs.
To keep error handling out of Job's concerns you could allow the registration of special job-handling strategies with the JobsManager. Although this approach adheres to the Open-Closed Principle (OCP), it is also an implementation of the Service Locator, which some see as an anti-pattern. When adding new Job implementations you would also need to remember to add a corresponding handler.
E.g.
jobManager.registerHandler('some-job-type', function (job) {
    // Special handling code for jobs of type 'some-job-type'
});
If you do not mind some level of coupling between the concept of an error handler and a Job, then you could do something like new SomeJob(errorHandler), or even have SomeJob couple to a specific handler.
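A rough sketch of that injection option, with hypothetical names (JobErrorHandler and SomeJob are illustrative, not from any library):

type JobErrorHandler = (err: Error) => void;

class SomeJob {
    constructor(private readonly onError: JobErrorHandler) {}

    async stop(): Promise<void> {
        try {
            // ... stop this job's internals here ...
        } catch (err) {
            this.onError(err as Error);  // the job does not decide how errors are handled
        }
    }
}

// Usage: the caller supplies the handling strategy.
const someJob = new SomeJob((err) => console.error('some-job-type failed to stop:', err.message));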
I'd usually go for the Service Locator approach here, but I'm not sure we can say it's the best choice in every similar scenario.
For instance, if you were using a statically typed language, double-dispatch techniques or even pattern matching (if available) might be better, because you would get compile-time feedback.

Related

Chain of Responsibility, what exactly does this pattern mean?

I am thinking about the chain of responsibility pattern, and cannot understand a few things:
Can handlers be conditional? Like if (foo) { callHandlerA() } else { callHandlerB() }, or must the chain be linear (i.e., always call the next handler if the current one cannot handle the request)?
If a handler has handled a request, can it break the call chain? It seems like yes, because that is also a form of handling: instead of doing something, the handler simply does nothing further.
In general, it looks to me like the same thing as calling handlers directly, i.e.
function doItHandlingMaster(data) {
    if (handlerA(data)) {
        return;
    }
    if (handlerB(data)) {
        return;
    }
    if (handlerC(data)) {
        return;
    }
    // ... and so on
}
so the main ideas of the pattern are flexibility (we can define steps once and reuse them) and loose coupling (we only know about the entrance, and do not know all the handling steps), right?
I saw this pattern many times (Express, DOM, Angular, almost everywhere), and want to understand it clearly.
Thanks!
Choosing handlers inside another handler based on conditions is bad practice, because it creates hard dependencies between the different handlers.
Yes, once a handler has handled a request, the chain of calls may be broken.
You are right. The main ideas of the Chain of Responsibility pattern are flexibility and keeping coupling between the different classes low.
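A minimal sketch of the pattern in TypeScript (all names are illustrative): each handler decides whether it takes the request, and the first one that does stops the chain.

interface Handler {
    handle(request: string): boolean;  // true means "handled", stop the chain
}

class Chain {
    constructor(private readonly handlers: Handler[]) {}

    dispatch(request: string): void {
        for (const handler of this.handlers) {
            if (handler.handle(request)) {
                return;  // the request was handled, so the chain is broken here
            }
        }
    }
}

// The caller only knows the entrance (the chain), not the individual steps.
const chain = new Chain([
    { handle: (r) => r.startsWith('a') },   // handlerA
    { handle: (r) => r.startsWith('b') },   // handlerB
    {
        handle: (r) => {
            console.log('no specific handler took:', r);
            return true;  // default handler: always handles
        },
    },
]);
chain.dispatch('b-request');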

Business logic in command design pattern

I use the command design pattern to deal with player actions.
For example, below is the command that handles a dice roll.
interface ICommand
{
    public function execute(Game $game) : void;
}

class RollDiceCommand implements ICommand
{
    private $player;

    public function __construct(Player $player)
    {
        $this->player = $player;
    }

    public function execute(Game $game) : void
    {
        $dice = DiceFacade::roll(new NumberGenerator());

        // Currently the business logic goes here
        if ($dice->isDouble()) {
            $this->player->incrementDoubleCount();

            if ($this->player->getDoubleCount() === 3) {
                $command = new GoToJailCommand();
                $command->execute($game);
                return;
            }
        } else {
            // The next player's turn
            $game->nextPlayer();
        }

        $command = new MoveForwardCommand($this->player, $dice->getValue());
        $command->execute($game);
        // ...
    }
}
Is it a good idea to put additional business logic in the command?
Should I call another command directly from the roll command, or should I avoid that? The idea of raising an event from the command seems better to me. What do you think?
Thank you!
The most commonly used form of the Command pattern in DDD (the one from CQRS) is not the same as the GoF Command pattern. It is just a DTO with no execute() method.
In DDD the applicative logic is in the command handler/application service, not the command itself.
Note that a large part of the logic you currently have in execute() is domain logic and shouldn't even be in a command handler. Go to jail, Next player, Move forward - these look like domain rules that should be in the Domain layer.
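A rough sketch of that split (in TypeScript rather than PHP, with hypothetical names): the command carries data only, the handler orchestrates, and the rules live in the domain objects.

class Dice {
    constructor(public readonly value: number, public readonly isDouble: boolean) {}

    static roll(): Dice {
        const a = 1 + Math.floor(Math.random() * 6);
        const b = 1 + Math.floor(Math.random() * 6);
        return new Dice(a + b, a === b);
    }
}

class Game {
    // Domain rules (doubles, jail, moving, next player) live here, not in the command.
    takeTurn(playerId: string, dice: Dice): void {
        // ...
    }
}

// CQRS-style command: a plain DTO with no execute() method.
class RollDiceCommand {
    constructor(public readonly playerId: string) {}
}

// Application service / command handler: thin orchestration over the domain.
class RollDiceHandler {
    constructor(private readonly game: Game) {}

    handle(command: RollDiceCommand): void {
        this.game.takeTurn(command.playerId, Dice.roll());
    }
}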
Should I call another command directly from the roll command, or should I avoid that? The idea of raising an event from the command seems better to me. What do you think?
It depends on whether you consider the follow-up move to be part of the main action or an indirect consequence. Indirect commands are often executed as part of a separate transaction.
The Command pattern is useful when you want to encapsulate requests as an object. That allows you to pass parameters to them when they're instantiated, to group them together (executing them as a block), to log them, and even undo them.
I'm not seeing (yet) a reason you need this.
Is it a good idea to put additional business logic in the command?
One reason it's bad to store business logic (in the presentation layer) is that if you want to add another version of your application (say, a mobile version), you have to repeat the business logic code in the new application (in its presentation layer). Also, it's harder to maintain and test the business logic, because it's not really very well encapsulated (it's spread out).
Here, however, you've encapsulated some of it in a Command object, which may not be bad (depending on "where" you see this code). For the game of Monopoly, will you have multiple clients (different presentation layers), or is this a pet project (one implementation)? If there are going to be different presentation layers, then it's best to keep the domain logic out of them. Nothing in your sample Command code (though I'm not good with PHP) looks too tied to presentation, so it's probably OK.
Generally, if you're trying to encapsulate domain logic, the GoF Façade pattern is useful. You'd have a Façade class in the domain layer that handles the high-level operations (e.g., rollAndMove($dice)). It seems you already use a Façade for the dice roll. Player could alternatively be the class that plays the role of the Façade, since the domain logic of taking a turn would be a reasonable responsibility for that class (IMO). Of course, if Player ends up with too many methods, it's perhaps better to use a separate class for the Façade.
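Sticking with the hypothetical Game/Dice stand-ins from the sketch above, such a Façade might look roughly like this:

class TurnFacade {
    constructor(private readonly game: Game) {}

    // One high-level domain operation that hides dice rules, doubles, jail and movement.
    rollAndMove(playerId: string): void {
        this.game.takeTurn(playerId, Dice.roll());
    }
}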
The idea of raising an event from the command seems better to me. What do you think about it?
I don't see a problem with combining both patterns, but perhaps you don't really need Command for what it's intended to be?
You're right that the execute() would be very short (just a call to the Façade's method). However, using a Command object allows you to save the state of the game (e.g., GoF Memento pattern) if you wanted to undo the command later, or, as stated above, to log the information in a standard way, etc. If you don't need those things, I would avoid using Command, as it adds complexity without serving the intent of the pattern.

What OOP patterns can be used to implement a process over multiple "step" classes?

In OOP everything is an object with its own attributes and methods. However, you often want to run a process that spans multiple steps that need to be run in sequence. For example, you might need to download an XML file, parse it, and run business actions accordingly. This includes at least three steps: downloading, unmarshalling, and interpreting the decoded request.
In a really bad design you would do all of this in one method. In a slightly better design you would put the single steps into methods or, much better, into new classes. Since you want to test and reuse the single classes, they shouldn't know about each other. In my case, a central control class runs them all in sequence, taking the output of one step and passing it to the next. I have noticed that such control-and-command classes tend to grow quickly and are not very flexible or extensible.
My question therefore is: what OOP patterns can be used to implement a business process and when to apply which one?
My research so far:
The mediator pattern seems to be what I'm using right now, but some definitions say it's only managing "peer" classes. I'm not sure it applies to a chain of isolated steps.
You could probably call it a strategy pattern when more than one of the aforementioned mediators is used. I guess this would solve the problem of the mediator not being very flexible.
Using events (probably related to the Chain of Responsibility pattern) I could make the single steps listen for special events and send different events. This way the pipeline is super-flexible, but also hard to follow and to control.
Chain of Responsibility is the best fit for this case. It is pretty much the definition of CoR.
If you are using Spring, you can consider an interesting Spring-based implementation of this pattern:
https://www.javacodegeeks.com/2012/11/chain-of-responsibility-using-spring-autowired-list.html
Without Spring, the implementation is very similar.
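For readers not using Spring, here is a rough plain-TypeScript sketch of the same wiring idea (all names are illustrative): each step is an isolated class, and the control class only chains them together.

interface Step {
    run(input: unknown): unknown;
}

class DownloadStep implements Step {
    run(filename: unknown): unknown { /* download the XML file */ return '<xml/>'; }
}
class UnmarshalStep implements Step {
    run(file: unknown): unknown { /* parse the XML */ return { action: 'noop' }; }
}
class InterpretStep implements Step {
    run(request: unknown): unknown { /* run business actions */ return request; }
}

class ProcessChain {
    constructor(private readonly steps: Step[]) {}

    run(input: unknown): unknown {
        // Pass the output of each step as the input of the next one.
        return this.steps.reduce((value, step) => step.run(value), input);
    }
}

// Usage: the steps are defined once and can be reused or reordered.
const result = new ProcessChain([new DownloadStep(), new UnmarshalStep(), new InterpretStep()]).run('report.xml');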
Is dependency injection not sufficient? It makes your code reusable and testable (as you requested), with no need for a complicated design pattern.
public final class SomeBusinessProcess {

    private final Server server;
    private final Marshaller marshaller;
    private final Codec codec;

    public SomeBusinessProcess(Server server, Marshaller marshaller, Codec codec) {
        this.server = server;
        this.marshaller = marshaller;
        this.codec = codec;
    }

    public Foo retrieve(String filename) {
        File f = server.download(filename);
        byte[] content = marshaller.unmarshal(f);
        return codec.decode(content);
    }
}
I believe that a Composite Command (a variation of the Command pattern) would fit what you describe. This approach is used frequently in Eclipse.
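A minimal sketch of that idea (hypothetical names, in TypeScript): a composite command is itself a command whose execute() runs its children in order.

interface Command {
    execute(): void;
}

class CompositeCommand implements Command {
    constructor(private readonly children: Command[]) {}

    execute(): void {
        for (const child of this.children) {
            child.execute();  // run each child command in sequence
        }
    }
}

// Usage: the whole process becomes a single Command.
const processCommand = new CompositeCommand([
    { execute: () => console.log('download') },
    { execute: () => console.log('unmarshal') },
    { execute: () => console.log('interpret') },
]);
processCommand.execute();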

What is aspect-oriented programming?

I understand object oriented programming, and have been writing OO programs for a long time. People seem to talk about aspect-oriented programming, but I've never really learned what it is or how to use it. What is the basic paradigm?
This question is related, but doesn't quite ask it:
Aspect-Oriented Programming vs. Object Oriented Programming
AOP addresses the problem of cross-cutting concerns, which would be any kind of code that is repeated in different methods and can't normally be completely refactored into its own module, like with logging or verification. So, with AOP you can leave that stuff out of the main code and define it vertically like so:
function mainProgram()
{
    var x = foo();
    doSomethingWith(x);
    return x;
}

aspect logging
{
    before (mainProgram is called):
    {
        log.Write("entering mainProgram");
    }
    after (mainProgram is called):
    {
        log.Write("exiting mainProgram with return value of "
                  + mainProgram.returnValue);
    }
}

aspect verification
{
    before (doSomethingWith is called):
    {
        if (doSomethingWith.arguments[0] == null)
        {
            throw NullArgumentException();
        }
        if (!doSomethingWith.caller.isAuthenticated)
        {
            throw SecurityException();
        }
    }
}
And then an aspect-weaver is used to compile the code into this:
function mainProgram()
{
    log.Write("entering mainProgram");
    var x = foo();
    if (x == null) throw NullArgumentException();
    if (!mainProgramIsAuthenticated()) throw SecurityException();
    doSomethingWith(x);
    log.Write("exiting mainProgram with return value of " + x);
    return x;
}
Unfortunately, it seems to be surprisingly difficult to make AOP really useful in a normal mid-large size organization. (Editor support, sense of control, the fact that you start with the not-so-important things leading to code-rot, people going home to their families, etc.)
I put my hopes in composite-oriented programming, which is becoming more and more realistic. It connects to many popular ideas and gives you something really cool.
Look at an up-and-coming implementation here: qi4j.org/
PS. Actually, I think that one of the beauties of AOP is also its Achilles heel: it's non-intrusive, letting people ignore it if they can, so it will be treated as a secondary concern in most organizations.
Copied from Spring in Action:
AOP is often defined as a technique that promotes separation of concerns in a software system. Systems are composed of several components, each responsible for a specific piece of functionality. But often these components also carry additional responsibilities beyond their core functionality. System services such as logging, transaction management, and security often find their way into components whose core responsibility is something else. These system services are commonly referred to as cross-cutting concerns because they tend to cut across multiple components in a system.
Copied from a duplicate for completeness (Einstein):
The classic examples are security and logging. Instead of writing code within your application to log the occurrence of x or check object z for security access control, there is a language contraption "out of band" of normal code which can systematically inject security or logging into routines that don't natively have them, in such a way that even though your code doesn't supply it, it's taken care of.
A more concrete example is the operating system providing access controls to a file. A software program does not need to check for access restrictions because the underlying system does that work for it.
If you think you need AOP, in my experience you actually need to invest more time and effort into appropriate metadata management within your system, with a focus on well-thought-out structural/systems design.
Copied from a duplicate for completeness (Buzzer):
Class and method attributes in .NET are a form of aspect-oriented programming. You decorate your classes/methods with attributes. Behind the scenes this adds code to your class/method that performs the particular functions of the attribute. For example, marking a class serializable allows it to be serialized automatically for storage or transmission to another system. Other attributes might mark certain properties as non-serializable, and these would be automatically omitted from the serialized object. Serialization is an aspect, implemented by other code in the system, and applied to your class by the application of a "configuration" attribute (decoration).
Here is an example of AOP that uses Spring AOP; it is quite easy to understand.
The Spring AOP (aspect-oriented programming) framework is used to modularize cross-cutting concerns in aspects. Put simply, it's just an interceptor that intercepts some processes; for example, when a method is executed, Spring AOP can hijack the executing method and add extra functionality before or after the method execution.
Reference: http://www.mkyong.com/spring/spring-aop-examples-advice/
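This is not Spring itself, but a small TypeScript sketch can illustrate the same interception idea: wrap a function so extra behavior runs before and after it, without touching its body (withLogging is a hypothetical helper).

function withLogging<A extends unknown[], R>(name: string, fn: (...args: A) => R): (...args: A) => R {
    return (...args: A): R => {
        console.log(`entering ${name}`);              // "before" advice
        const result = fn(...args);
        console.log(`exiting ${name} with`, result);  // "after" advice
        return result;
    };
}

// Usage: the original function stays free of the logging concern.
const add = (a: number, b: number) => a + b;
const loggedAdd = withLogging('add', add);
loggedAdd(2, 3);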
AOP is a way to better modularize your application for functionality that spans multiple boundaries. AOP is another way to encapsulate these features and follow the Single Responsibility Principle by moving these cross-cutting concerns (logging, error handling, etc.) out of the main components of your application. When used appropriately, AOP can lead to higher levels of maintainability and extensibility in your application over time.

When do you stop encapsulating?

I have some event handler on a boundary class that manages a persistence mechanism for a given generic transaction:
void MyBoundaryClass::MyEventHandler(...)
{
    //retrieve stuff from the UI
    //...

    //declare and initialize the transaction to persist
    SimpleTransaction myTransaction(.../*pass down stuff*/);

    //do some other checks
    //...

    //declare the transaction persistor
    TransactionPersistor myPersistor(myTransaction, .../*pass down connection to DB and other stuff*/);

    //persist the transaction
    try
    {
        myPersistor.Persist();
    }
    catch(...)
    {
        //handle errors
    }
}
Would it be better to have some kind of TransactionManager to wrap the SimpleTransaction and TransactionPersistor objects?
Is there any useful rule of thumb to understand if I need a further level of encapsulation?
At the moment the rule of thumb I follow is "if the method gets too big, do something about it". It is sometimes hard to find the right balance between procedural and object-oriented code when dealing with boundary event handlers.
Any opinion?
Cheers
Considering that:
the concept of encapsulation is about defining a container, and
object-oriented design is based on the concept of message passing (invocation of methods)
I would argue that the API is a good indication of the pertinence of a new high-level encapsulation (i.e., the definition of a new object).
If the services (i.e the API) offered by this new object are coherent, and are better exposed to the rest of the program when regrouped in one special object, then by all means, use a new object.
Otherwise, it is probably overkill.
Since you expose a public API by creating a new object, testing may be easier within that new object (plus a few mock objects) than creating many legacy objects in order to test those same operations.
In your case, if you want to test the transaction, you must actually test MyEventHandler of MyBoundaryClass, in order to retrieve data from the UI.
But if you define a TransactionManager, that gives you the opportunity to lower coupling of different architecture levels (GUI vs. data) present in MyBoundaryClass, and to export data management into a dedicated class.
Then you can test data persistence in independent test scenarios, focusing especially on limit values, database failures, non-nominal conditions, and so on.
Testing scenarios can help you refine the cohesion (great point mentioned by Daok) of your different objects. If your tests are simple and coherent, chances are that your objects have a well-defined service boundary.
Since it can be argued that coupling and cohesion are two cornerstones of OO programming, the cohesion of a new class like TransactionManager can be evaluated in terms of the set of actions it will perform.
Cohesive means that a certain class performs a set of closely related actions. A lack of cohesion, on the other hand, means that a class is performing several unrelated tasks. [...] the application software will eventually become unmanageable as more and more behaviors become scattered and end up in wrong places.
If you regroup behaviors otherwise implemented in several different places into your TransactionManager, it should be fine, provided that its public API represents clear steps of what a transaction involves and not "stuff about transactions" like various utility functions. A name in itself is not enough to judge the cohesiveness of a class; the combination of the name and its public API is needed.
For instance, one interesting aspect of a TransactionManager would be to completely encapsulate the notion of Transaction, which would :
become virtually unknown to the rest of the system, which would lower the coupling between the other classes and Transaction
reinforce the cohesiveness of TransactionManager by centering its API around transaction steps (like initTransaction(), persistTransaction(), ...), avoiding any getter or setter for any Transaction instance (see the sketch below)
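A rough sketch of such a TransactionManager (TypeScript rather than C++; the stand-in classes are assumptions that mirror the ones from the question):

// Minimal stand-ins so the sketch is self-contained; the real classes come from the question.
type TransactionData = Record<string, unknown>;
class DbConnection {}
class SimpleTransaction { constructor(public readonly data: TransactionData) {} }
class TransactionPersistor {
    constructor(private readonly transaction: SimpleTransaction, private readonly connection: DbConnection) {}
    persist(): void { /* write the transaction to the database */ }
}

// The manager exposes transaction-centric steps and hides both classes above.
class TransactionManager {
    private transaction?: SimpleTransaction;

    constructor(private readonly connection: DbConnection) {}

    initTransaction(data: TransactionData): void {
        this.transaction = new SimpleTransaction(data);  // never exposed to callers
    }

    persistTransaction(): void {
        if (!this.transaction) {
            throw new Error('initTransaction() must be called first');
        }
        new TransactionPersistor(this.transaction, this.connection).persist();
    }
}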
Elaborating on VonC's suggestion, consider the following guidelines:
If you expect to invoke the same functions elsewhere, in the same way, it's reasonable to encapsulate them in a new object.
If one function (or one object) provides a set of facilities that are useful individually, it's reasonable to refactor it into smaller components.
VonC's point about the API is an excellent litmus test: create effective interfaces, and the objects often become apparent.
The level of encapsulation should be directly linked to the cohesion of your object. Your object should do a single task and encapsulate all of its behaviors and properties; otherwise it should be divided into multiple classes.
A rule of thumb is when it's time to test your object: if you are doing unit testing and you realize that you are testing multiple different things (not in the same area of action), then you might try to split it up.
For your case, I would encapsulate with your idea of a "TransactionManager". This way the "TransactionManager" will handle how the transaction works, not "MyBoundaryClass".