Does the Strategy Pattern violate the Single Responsibility Principle?

If the Single Responsibility Principle states that every object must have a single reason to change, and a strategy class implemented with the Strategy pattern (by definition) has multiple methods that can change for any number of reasons, does that mean it is impossible to implement the Strategy pattern without violating the SRP?

How so?
The Strategy pattern, if I recollect, is basically a way to decouple the logic/algorithm being used. So the Client has an m_IAlgorithm member. IAlgorithm should have a small set of methods, if not just one.
So the only reasons an AlgoImplementation class can change are:
if there is a change in the algorithm it implements (a change in its responsibility/behavior),
or if IAlgorithm changes, which would be rare unless you made a mistake in defining the interface. (That is a change in its own public interface, so I don't think it's a violation of the SRP.)
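A minimal sketch of that arrangement, with placeholder names (IAlgorithm, DoublingAlgorithm and Client are inventions for the illustration, not anything from the question):

    // The strategy: a single, narrow reason to change.
    public interface IAlgorithm
    {
        int Compute(int input);
    }

    // One concrete algorithm; it changes only if this algorithm changes.
    public class DoublingAlgorithm : IAlgorithm
    {
        public int Compute(int input) => input * 2;
    }

    // The client only knows the interface, so it is unaffected by
    // changes inside any particular implementation.
    public class Client
    {
        private readonly IAlgorithm _algorithm;

        public Client(IAlgorithm algorithm) => _algorithm = algorithm;

        public int Run(int input) => _algorithm.Compute(input);
    }

Each class above has exactly one plausible reason to change, which is the point being made.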

I actually see the opposite. The Strategy pattern lets you decouple two things: the (potential) algorithms used to get some job done, and the decision-making logic about which of those algorithms to use.
I'm not sure you'd rather have a class that both contains the conditional logic for choosing an algorithm and encloses those algorithms. Also, I'm not saying you've implied that, but you didn't give an example of where Strategy would break the SRP, or of what, in your opinion, a better design would be.

The context in which I am most familiar with the Single Responsibility Principle is overall system design, where it can complement the Strategy pattern with regard to the grouping of components within the system.
Use the Strategy pattern to define a set of algorithms that a client uses interchangeably; the Single Responsibility Principle can then be used to decide where to group the client and the algorithms it uses within the system. You don't want to have to disturb the code for algorithm A if your work is solely in algorithm B, and vice versa. For compiled languages this can have a significant impact on the complexity of the refactor, version, and deploy cycle. Why version and recompile the client and algorithms A, C, and D when the only changes needed were to algorithm B?
With this understanding of the Single Responsibility Principle, I don't see how having a class that implements the Strategy pattern violates the SRP. The purpose of the client class is to implement the Strategy pattern; that is the client's responsibility. The purpose of the algorithms is to implement the logic they are responsible for, and the Single Responsibility Principle says don't group them all together within the system, since they will be changing for different reasons. That's my $0.02.

Good point :) I guess it's more of a single responsibility guideline, which makes sense in many cases, but not in others, like the Strategy pattern.

Depends on the point of view IMO.
If you look at a strategy (a class, for example) as a mixture of algorithms/logic which is decided at run-time and which are quite different from one another, then yeah, it kind of violates the Single Responsibility Principle. But if you think of the strategy class as a decoupling entity which encapsulates the whole decision making, it becomes a single-job class: it decides which algorithm/logic is going to be used, nothing more, nothing less.

Related

Use-cases as objects in OOP

On our current project, we are trying to implement use cases as objects, e.g.:
    public class SaveSalesOrderUseCase {
        public void Execute(SalesOrderUseCaseModel salesOrderModel) {
            // implementation as the list of steps defined in the use case
        }
    }
Does it make any sense to design a system this way? How could it (positively and negatively) affect the design of the system with regard to OOP, the SOLID principles, the domain model, etc.?
Any experience?
Thank you.
Use cases are not intended to reflect the structure of a system; they are meant to reflect its behavior.
Does it make any sense to design system this way?
Not really; requirements frequently change.
This can result in the class no longer reflecting its intent.
How it could (positively and negatively) affect design of system with
regards of OOP, SOLID principles and domain model etc.?
It doesn't really make sense from an OOP perspective.
Use case titles are meant to describe the whole problem being solved.
This will lead to class names that don't reflect a single responsibility,
which can lead to all kinds of design flaws (especially down the road).
This approach may introduce duplicate code, and one use case can span different domain areas, which makes the code difficult to maintain.
Coming to OOP, I feel it is a deviation, as use cases are not really objects.
This approach may feel SOLID to you, but I feel it is not, as a use case can span several domains: in your case, SaveOrder may touch the Customer domain and Inventory, as well as a bit of Finance. Each of those has its own SOLID classes, and you should have a facade class, SaveOrder, to use them.
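Roughly what that facade shape could look like; CustomerService, InventoryService and their methods are hypothetical collaborators invented for the sketch, not classes from the question:

    public class SalesOrderUseCaseModel { }

    // Hypothetical domain services, each with its own single responsibility.
    public class CustomerService  { public void ReserveCredit(SalesOrderUseCaseModel order) { /* ... */ } }
    public class InventoryService { public void AllocateStock(SalesOrderUseCaseModel order) { /* ... */ } }

    // The use case acts as a facade: it only orchestrates the steps.
    public class SaveSalesOrderUseCase
    {
        private readonly CustomerService _customers = new CustomerService();
        private readonly InventoryService _inventory = new InventoryService();

        public void Execute(SalesOrderUseCaseModel salesOrderModel)
        {
            // each call below corresponds to one step of the use case scenario
            _customers.ReserveCredit(salesOrderModel);
            _inventory.AllocateStock(salesOrderModel);
        }
    }

The use-case class then has only one reason to change: the scenario itself, not the domain rules it delegates to.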
You'd probably do that if you wanted to "industrialize" the execution of use cases, for instance processing queues or batches of them. However, since a use case is closely related to a manual action taken by a human user, I can't see much use for that.
Another reason can be if you want to define operations on use cases in general: undo/redo, recording executions, etc.
Your use cases would then probably have to inherit from a common base class, though.
Otherwise, I'd say KISS/YAGNI applies and it is overkill.
I think it is perfectly acceptable to do this. There is an architectural pattern called DCI (Data, Context and Interaction) where a use case is implemented in the context and domain objects play a certain role to fulfill the scenario in that context.
One of the reasons is that the behaviour of the system shouldn't be scattered around in your application. It can make it difficult to understand what is going on.
DCI is not better or worse than other architectures, it is just an option that you can choose.

Multimethods vs Interfaces

Are there languages that idiomatically use both notions at the same time? When will that be necessary if ever? What are the pros and cons of each approach?
Background to the question:
I am a novice (with some python knowledge) trying to build a better picture of how multimethods and interfaces are meant to be used (in general).
I assume that they are not meant to be mixed: Either one declares available logic in terms of interfaces (and implements it as methods of the class) or one does it in terms of multimethods. Is this correct?
Does it make sense to speak of a spectrum of OOP notions where:
one starts with naive subclassing (data and logic(methods) and logic implementation(methods) are tightly coupled)
then passes through interfaces (logic is in the interface, data and logic implementation is in the class)
and ends at multimethods (logic is in the signature of the multimethod, logic implementation is scattered, data is in the class(which is only a datastructure with nice handles))?
This answer, to begin, largely derives from my primary experience developing in Common Lisp and Clojure.
Yes, multimethods do carry some penalty in cost, but they offer almost unlimited flexibility in crafting a dispatch mechanism that precisely models whatever you might be looking to accomplish through their specialization.
Protocols and interfaces, on the other hand, are also involved with some of these same matters of specialization and dispatch, but they work and are used in a very different manner. These are facilities that follow a convention in which single dispatch provides only a straightforward mapping of one specialized implementation for a given class. The power of protocols and interfaces is in their typical use to define some group of abstract capabilities that, taken together, fully specify the API for that concept. For example, a "pointer" interface might contain the 3 or 4 concepts that represent the notion of what a pointer is. So the general interface of a pointer might look like REFERENCE, DEREFERENCE, ALLOCATE, and DISPOSE. Thus the power of an interface comes from its composition of a group of related definitions that, together, express a complete abstraction; when implementing an interface in a specific situation, it is normally an all-or-nothing endeavor. Either all four of those functions are present, or whatever this thing is does not represent our definition of a pointer.
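For instance, that pointer abstraction could be sketched as an interface along these lines (the answer's original context is Common Lisp and Clojure protocols; this is just an illustrative translation, and the member signatures are assumptions):

    // All four members together express the "pointer" abstraction;
    // an implementation is expected to provide every one of them.
    public interface IPointer<T>
    {
        T Dereference();            // read the value being pointed at
        void Reference(T value);    // point at (store) a value
        void Allocate();            // acquire the underlying storage
        void Dispose();             // release the underlying storage
    }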
Hope this helped a little.
Dan Lentz

Single Responsibility Principle (SRP) and my Service Class

I have a YoutubeVideoService class which does CRUD (Create, Read, Update, and Delete) operations. In my view, Create, Read, Update, and Delete are four reasons for a class to change. Does this class violate the Single Responsibility Principle?
If it does, should we then have four classes like CreateYoutubeVideoService, ReadYoutubeVideoService, UpdateYoutubeVideoService and DeleteYoutubeVideoService? Isn't it overkill to have that many classes?
I think you're taking the Single Responsibility Principle a bit too far at the class level, without taking cohesion into consideration.
If you follow that route, you could justify having lots of classes with just one or two methods, which in turn would send the number of dependencies sky-high.
I think the spirit of SRP is: simplify as much as you can, but not more.
How long should a method be? One could say there is no reason to have more than two lines, but that is certainly overkill in some situations. The same goes for SRP: you have to decide when enough is enough. CRUD looks like a cohesive set of operations that fits perfectly in a single class, because they all operate on the same type of data.
A good way of measuring adherence to the Single Responsibility Principle is to think about how many reasons to change the class has. If you can think of more than one reason to change it, it is probably violating the SRP.
The only reason to change a CRUD class like this is a change in the underlying data structure, so it respects the SRP.
On the other hand, if that class contained any other operation (e.g. checking the video length or type before inserting it), that would violate the SRP, since such logic could change independently of the persistence layer.
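A small sketch of that boundary; YoutubeVideo and the storage details are made up for the illustration:

    public class YoutubeVideo
    {
        public string Id { get; set; }
        public string Title { get; set; }
    }

    public class YoutubeVideoService
    {
        // All four methods change for the same reason:
        // the underlying data structure or storage changes.
        public void Create(YoutubeVideo video) { /* insert into the store */ }
        public YoutubeVideo Read(string id)    { /* load from the store */ return null; }
        public void Update(YoutubeVideo video) { /* update the store */ }
        public void Delete(string id)          { /* remove from the store */ }

        // A rule like "reject videos longer than 10 minutes" would change
        // independently of the persistence layer, so it would not belong here.
    }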
The SRP is not a dogma; when following the SOLID principles, we always have to be careful not to introduce needless complexity. As Bob Martin's masterpiece puts it, speaking about when two responsibilities should be separated:
If, on the other hand, the application is not changing in ways that cause the two responsibilities to change at different times, there is no need to separate them. Indeed, separating them would smell of needless complexity.
(…) It is not wise to apply the SRP (or any other principle, for that matter) if there is no symptom.
Service classes are SRP killers. By definition they are an aggregation of operations, which is contrary to the SRP. Often a single method of the service requires some dependency that none of the other methods care about at all; with each such method the dependencies multiply, and it leads to a mess. Manager, Service, and sometimes Repository: these patterns are plain bad from a dependencies point of view. In a Commands/Queries/Requests world you would have those three commands and one query simply grouped into a domain/directory. That leads to cleaner, smaller, easier-to-read and more extendable code, and also to cleaner processes.
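A rough sketch of what that commands/queries grouping could look like; every name here is invented for the example, and YoutubeVideo has the same shape as in the earlier sketch:

    public class YoutubeVideo { public string Id { get; set; } }

    public interface IVideoStore
    {
        void Insert(YoutubeVideo video);
        YoutubeVideo Load(string id);
    }

    // Each command or query declares only the dependencies it actually needs.
    public class CreateVideoCommand
    {
        private readonly IVideoStore _store;
        public CreateVideoCommand(IVideoStore store) => _store = store;
        public void Execute(YoutubeVideo video) => _store.Insert(video);
    }

    public class GetVideoQuery
    {
        private readonly IVideoStore _store;
        public GetVideoQuery(IVideoStore store) => _store = store;
        public YoutubeVideo Execute(string id) => _store.Load(id);
    }

Compared with one wide service class, adding a new operation here never forces unrelated operations to take on its dependencies.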

How do you determine how coarse or fine-grained a 'responsibility' should be when using the single responsibility principle?

In the SRP, a 'responsibility' is usually described as 'a reason to change', so that each class (or object?) should have only one reason someone should have to go in there and change it.
But if you take this to the extreme fine-grain you could say that an object adding two numbers together is a responsibility and a possible reason to change. Therefore the object should contain no other logic, because it would produce another reason for change.
I'm curious whether anyone out there has strategies for 'scoping' the single responsibility principle that are a little less subjective.
It comes down to the context of what you are modeling. I've done some extensive writing and presenting on the SOLID principles, and I specifically address your question in my discussions of Single Responsibility.
The following first appeared in the Jan/Feb 2010 issue of Code Magazine, and is available online at "S.O.L.I.D. Software Development, One Step at a Time"
The Single Responsibility Principle says that a class should have one, and only one, reason to change.
This may seem counter-intuitive at first. Wouldn't it be easier to say that a class should only have one reason to exist? Actually, no: one reason to exist could very easily be taken to an extreme that would cause more harm than good. If you take it to that extreme and build classes that have one reason to exist, you may end up with only one method per class. This would cause a large sprawl of classes for even the most simple of processes, causing the system to be difficult to understand and difficult to change.
The reason that a class should have one reason to change, instead of one reason to exist, is the business context in which you are building the system. Even if two concepts are logically different, the business context in which they are needed may necessitate them becoming one and the same. The key point of deciding when a class should change is not based on a purely logical separation of concepts, but rather the business's perception of the concept. When the business perception and context has changed, then you have a reason to change the class. To understand what responsibilities a single class should have, you need to first understand what concept should be encapsulated by that class and where you expect the implementation details of that concept to change.
Consider an engine in a car, for example. Do you care about the inner working of the engine? Do you care that you have a specific size of piston, camshaft, fuel injector, etc? Or, do you only care that the engine operates as expected when you get in the car? The answer, of course, depends entirely on the context in which you need to use the engine.
If you are a mechanic working in an auto shop, you probably care about the inner workings of the engine. You need to know the specific model, the various part sizes, and other specifications of the engine. If you don't have this information available, you likely cannot service the engine appropriately. However, if you are an average everyday person that only needs transportation from point A to point B, you will likely not need that level of information. The notion of the individual pistons, spark plugs, pulleys, belts, etc., is almost meaningless to you. You only care that the car you are driving has an engine and that it performs correctly.
The engine example drives straight to the heart of the Single Responsibility Principle. The contexts of driving the car vs. servicing the engine provide two different notions of what should and should not be a single concept, a reason for change. In the context of servicing the engine, every individual part needs to be separate. You need to code them as single classes and ensure they are all up to their individual specifications. In the context of driving a car, though, the engine is a single concept that does not need to be broken down any further. You would likely have a single class called Engine, in this case. In either case, the context has determined what the appropriate separation of responsibilities is.
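Read literally, the two contexts from the quoted article suggest two rather different class layouts. A rough sketch, with types invented purely to illustrate the quote:

    // Driving context: the engine is a single concept; its internals are hidden.
    public class Engine
    {
        public void Start() { /* ... */ }
        public void Stop()  { /* ... */ }
    }

    // Servicing context: each part matters individually, so each part
    // becomes its own class with its own reason to change.
    public class Piston       { public double DiameterMm { get; set; } }
    public class Camshaft     { public double LiftMm { get; set; } }
    public class FuelInjector { public double FlowRateCcPerMin { get; set; } }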
I tend to think in terms of the "velocity of change" of the business requirements rather than the "reason to change".
The question is really how likely things are to change together, not whether they could change at all.
The difference is subtle, but it helps me. Let's consider the example on Wikipedia about the reporting engine:
if the likelihood that the content and the template of the report change at the same time is high, they can be one component, because they are apparently related (it could also be two);
but if the likelihood that the content changes without the template is high, then it must be two components, because they are not related (it would be dangerous to have just one).
But I know that's a personal interpretation of the SRP.
Also, a second technique that I like is: "Describe your class in one sentence." It usually helps me identify whether there is a clear responsibility or not.
I don't see performing a task like adding two numbers together as a responsibility. Responsibilities come in different shapes and sizes but they certainly should be seen as something larger than performing a single function.
To understand this better, it is probably helpful to clearly differentiate between what a class is responsible for and what a method does. A method should "do only one thing" (e.g. add two numbers, though for most purposes '+' is a method that does that already), while a class should present a single clear "responsibility" to its consumers. Its responsibility is at a much higher level than a method's.
A class like Repository has a clear and singular responsibility. It has multiple methods, like Save and Load, but a clear responsibility: providing persistence support for Person entities. A class may also coordinate and/or abstract the responsibilities of dependent classes, again presenting this as a single responsibility to other consuming classes.
The bottom line is: if applying the SRP is leading to single-method classes whose whole purpose seems to be just to wrap the functionality of that method in a class, then the SRP is not being applied correctly.
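A minimal sketch of the kind of repository described above; Person and the member names are assumptions made for the illustration:

    public class Person
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // Several methods, but one responsibility: persistence of Person entities.
    public class PersonRepository
    {
        public void Save(Person person) { /* write to the data store */ }
        public Person Load(int id)      { /* read from the data store */ return null; }
    }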
A simple rule of thumb I use is that the level or granularity of responsibility should match the level or granularity of the "entity" in question. Obviously the purpose of a method will always be more precise than that of a class, a service, or a component.
A good strategy for evaluating the level of responsibility can be to use an appropriate metaphor. If you can relate what you are doing to something that exists in the real world, it can help give you another view of the problem you're trying to solve, including being able to identify appropriate levels of abstraction and responsibility.
#Derick Bailey: nice explanation.
Some additions: it is totally acceptable that the application of the SRP is context-based.
The question still remains: are there any objective ways to decide whether a given class violates the SRP?
Some design contexts are quite obvious (like the car example by Derick), but otherwise the context in which a class's behaviour has to be defined often remains fuzzy.
For such cases, it might well be helpful to analyse the fuzzy class behaviour by splitting its responsibilities into different classes and then measuring the impact of the new behavioural and structural relations that emanate from the split.
As soon as the split is done, the reasons to keep the split responsibilities separate, or to merge them back into a single responsibility, become obvious at once.
I have applied this approach and it has led to good results for me.
But my search for 'objective ways of defining a class responsibility' still continues.
I respectfully disagree when Chris Nicola above says that "a class should present a single clear 'responsibility' to its consumers".
I think the SRP is about having a good design inside the class, not about the class's consumers.
To me it's not very clear what a responsibility is, and the proof is the number of questions that this concept raises.
"single reason to change"
or
"if the description contains the word
"and" then it needs to be split"
leads to the question: where is the limit? In the end, doesn't any class with two public methods have two reasons to change?
For me, true SRP leads to the Facade pattern, where you have a class that simply delegates the calls to other classes.
For example:
    // Before: one class, two reasons to change.
    class Modem
    {
        public void Send()    { /* ... */ }
        public void Receive() { /* ... */ }
    }

    // Refactors to ==>

    class ModemSender   { public void Send()    { /* low-level send logic */ } }
    class ModemReceiver { public void Receive() { /* low-level receive logic */ } }

    // After: Modem is now a facade that simply delegates.
    class Modem
    {
        private readonly ModemSender sender = new ModemSender();
        private readonly ModemReceiver receiver = new ModemReceiver();

        public void Send()    => sender.Send();
        public void Receive() => receiver.Receive();
    }
Opinions are welcome.

How do you define a Single Responsibility?

I know about "class having a single reason to change". Now, what is that exactly? Are there some smells/signs that could tell that class does not have a single responsibility? Or could the real answer hide in YAGNI and only refactor to a single responsibility the first time your class changes?
The Single Responsibility Principle
There are many obvious cases, e.g. CoffeeAndSoupFactory. Coffee and soup in the same appliance can lead to quite distasteful results. In this example, the appliance might be broken into a HotWaterGenerator and some kind of Stirrer. Then a new CoffeeFactory and SoupFactory can be built from those components and any accidental mixing can be avoided.
Among the more subtle cases, the tension between data access objects (DAOs) and data transfer objects (DTOs) is very common. DAOs talk to the database, DTOs are serializable for transfer between processes and machines. Usually DAOs need a reference to your database framework, therefore they are unusable on your rich clients which neither have the database drivers installed nor have the necessary privileges to access the DB.
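A rough picture of that DAO/DTO split, assuming a simple Person entity and ADO.NET-style connections (all the type and member names here are illustrative):

    using System;
    using System.Data;

    // DTO: plain, serializable data, safe to hand to a rich client
    // that has no database drivers or privileges.
    [Serializable]
    public class PersonDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // DAO: depends on the database framework, so it lives on the server side.
    public class PersonDao
    {
        private readonly IDbConnection _connection;

        public PersonDao(IDbConnection connection) => _connection = connection;

        public PersonDto FindById(int id)
        {
            // ...query via _connection and map the row to a PersonDto...
            return null;
        }
    }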
Code Smells
The methods in a class start to be grouped by areas of functionality ("these are the Coffee methods and these are the Soup methods").
Implementing many interfaces.
Write a brief, but accurate description of what the class does.
If the description contains the word "and" then it needs to be split.
Well, this principle is to be taken with a grain of salt... to avoid class explosion.
A single responsibility does not translate to single-method classes. It means a single reason for existence... a service that the object provides for its clients.
A nice way to stay on the road: use the object-as-person metaphor. If the object were a person, who would I ask to do this? Assign that responsibility to the corresponding class. However, you wouldn't ask the same person to manage your files, compute salaries, issue paychecks, and verify financial records... Why would you want a single object to do all of these? (It's okay for a class to take on multiple responsibilities as long as they are all related and coherent.)
If you use CRC cards, they are a nice subtle guideline. If you're having trouble getting all the responsibilities of an object onto one CRC card, it's probably doing too much... a maximum of 7 would serve as a good marker.
Another code smell from the Refactoring book would be HUGE classes. Shotgun surgery would be another: making a change to one area in a class causes bugs in unrelated areas of the same class...
Finding that you are making changes to the same class for unrelated bug-fixes again and again is another indication that the class is doing too much.
A simple and practical way to check single responsibility (not only of classes but also of methods) is the choice of name. When you design a class, if you can easily find a name that specifies exactly what it does, you're on the right track.
Difficulty in choosing a name is nearly always a symptom of bad design.
The methods in your class should be cohesive... they should work together and make use of the same data structures internally. If you find you have too many methods that don't seem entirely related, or that seem to operate on different things, then quite likely you don't have a good single responsibility.
Often it's hard to initially find responsibilities, and sometimes you need to use the class in several different contexts and then refactor the class into two classes as you start to see the distinctions. Sometimes you find that it's because you are mixing an abstract and concrete concept together. They tend to be harder to see, and, again, use in different contexts will help clarify.
The obvious sign is when your class ends up looking like a Big Ball of Mud, which is really the opposite of SRP (single responsibility principle).
Basically, all the object's services should be focused on carrying out a single responsibility, meaning every time your class changes and adds a service which does not respect that, you know you're "deviating" from the "right" path ;)
The cause is usually due to some quick fixes hastily added to the class to repair some defects. So the reason why you are changing the class is usually the best criteria to detect if you are about to break the SRP.
Martin's Agile Principles, Patterns, and Practices in C# helped me a lot to grasp SRP. He defines SRP as:
A class should have only one reason to change.
So what is driving change?
Martin's answer is:
[...] each responsibility is an axis of change. (p. 116)
and further:
In the context of the SRP, we define a responsibility to be a reason for change. If you can think of more than one motive for changing a class, that class has more than one responsibility (p. 117)
In fact, the SRP is about encapsulating change. If a change happens, it should have only a local impact.
Where is YAGNI?
YAGNI can be nicely combined with SRP: when you apply YAGNI, you wait until some change actually happens. When it does, you should be able to clearly see the responsibilities that are inferred from the reason(s) for the change.
This also means that responsibilities can evolve with each new requirement and change. Thinking further, SRP and YAGNI will give you the means to think in terms of flexible designs and architectures.
Perhaps a little more technical than other smells:
If you find you need several "friend" classes or functions, that's usually a good sign of bad SRP, because the required functionality is not actually exposed publicly by your class.
If you end up with an excessively "deep" hierarchy (a long chain of derived classes until you get to the leaf classes) or a "broad" hierarchy (many, many classes derived shallowly from a single parent class), it's usually a sign that the parent class does either too much or too little. Doing nothing is the limit of that, and yes, I have seen it in practice, with an "empty" parent class definition existing just to group together a bunch of unrelated classes in a single hierarchy.
I also find that refactoring to single responsibility is hard. By the time you finally get around to it, the different responsibilities of the class will have become entwined in the client code making it hard to factor one thing out without breaking the other thing. I'd rather err on the side of "too little" than "too much" myself.
Here are some things that help me figure out if my class is violating SRP:
Fill out the XML doc comments on a class. If you use words like if, and, but, except, when, etc., your class is probably doing too much.
If your class is a domain service, it should have a verb in the name. Many times you have classes like "OrderService", which should probably be broken up into "GetOrderService", "SaveOrderService", "SubmitOrderService", etc.
If you end up with MethodA that uses MemberA and MethodB that uses MemberB, and it is not part of some concurrency or versioning scheme, you might be violating SRP (see the sketch after this list).
If you notice that you have a class that just delegates calls to a lot of other classes, you might be stuck in proxy class hell. This is especially true if you end up instantiating the proxy class everywhere when you could just use the specific classes directly. I have seen a lot of this. Think ProgramNameBL and ProgramNameDAL classes as a substitute for using a Repository pattern.
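That MethodA/MemberA smell, in code; everything here is a made-up illustration:

    // Smell: two disjoint clusters of state and behaviour in one class.
    public class ReportManager
    {
        private readonly string _templatePath;   // used only by Render
        private readonly string _smtpServer;     // used only by Email

        public ReportManager(string templatePath, string smtpServer)
        {
            _templatePath = templatePath;
            _smtpServer = smtpServer;
        }

        public string Render(object data) { /* uses only _templatePath */ return "rendered with " + _templatePath; }
        public void Email(string report)  { /* uses only _smtpServer */ }
    }

    // Likely better: a ReportRenderer and a ReportMailer, each owning its own state.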
I've also been trying to get my head around the SOLID principles of OOD, specifically the single responsibility principle, aka SRP (as a side note the podcast with Jeff Atwood, Joel Spolsky and "Uncle Bob" is worth a listen). The big question for me is: What problems is SOLID trying to address?
OOP is all about modeling. The main purpose of modeling is to present a problem in a way that allows us to understand it and solve it. Modeling forces us to focus on the important details. At the same time we can use encapsulation to hide the "unimportant" details so that we only have to deal with them when absolutely necessary.
I guess you should ask yourself: What problem is your class trying to solve? Has the important information you need to solve this problem risen to the surface? Are the unimportant details tucked away so that you only have to think about them when absolutely necessary?
Thinking about these things results in programs that are easier to understand, maintain and extend. I think this is at the heart of OOD and the SOLID principles, including SRP.
Another rule of thumb I'd like to throw in is the following:
If you feel the need to write some sort of cartesian product of cases in your tests, or if you want to mock certain private methods of the class, Single Responsibility is probably violated.
I recently had this in the following way:
I had a certain abstract syntax tree of a coroutine which will later be generated into C. For now, think of the nodes as Sequence, Iteration and Action. Sequence chains two coroutines, Iteration repeats a coroutine until a user-defined condition is true, and Action performs a certain user-defined action. Furthermore, it is possible to annotate Actions and Iterations with code blocks, which define the actions and conditions to evaluate as the coroutine walks ahead.
It was necessary to apply a certain transformation to all of these code blocks (for those interested: I needed to replace the conceptual user variables with actual implementation variables in order to prevent variable clashes; those who know Lisp macros can think of gensym in action :) ). Thus, the simplest thing that would work was a visitor which knows the operation internally and just calls it on the annotated code block of the Action and Iteration on visit, traversing all the syntax tree nodes. However, in this case I would have had to duplicate the assertion "transformation is applied" in my test code for the visitAction method and the visitIteration method. In other words, I had to test the product of the responsibilities traversal (== {traverse iteration, traverse action, traverse sequence}) x transformation (well, code block transformed, which blew up into iteration transformed and action transformed). Thus, I was tempted to use PowerMock to remove the transformation method and replace it with some 'return "I was transformed!";' stub.
However, following the rule of thumb, I split the class into a TreeModifier class which contains a NodeModifier instance, which provides methods modifyIteration, modifySequence, modifyCodeblock and so on. Thus, I could easily test the responsibility of traversing, calling the NodeModifier and reconstructing the tree, and test the actual modification of the code blocks separately, removing the need for the product tests, because the responsibilities were now separated (into traversing-and-reconstructing and the concrete modification).
It also is interesting to notice that later on, I could heavily reuse the TreeModifier in various other transformations. :)
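A rough shape of that split; the class names follow the answer's description, but the member signatures and bodies are invented for the sketch:

    public class Node { /* Sequence, Iteration, Action, ... */ }

    // Knows only how to rewrite one annotated code block; testable on its own.
    public class NodeModifier
    {
        public virtual string ModifyCodeblock(string codeBlock)
        {
            // e.g. replace conceptual user variables with implementation variables
            return codeBlock;
        }
    }

    // Knows only how to walk the tree, call the modifier, and rebuild the tree;
    // in its tests the NodeModifier can be replaced by a trivial stub.
    public class TreeModifier
    {
        private readonly NodeModifier _nodeModifier;

        public TreeModifier(NodeModifier nodeModifier) => _nodeModifier = nodeModifier;

        public Node Apply(Node root)
        {
            // ...traverse Sequence/Iteration/Action nodes, calling
            // _nodeModifier.ModifyCodeblock on each annotated block...
            return root;
        }
    }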
If you find it hard to extend the functionality of the class without being afraid of breaking something else, or you cannot use the class without setting tons of options that modify its behavior, it smells like your class is doing too much.
Once I was working with a legacy class which had a method "ZipAndClean", which was obviously zipping and cleaning the specified folder...