Is there any difference between UML Use Case "extends" and inheritance? - oop

Or the "extends" is a use case inheriting another?
--update
Just a clarification, I've read books and made a lot of diagrams. But I just can't see any difference between extends on UML and on inherance. As Bill said, UML extends indicates optional behavior, but in inherance either you get new behavior you can or not use. So, what's the difference?

I think in UML the difference is that "extends" is based on extension points: there has to be a named point in the base use case where the extension is applied (though the semantics are not very precise about this). Inheritance for use cases means changing some behaviour without specifying exactly where.
Another important point concerns the Liskov substitution principle. A use case that inherits from another should be usable anywhere the parent can be used. This does not hold for the way "extends" is understood: when one use case is extended by another, it may be modified by it, but it still contains the main scenario path, which may be forked and rejoined by the extending use case. This is, I think, the difference between structural and behavioural inheritance. Inheritance is about achieving the same goal and satisfying the same interests (the same responsibility and behaviour constraints), whereas extension is about modifying the structure of the scenario path, possibly triggered by additional interests such as error checking.
Inheritance is actually not a very good mechanism for use cases; combined with actor inheritance (which makes more sense), it can lead to unwanted paradoxes. Following the advice of Alistair Cockburn (Writing Effective Use Cases), inheritance should be used only to express variations in technical details or data formats for a particular use case.

This is very similar to inheritance. See a detailed description of this concept here.
Enjoy!

In the tool I use at my company we have modeling restrictions.
Inheritance:
From Actor to Actor, from UseCase to UseCase, from System to System. You can't draw any other inheritance in your diagram; the tool shows a forbidden sign.
Extend can only be drawn between two use cases, not between other elements.
I don't really see the difference between inheritance and extend between two use cases.
I will keep on reading the next posts, because after reading all the answers I still don't understand :-)

Extends is used to add additional, optional behavior to the use case being extended, without changing any behavior in the base use case.
An inheriting use case, by contrast, replaces one or more behaviors of the inherited use case. In other words, it changes the behavior of the base use case rather than simply adding new functionality. Note that use case inheritance is not quite the same as class inheritance.
See this article.
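As a rough code analogy only (UML use case semantics don't map one-to-one onto code, and these class names are made up for illustration), the difference might be sketched like this:

class Checkout {  // base use case analogue: the main scenario path
    void run() {
        scanItems();
        pay(); // think of this as the named extension point
    }
    void scanItems() { /* ... */ }
    void pay() { /* ... */ }
}

// "extend"-like: optional extra behavior hooks in at the extension point,
// while the base scenario still runs unchanged.
class CouponCheckout extends Checkout {
    @Override void pay() {
        applyCoupon(); // additional, optional behavior
        super.pay();   // base behavior preserved
    }
    void applyCoupon() { /* ... */ }
}

// inheritance-like: base behavior is replaced, not just added to.
class ExpressCheckout extends Checkout {
    @Override void pay() {
        payWithStoredCard(); // the base pay() no longer runs
    }
    void payWithStoredCard() { /* ... */ }
}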

Related

Is the builder pattern suitable for the same representation/output but a slightly different process?

My API has different parameters. Depending on those, I have two slightly different processes, but the output is almost the same. Should I use the builder pattern?
For example, suppose there are different query parameters A and B:
// For case A:
if (A) {
    funX(A);
    funY();
    funZ();
}
// For case B:
if (B) {
    funW(B);
    funX(B);
    funY();
    funZ();
}
Note: the process here is a little complex, involving third-party API calls, filtering, sorting, complex calculations, and so on.
The simple fact of having slightly different processes does not justify or call for a builder pattern.
Unfortunately, your question doesn't show anything about your object structure, so it's difficult to advise how to implement it. Typically, here are some options:
polymorphism: different classes implement the behavior promised in their base class a little differently. Depending on your parameters, you would instantiate the right class.
template method: a particular use of polymorphism, where the core structure of the process is implemented in the base class and makes use of auxiliary operations that are implemented slightly differently in subclasses (a sketch follows below).
composition: parts of the behavior are implemented in different classes/objects, and the process is assembled by composing the different parts. It is more flexible than inheritance.
strategy, decorator, command, chain of responsibility: design patterns that use composition (sometimes combined with inheritance) and could allow you to implement the behavior slightly (or radically) differently. Plenty of other approaches use composition to achieve your goals.
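To make the template method option concrete, here is a minimal sketch for the question's A/B case, reusing the funW/funX/funY/funZ names from the snippet above (everything else is a made-up illustration, not a prescription):

abstract class QueryProcess {
    // Template method: the skeleton is fixed; only the preparation step varies.
    final void run(String param) {
        prepare(param); // differs between case A and case B
        funY();         // shared step, e.g. filtering/sorting
        funZ();         // shared step, e.g. complex calculation
    }
    abstract void prepare(String param);
    void funY() { /* ... */ }
    void funZ() { /* ... */ }
}

class CaseA extends QueryProcess {
    @Override void prepare(String a) { funX(a); }
    void funX(String a) { /* e.g. third-party API call */ }
}

class CaseB extends QueryProcess {
    @Override void prepare(String b) { funW(b); funX(b); }
    void funW(String b) { /* extra step for B */ }
    void funX(String b) { /* e.g. third-party API call */ }
}

The caller then instantiates CaseA or CaseB depending on the query parameters, instead of branching inside one method.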
Anyway, try not to use design patterns as a catalogue of basic ingredients to solve your problem. The right way is the opposite: think of how you want to solve the problem, and then use design patterns if you see that they address some of your design intentions.
To come back to the builder pattern: it is meant to construct complex objects, applying the same construction process to different classes that follow the same logic for their construction. I don't see this kind of challenge in your code example.

OOP Reuse without Inheritance: How "real-world" practical is this?

This article describes an approach to OOP I find interesting:
What if objects exist as encapsulations, and they communicate via messages? What if code re-use has nothing to do with inheritance, but uses composition, delegation, even old-fashioned helper objects or any technique the programmer deems fit? The ontology does not go away, but it is decoupled from the implementation.
The idea of reuse without inheritance or dependence on a class hierarchy is what I found most astounding, but how feasible is it?
Examples were given, but I can't quite see how I would change my current code to adopt this approach.
So how feasible is this approach? Or is there really not a need for changing code but rather a scenario-based approach where "use only when needed or optimal"?
EDIT: oops, I forgot the link: here it is link
I'm sure you've heard of "always prefer composition over inheritance".
The basic idea of this premise is that multiple objects with different functionalities are put together to create one fully-featured object. This should be preferred over inheriting functionality from disparate objects that have nothing to do with each other.
The main argument regarding this is contained in the definition of the Liskov Substitution Principle and playfully illustrated by this poster:
If you had a ToyDuck object, which object should you inherit from, from a purely inheritance standpoint? Should you inherit from Duck? No -- most likely you should inherit from Toy.
The bottom line is that you should be using the correct method of abstraction, whether inheritance or composition, for your code.
For your current objects, consider whether there are objects that ought to be removed from the inheritance tree and included merely as a property that you can call and invoke.
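A minimal sketch of the ToyDuck idea (all classes hypothetical): inherit from Toy, the true is-a relationship, and compose in the duck-like behavior:

class Toy {
    void play() { /* generic toy behavior */ }
}

class QuackSound {
    void emit() { System.out.println("Quack!"); }
}

class ToyDuck extends Toy {                             // is-a: a toy duck is a toy
    private final QuackSound sound = new QuackSound();  // has-a: duck-ness composed in

    @Override void play() { sound.emit(); }
}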
Inheritance is not well suited for code reuse. Inheriting for code reuse usually leads to:
Classes with inherited methods that must not be called on them (violating the Liskov substitution principle), which confuse programmers and lead to bugs.
Deep hierarchies where it takes an inordinate amount of time to find the method you need, since it could be declared in any of a dozen or more classes.
Generally the inheritance tree should not get more than two or three levels deep and usually you should only inherit interfaces and abstract base classes.
There is, however, no point in rewriting existing code just for the sake of it. When you do need to modify a class, try to switch to composition where possible. That will usually allow you to modify the code in smaller pieces, since there will be less coupling between the classes.
I just skimmed the text, but it seems to say what OO design was always about: inheritance is not meant as a code reuse tool, and loose coupling is good. This has been written dozens of times before; see the linked references at the bottom of the article. This does not mean you should skip inheritance entirely; you just have to use it consciously and only when it makes sense. The article states this as well.
As for the duck typing, I find the examples and thoughts questionable. Like this one:
function good(foo) {
    if (!foo.baz || !foo.quux) {
        throw new TypeError("We need foo to have baz and quux methods.");
    }
    return foo.baz(foo.quux(10));
}
What’s the point in adding three new lines just to report an error that would be reported by the runtime automatically?
Inheritance is fundamental
no inheritance, no OOP.
prototyping and delegation can be used to effect inheritance (like in JavaScript), which is fine, and is functionally equivalent to inheritance
objects, messages, and composition but no inheritance is object-based, not object-oriented. VB5, not Java. Yes it can be done; plan on writing a lot of boilerplate code to expose interfaces and forward operations.
Those who insist inheritance is unnecessary, or that it is 'bad', are creating strawmen: it is easy to imagine scenarios where inheritance is used badly, but this is not a reflection on the tool, rather on the tool-user.

How do you determine how coarse or fine-grained a 'responsibility' should be when using the single responsibility principle?

In the SRP, a 'responsibility' is usually described as 'a reason to change', so that each class (or object?) should have only one reason someone should have to go in there and change it.
But if you take this to the extreme of fine granularity, you could say that an object adding two numbers together is a responsibility and a possible reason to change. Therefore the object should contain no other logic, because that would give it another reason to change.
I'm curious whether anyone out there has strategies for 'scoping' the single responsibility principle that are a little less subjective.
It comes down to the context of what you are modeling. I've done some extensive writing and presenting on the SOLID principles, and I specifically address your question in my discussions of Single Responsibility.
The following first appeared in the Jan/Feb 2010 issue of Code Magazine, and is available online at "S.O.L.I.D. Software Development, One Step at a Time"
The Single Responsibility Principle says that a class should have one, and only one, reason to change.

This may seem counter-intuitive at first. Wouldn't it be easier to say that a class should have only one reason to exist? Actually, no: one reason to exist could very easily be taken to an extreme that would cause more harm than good. If you take it to that extreme and build classes that have one reason to exist, you may end up with only one method per class. This would cause a large sprawl of classes for even the most simple of processes, causing the system to be difficult to understand and difficult to change.

The reason that a class should have one reason to change, instead of one reason to exist, is the business context in which you are building the system. Even if two concepts are logically different, the business context in which they are needed may necessitate them becoming one and the same. The key point of deciding when a class should change is not based on a purely logical separation of concepts, but rather the business's perception of the concept. When the business perception and context has changed, then you have a reason to change the class. To understand what responsibilities a single class should have, you need to first understand what concept should be encapsulated by that class and where you expect the implementation details of that concept to change.

Consider an engine in a car, for example. Do you care about the inner workings of the engine? Do you care that you have a specific size of piston, camshaft, fuel injector, etc.? Or do you only care that the engine operates as expected when you get in the car? The answer, of course, depends entirely on the context in which you need to use the engine.

If you are a mechanic working in an auto shop, you probably care about the inner workings of the engine. You need to know the specific model, the various part sizes, and other specifications of the engine. If you don't have this information available, you likely cannot service the engine appropriately. However, if you are an average everyday person who only needs transportation from point A to point B, you will likely not need that level of information. The notion of the individual pistons, spark plugs, pulleys, belts, etc., is almost meaningless to you. You only care that the car you are driving has an engine and that it performs correctly.

The engine example drives straight to the heart of the Single Responsibility Principle. The contexts of driving the car vs. servicing the engine provide two different notions of what should and should not be a single concept, and thus a reason for change. In the context of servicing the engine, every individual part needs to be separate. You need to code them as single classes and ensure they are all up to their individual specifications. In the context of driving a car, though, the engine is a single concept that does not need to be broken down any further. You would likely have a single class called Engine, in this case. In either case, the context has determined what the appropriate separation of responsibilities is.
I tend to think in terms of "velocity of change" of the business requirements rather than "reason to change".
The question is indeed how likely things are to change together, not whether they could change at all.
The difference is subtle, but it helps me. Let's consider the example on Wikipedia about the reporting engine:
if the likelihood that the content and the template of the report change at the same time is high, they can be one component, because they are apparently related (they could also be two);
but if the likelihood that the content changes without the template is high, then they must be two components, because they are not related (it would be dangerous to have one).
But I know that's a personal interpretation of the SRP.
Also, a second technique that I like is: "Describe your class in one sentence". It usually helps me to identify if there is a clear responsibility or not.
I don't see performing a task like adding two numbers together as a responsibility. Responsibilities come in different shapes and sizes but they certainly should be seen as something larger than performing a single function.
To understand this better, it is probably helpful to clearly differentiate between what a class is responsible for and what a method does. A method should "do only one thing" (e.g. add two numbers, though for most purposes '+' is a method that does that already), while a class should present a single clear "responsibility" to its consumers. Its responsibility is at a much higher level than a method's.
A class like Repository has a clear and singular responsibility. It has multiple methods, like Save and Load, but a clear responsibility: to provide persistence support for Person entities. A class may also coordinate and/or abstract the responsibilities of dependent classes, again presenting this as a single responsibility to other consuming classes.
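A minimal sketch of that idea (Person and the method names are illustrative): several methods, one responsibility, namely persistence of Person entities:

class Person {
    long id;
    String name;
}

// Save and Load are different methods, but they serve a single
// reason to change: how Person entities are persisted.
interface PersonRepository {
    void save(Person person);
    Person load(long id);
}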
The bottom line is that if applying SRP leads to single-method classes whose whole purpose seems to be just to wrap the functionality of that method in a class, then SRP is not being applied correctly.
A simple rule of thumb I use is that the level or granularity of responsibility should match the level or granularity of the "entity" in question. Obviously the purpose of a method will always be more precise than that of a class, or service, or component.
A good strategy for evaluating the level of responsibility can be to use an appropriate metaphor. If you can relate what you are doing to something that exists in the real world, it can help give you another view of the problem you're trying to solve, including being able to identify appropriate levels of abstraction and responsibility.
@Derick Bailey: nice explanation.
Some additions: it is totally acceptable that the application of SRP is contextually based.
The question still remains: are there any objective ways to determine whether a given class violates SRP?
Some design contexts are quite obvious (like the car example by Derick), but in many other contexts the behaviour a class should have remains fuzzy.
For such cases, it may well help to analyse the fuzzy class behaviour by splitting its responsibilities into different classes and then measuring the impact of the new behavioural and structural relations that emanate from the split.
As soon as the split is done, the reasons either to keep the responsibilities split or to merge them back into a single responsibility become obvious at once.
I have applied this approach, and it has led to good results for me.
But my search for 'objective ways of defining a class responsibility' still continues.
I respectfully disagree with Chris Nicola above, who says that "a class should present a single clear 'responsibility' to its consumers".
I think SRP is about having a good design inside the class, not about the class's consumers.
To me it's not very clear what a responsibility is, and the proof is the number of questions that this concept raises.
"single reason to change"
or
"if the description contains the word "and" then it needs to be split"
leads to the question: where is the limit? In the end, any class with 2 public methods has 2 reasons to change, doesn't it?
For me, the true SRP leads to the Facade pattern, where you have a class that simply delegates the calls to other classes.
For example:
class Modem
    send()
    receive()

Refactors to ==>

class ModemSender
class ModemReceiver
+
class Modem
    send() -> ModemSender.send()
    receive() -> ModemReceiver.receive()
Opinions are welcome.

Inheritance vs. Aggregation [closed]

There are two schools of thought on how to best extend, enhance, and reuse code in an object-oriented system:
Inheritance: extend the functionality of a class by creating a subclass. Override superclass members in the subclasses to provide new functionality. Make methods abstract/virtual to force subclasses to "fill-in-the-blanks" when the superclass wants a particular interface but is agnostic about its implementation.
Aggregation: create new functionality by taking other classes and combining them into a new class. Attach a common interface to this new class for interoperability with other code.
What are the benefits, costs, and consequences of each? Are there other alternatives?
I see this debate come up on a regular basis, but I don't think it's been asked on Stack Overflow yet (though there is some related discussion). There's also a surprising lack of good Google results for it.
It's not a matter of which is the best, but of when to use what.
In the 'normal' cases a simple question is enough to find out whether we need inheritance or aggregation.
If the new class is more or less the same as the original class, use inheritance. The new class is now a subclass of the original class.
If the new class must have the original class, use aggregation. The new class now has the original class as a member.
However, there is a big gray area, so we need several other tricks.
If we have used inheritance (or plan to use it) but we only use part of the interface, or we are forced to override a lot of functionality to keep the correlation logical, then we have a big nasty smell that indicates we should have used aggregation.
If we have used aggregation (or plan to use it) but we find out we need to copy almost all of the functionality, then we have a smell that points in the direction of inheritance.
To cut it short: we should use aggregation if part of the interface is not used or has to be changed to avoid an illogical situation. We only need inheritance if we need almost all of the functionality without major changes. And when in doubt, use aggregation.
Another possibility, for the case where we have a class that needs part of the functionality of the original class, is to split the original class into a root class and a subclass, and let the new class inherit from the root class. But take care not to create an illogical separation.
Let's add an example. We have a class 'Dog' with methods 'Eat', 'Walk', 'Bark', 'Play'.
class Dog
    Eat;
    Walk;
    Bark;
    Play;
end;
We now need a class 'Cat' that needs 'Eat', 'Walk', 'Purr', and 'Play'. So first we try to extend it from Dog:
class Cat is Dog
    Purr;
end;
Looks alright, but wait: this cat can Bark (cat lovers will kill me for that), and a barking cat violates the principles of the universe. So we need to override the Bark method so that it does nothing.
class Cat is Dog
    Purr;
    Bark = null;
end;
Ok, this works, but it smells bad. So let's try aggregation:
class Cat
    has Dog;
    Eat = Dog.Eat;
    Walk = Dog.Walk;
    Play = Dog.Play;
    Purr;
end;
Ok, this is nice. This cat does not bark anymore, not even silently. But it still has an internal dog that wants out. So let's try solution number three:
class Pet
    Eat;
    Walk;
    Play;
end;
class Dog is Pet
    Bark;
end;
class Cat is Pet
    Purr;
end;
This is much cleaner. No internal dogs. And cats and dogs are at the same level. We can even introduce other pets to extend the model. Unless it is a fish, or something else that does not walk; in that case we need to refactor again. But that is something for another time.
At the beginning of the GoF book, they state:
Favor object composition over class inheritance.
This is further discussed here
The difference is typically expressed as the difference between "is a" and "has a". Inheritance, the "is a" relationship, is summed up nicely in the Liskov Substitution Principle. Aggregation, the "has a" relationship, is just that - it shows that the aggregating object has one of the aggregated objects.
Further distinctions exist as well: private inheritance in C++ indicates an "is implemented in terms of" relationship, which can also be modeled by aggregating (non-exposed) member objects.
Here's my most common argument:
In any object-oriented system, there are two parts to any class:
Its interface: the "public face" of the object. This is the set of capabilities it announces to the rest of the world. In a lot of languages, this set is well defined in the "class". Usually these are the method signatures of the object, though it varies a bit by language.
Its implementation: the "behind the scenes" work that the object does to satisfy its interface and provide functionality. This is typically the code and member data of the object.
One of the fundamental principles of OOP is that the implementation is encapsulated (ie:hidden) within the class; the only thing that outsiders should see is the interface.
When a subclass inherits from a superclass, it typically inherits both the implementation and the interface. This, in turn, means that you're forced to accept both as constraints on your class.
With aggregation, you get to choose either implementation or interface, or both -- but you're not forced into either. The functionality of an object is left up to the object itself. It can defer to other objects as it likes, but it's ultimately responsible for itself. In my experience, this leads to a more flexible system: one that's easier to modify.
So, whenever I'm developing object-oriented software, I almost always prefer aggregation over inheritance.
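As a small sketch of that choice (names are hypothetical): with aggregation you can take just the interface and reuse an existing implementation by delegation, rather than inheriting interface and implementation together:

interface Logger {
    void log(String message);
}

class ConsoleLogger implements Logger {
    @Override public void log(String message) { System.out.println(message); }
}

// Takes the Logger interface, but reuses ConsoleLogger by delegation
// instead of by inheriting from it.
class TimestampedLogger implements Logger {
    private final Logger inner;

    TimestampedLogger(Logger inner) { this.inner = inner; }

    @Override public void log(String message) {
        inner.log(System.currentTimeMillis() + " " + message);
    }
}

new TimestampedLogger(new ConsoleLogger()) behaves like any other Logger without being constrained by ConsoleLogger's class hierarchy.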
I gave an answer to "Is a" vs "Has a" : which one is better?.
Basically I agree with other folks: use inheritance only if your derived class truly is the type you're extending, not merely because it contains the same data. Remember that inheritance means the subclass gains the methods as well as the data.
Does it make sense for your derived class to have all the methods of the superclass? Or do you just quietly promise yourself that those methods should be ignored in the derived class? Or do you find yourself overriding methods from the superclass, making them no-ops so no one calls them inadvertently? Or giving hints to your API doc generation tool to omit the method from the doc?
Those are strong clues that aggregation is the better choice in that case.
I see a lot of "is-a vs. has-a; they're conceptually different" responses on this and the related questions.
The one thing I've found in my experience is that trying to determine whether a relationship is "is-a" or "has-a" is bound to fail. Even if you can correctly make that determination for the objects now, changing requirements mean that you'll probably be wrong at some point in the future.
Another thing I've found is that it's very hard to convert from inheritance to aggregation once there's a lot of code written around an inheritance hierarchy. Just switching from a superclass to an interface means changing nearly every subclass in the system.
And, as I mentioned elsewhere in this post, inheritance tends to be less flexible than aggregation.
So, you have a perfect storm of arguments against inheritance whenever you have to choose one or the other:
Your choice will likely be the wrong one at some point
Changing that choice is difficult once you've made it.
Inheritance tends to be a worse choice as it's more constraining.
Thus, I tend to choose aggregation -- even when there appears to be a strong is-a relationship.
The question is normally phrased as Composition vs. Inheritance, and it has been asked here before.
I wanted to make this a comment on the original question, but 300 characters bites [;<).
I think we need to be careful. First, there are more flavors than the two rather specific examples made in the question.
Also, I suggest that it is valuable not to confuse the objective with the instrument. One wants to make sure that the chosen technique or methodology supports achievement of the primary objective, but I don't think an out-of-context which-technique-is-best discussion is very useful. It does help to know the pitfalls of the different approaches along with their clear sweet spots.
For example, what are you out to accomplish, what do you have available to start with, and what are the constraints?
Are you creating a component framework, even a special-purpose one? Are interfaces separable from implementations in the programming system, or is that accomplished by a practice using a different sort of technology? Can you separate the inheritance structure of interfaces (if any) from the inheritance structure of the classes that implement them? Is it important to hide the class structure of an implementation from the code that relies on the interfaces the implementation delivers? Are there multiple implementations to be usable at the same time, or is the variation more over time as a consequence of maintenance and enhancement? All of this and more needs to be considered before you fixate on a tool or a methodology.
Finally, is it that important to lock distinctions in the abstraction and how you think of it (as in is-a versus has-a) to different features of the OO technology? Perhaps so, if it keeps the conceptual structure consistent and manageable for you and others. But it is wise not to be enslaved by that and the contortions you might end up making. Maybe it is best to stand back a level and not be so rigid (but leave good narration so others can tell what's up). [I look for what makes a particular portion of a program explainable, but some times I go for elegance when there is a bigger win. Not always the best idea.]
I'm an interface purist, and I am drawn to the kinds of problems and approaches where interface purism is appropriate, whether building a Java framework or organizing some COM implementations. That doesn't make it appropriate for everything, not even close to everything, even though I swear by it. (I have a couple of projects that appear to provide serious counter-examples against interface purism, so it will be interesting to see how I manage to cope.)
I'll cover the where-these-might-apply part. Here's an example of both, in a game scenario. Suppose, there's a game which has different types of soldiers. Each soldier can have a knapsack which can hold different things.
Inheritance here?
There's a marine, a green beret and a sniper. These are types of soldiers. So there's a base class Soldier with Marine, GreenBeret and Sniper as derived classes.
Aggregation here?
The knapsack can contain grenades, guns (of different types), a knife, a medikit, etc. A soldier can be equipped with any of these at any given point in time, and he can also have a bulletproof vest which acts as armor when attacked, reducing his injury by a certain percentage. The soldier class contains an object of the bulletproof vest class and a knapsack class which holds references to these items.
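A rough sketch of that scenario (all names hypothetical): inheritance for the soldier types, aggregation for the equipment:

import java.util.ArrayList;
import java.util.List;

interface Item { }                   // grenades, guns, knife, medikit, ...
class Grenade implements Item { }
class Medikit implements Item { }

class Knapsack {
    private final List<Item> items = new ArrayList<>();
    void add(Item item) { items.add(item); }
}

class BulletproofVest { /* reduces injury by some percentage */ }

abstract class Soldier {
    private final Knapsack knapsack = new Knapsack(); // has-a
    private BulletproofVest vest;                     // has-a, optional

    Knapsack getKnapsack() { return knapsack; }
    void equip(BulletproofVest v) { this.vest = v; }
}

class Marine extends Soldier { }     // is-a
class GreenBeret extends Soldier { } // is-a
class Sniper extends Soldier { }     // is-a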
I think it's not an either/or debate. It's just that:
is-a (inheritance) relationships occur less often than has-a (composition) relationships.
Inheritance is harder to get right, even when it's appropriate to use it, so due diligence has to be taken because it can break encapsulation, encourage tight coupling by exposing implementation and so forth.
Both have their place, but inheritance is riskier.
Although of course it wouldn't make sense to have a class Shape that merely 'has' Point and Square classes; here inheritance is due, since a Point and a Square each 'is a' Shape.
People tend to think about inheritance first when trying to design something extensible; that is what's wrong.
"Favour" applies when both candidates qualify: A and B are options and you favour A. The reason is that composition offers more extension/flexibility possibilities than generalization. This extension/flexibility refers mostly to runtime/dynamic flexibility.
The benefit is not immediately visible. To see it, you need to wait for the next unexpected change request. So in most cases those who stick to generalization fail when compared to those who embrace composition (except in one obvious case, mentioned later). Hence the rule. From a learning point of view, if you can implement dependency injection successfully, then you should know which one to favour and when. The rule helps you make a decision as well: if you are not sure, then select composition.
Summary: Composition: the coupling is reduced by just having some smaller things you plug into something bigger, and the bigger object just calls the smaller object back. Generalization: from an API point of view, defining that a method can be overridden is a stronger commitment than defining that a method can be called (there are very few occasions when generalization wins). And never forget that with composition you are using inheritance too, from an interface instead of a big class.
Both approaches are used to solve different problems. You don't always need to aggregate two or more classes when inheriting from one class.
Sometimes you do have to aggregate a single class because that class is sealed or has non-virtual members you need to intercept, so you create a proxy layer. That obviously isn't valid in terms of inheritance, but so long as the class you are proxying has an interface you can subscribe to, this can work out fairly well.

How do you define a Single Responsibility?

I know about "class having a single reason to change". Now, what is that exactly? Are there some smells/signs that could tell that class does not have a single responsibility? Or could the real answer hide in YAGNI and only refactor to a single responsibility the first time your class changes?
The Single Responsibility Principle
There are many obvious cases, e.g. CoffeeAndSoupFactory. Coffee and soup in the same appliance can lead to quite distasteful results. In this example, the appliance might be broken into a HotWaterGenerator and some kind of Stirrer. Then a new CoffeeFactory and SoupFactory can be built from those components and any accidental mixing can be avoided.
Among the more subtle cases, the tension between data access objects (DAOs) and data transfer objects (DTOs) is very common. DAOs talk to the database, DTOs are serializable for transfer between processes and machines. Usually DAOs need a reference to your database framework, therefore they are unusable on your rich clients which neither have the database drivers installed nor have the necessary privileges to access the DB.
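A minimal sketch of that split (names illustrative; it assumes a JDBC DataSource on the server side): the DTO is a plain serializable holder the rich client can use, while the DAO keeps the database dependency:

import java.io.Serializable;
import javax.sql.DataSource;

// DTO: plain and serializable, no database dependency; safe for rich clients.
class CustomerDto implements Serializable {
    long id;
    String name;
}

// DAO: depends on database infrastructure; stays on the server.
class CustomerDao {
    private final DataSource dataSource;

    CustomerDao(DataSource dataSource) { this.dataSource = dataSource; }

    CustomerDto findById(long id) {
        // ... query via dataSource and map the row into a CustomerDto ...
        CustomerDto dto = new CustomerDto();
        dto.id = id;
        return dto;
    }
}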
Code Smells
The methods in a class start to be grouped by areas of functionality ("these are the Coffee methods and these are the Soup methods").
Implementing many interfaces.
Write a brief, but accurate description of what the class does.
If the description contains the word "and" then it needs to be split.
Well, this principle is to be used with a grain of salt, to avoid class explosion.
A single responsibility does not translate to single-method classes. It means a single reason for existence: a service that the object provides for its clients.
A nice way to stay on the road: use the object-as-person metaphor. If the object were a person, who would I ask to do this? Assign that responsibility to the corresponding class. However, you wouldn't ask the same person to manage your files, compute salaries, issue paychecks, and verify financial records. Why would you want a single object to do all these? (It's okay if a class takes on multiple responsibilities, as long as they are all related and coherent.)
If you employ CRC cards, they give a nice subtle guideline. If you're having trouble getting all the responsibilities of an object onto a CRC card, it's probably doing too much; a maximum of 7 would do as a good marker.
Another code smell from the refactoring book would be HUGE classes. Shotgun surgery would be another: making a change to one area in a class causes bugs in unrelated areas of the same class.
Finding that you are making changes to the same class for unrelated bug-fixes again and again is another indication that the class is doing too much.
A simple and practical method to check single responsibility (not only of classes but also of methods) is the choice of name. When you design a class, if you can easily find a name that specifies exactly what it defines, you're on the right track.
Difficulty in choosing a name is nearly always a symptom of bad design.
The methods in your class should be cohesive: they should work together and make use of the same data structures internally. If you find you have too many methods that don't seem entirely well related, or that seem to operate on different things, then quite likely you don't have a good single responsibility.
Often it's hard to initially find responsibilities, and sometimes you need to use the class in several different contexts and then refactor the class into two classes as you start to see the distinctions. Sometimes you find that it's because you are mixing an abstract and concrete concept together. They tend to be harder to see, and, again, use in different contexts will help clarify.
The obvious sign is when your class ends up looking like a Big Ball of Mud, which is really the opposite of SRP (single responsibility principle).
Basically, all the object's services should be focused on carrying out a single responsibility, meaning every time your class changes and adds a service which does not respect that, you know you're "deviating" from the "right" path ;)
The cause is usually some quick fixes hastily added to the class to repair defects. So the reason why you are changing the class is usually the best criterion for detecting whether you are about to break the SRP.
Martin's Agile Principles, Patterns, and Practices in C# helped me a lot to grasp SRP. He defines SRP as:
A class should have only one reason to change.
So what is driving change?
Martin's answer is:
[...] each responsibility is an axis of change. (p. 116)
and further:
In the context of the SRP, we define a responsibility to be a reason for change. If you can think of more than one motive for changing a class, that class has more than one responsibility (p. 117)
In fact, SRP is about encapsulating change. If change happens, it should have only a local impact.
Where is YAGNI?
YAGNI can be nicely combined with SRP: when you apply YAGNI, you wait until some change actually happens. If it does, you should be able to clearly see the responsibilities that follow from the reason(s) for the change.
This also means that responsibilities can evolve with each new requirement and change. Thinking further SRP and YAGNI will provide you the means to think in flexible designs and architectures.
Perhaps a little more technical than other smells:
If you find you need several "friend" classes or functions, that's usually a good smell of bad SRP, because the required functionality is not actually exposed publicly by your class.
If you end up with an excessively "deep" hierarchy (a long list of derived classes until you get to leaf classes) or "broad" hierarchy (many, many classes derived shallowly from a single parent class). It's usually a sign that the parent class does either too much or too little. Doing nothing is the limit of that, and yes, I have seen that in practice, with an "empty" parent class definition just to group together a bunch of unrelated classes in a single hierarchy.
I also find that refactoring to single responsibility is hard. By the time you finally get around to it, the different responsibilities of the class will have become entwined in the client code, making it hard to factor one thing out without breaking the other. I'd rather err on the side of "too little" than "too much" myself.
Here are some things that help me figure out if my class is violating SRP:
Fill out the XML doc comments on a class. If you use words like if, and, but, except, when, etc., your class is probably doing too much.
If your class is a domain service, it should have a verb in the name. Many times you have classes like "OrderService", which should probably be broken up into "GetOrderService", "SaveOrderService", "SubmitOrderService", etc. (see the sketch after this list).
If you end up with MethodA that uses MemberA and MethodB that uses MemberB and it is not part of some concurrency or versioning scheme, you might be violating SRP.
If you notice that you have a class that just delegates calls to a lot of other classes, you might be stuck in proxy class hell. This is especially true if you end up instantiating the proxy class everywhere when you could just use the specific classes directly. I have seen a lot of this. Think ProgramNameBL and ProgramNameDAL classes as a substitute for using a Repository pattern.
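As an illustration of the domain service item above (the split and all names are hypothetical):

class Order { /* ... */ }

// A single "OrderService" with get/save/submit would have several
// reasons to change; each of these services has exactly one.
class GetOrderService {
    Order getOrder(long id) { /* fetch the order */ return new Order(); }
}

class SaveOrderService {
    void saveOrder(Order order) { /* persist the order */ }
}

class SubmitOrderService {
    void submitOrder(Order order) { /* validate and submit the order */ }
}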
I've also been trying to get my head around the SOLID principles of OOD, specifically the single responsibility principle, aka SRP (as a side note the podcast with Jeff Atwood, Joel Spolsky and "Uncle Bob" is worth a listen). The big question for me is: What problems is SOLID trying to address?
OOP is all about modeling. The main purpose of modeling is to present a problem in a way that allows us to understand it and solve it. Modeling forces us to focus on the important details. At the same time we can use encapsulation to hide the "unimportant" details so that we only have to deal with them when absolutely necessary.
I guess you should ask yourself: What problem is your class trying to solve? Has the important information you need to solve this problem risen to the surface? Are the unimportant details tucked away so that you only have to think about them when absolutely necessary?
Thinking about these things results in programs that are easier to understand, maintain and extend. I think this is at the heart of OOD and the SOLID principles, including SRP.
Another rule of thumb I'd like to throw in is the following:
If you feel the need to write some sort of cartesian product of cases in your test cases, or if you want to mock certain private methods of the class, Single Responsibility is being violated.
I recently had this in the following way:
I had a certain abstract syntax tree of a coroutine which would later be compiled to C. For now, think of the nodes as Sequence, Iteration and Action. Sequence chains two coroutines, Iteration repeats a coroutine until a user-defined condition is true, and Action performs a certain user-defined action. Furthermore, it is possible to annotate Actions and Iterations with code blocks, which define the actions and conditions to evaluate as the coroutine walks ahead.
It was necessary to apply a certain transformation to all of these code blocks (for those interested: I needed to replace the conceptual user variables with actual implementation variables in order to prevent variable clashes; those who know Lisp macros can think of gensym in action :) ). Thus, the simplest thing that would work was a visitor which knows the operation internally and just calls it on the annotated code blocks of Action and Iteration on visit, traversing all the syntax tree nodes. However, in this case I'd have had to duplicate the assertion "transformation is applied" in my test code for the visitAction method and the visitIteration method. In other words, I had to check the product of the test cases of the responsibilities Traversal (== {traverse iteration, traverse action, traverse sequence}) x Transformation (code block transformed, which blew up into iteration transformed and action transformed). Thus, I was tempted to use PowerMock to remove the transformation method and replace it with some 'return "I was transformed!";' stub.
However, following the rule of thumb, I split the class into a TreeModifier class which contains a NodeModifier instance, which provides methods modifyIteration, modifySequence, modifyCodeblock and so on. Thus, I could easily test the responsibility of traversing, calling the NodeModifier and reconstructing the tree, and test the actual modification of the code blocks separately, removing the need for the product tests, because the responsibilities were now separated (into traversing/reconstructing and the concrete modification).
It is also interesting to note that later on, I could heavily reuse the TreeModifier in various other transformations. :)
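A rough sketch of that split, using the names from the description (TreeModifier, NodeModifier) over a heavily simplified node model, so each responsibility can be tested alone:

interface Node { }
class Action implements Node { String codeBlock; }
class Iteration implements Node { String condition; Node body; }
class Sequence implements Node { Node first, second; }

// Responsibility 1: how a single code block is transformed.
class NodeModifier {
    String modifyCodeBlock(String code) {
        // e.g. replace conceptual user variables with implementation variables
        return code;
    }
}

// Responsibility 2: traversing the tree and applying the NodeModifier.
class TreeModifier {
    private final NodeModifier modifier;

    TreeModifier(NodeModifier modifier) { this.modifier = modifier; }

    Node modify(Node node) {
        if (node instanceof Action a) {
            a.codeBlock = modifier.modifyCodeBlock(a.codeBlock);
        } else if (node instanceof Iteration it) {
            it.condition = modifier.modifyCodeBlock(it.condition);
            modify(it.body);
        } else if (node instanceof Sequence s) {
            modify(s.first);
            modify(s.second);
        }
        return node;
    }
}

In tests, NodeModifier can now be stubbed to return a marker string, and TreeModifier can be checked purely for traversal and reconstruction, avoiding the cartesian product of test cases.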
If you're finding it troublesome to extend the functionality of a class without being afraid you might break something else, or you cannot use the class without modifying tons of its options which modify its behavior, it smells like your class is doing too much.
Once I was working with a legacy class which had a method "ZipAndClean", which was obviously zipping and cleaning the specified folder...