Coupling, Cohesion and the Law of Demeter

The Law of Demeter indicates that you should only speak to objects that you know about directly. That is, do not perform method chaining to talk to other objects. When you do so, you are establishing improper linkages with the intermediary objects, inappropriately coupling your code to other code.
That's bad.
The solution would be for the class you do know about to essentially expose simple wrappers that delegate the responsibility to the object it has the relationship with.
That's good.
But that seems to result in the class having low cohesion. No longer is it responsible for precisely what it does; it also carries delegate methods that, in a sense, make the code less cohesive by duplicating portions of the interface of its related object.
That's bad.
Does it really result in lowering cohesion? Is it the lesser of two evils?
Is this one of those gray areas of development, where you can debate where the line is, or are there strong, principled ways of making a decision of where to draw the line and what criteria you can use to make that decision?

Grady Booch in "Object Oriented Analysis and Design":
"The idea of cohesion also comes from structured design. Simply stated, cohesion
measures the degree of connectivity among the elements of a single module (and
for object-oriented design, a single class or object). The least desirable form of
cohesion is coincidental cohesion, in which entirely unrelated abstractions are
thrown into the same class or module. For example, consider a class comprising
the abstractions of dogs and spacecraft, whose behaviors are quite unrelated. The
most desirable form of cohesion is functional cohesion, in which the elements of
a class or module all work together to provide some well-bounded behavior.
Thus, the class Dog is functionally cohesive if its semantics embrace the behavior
of a dog, the whole dog, and nothing but the dog."
Substitute Dog with Customer in the above and it might be a bit clearer. So the goal is really just to aim for functional cohesion and to move away from coincidental cohesion as much as possible. Depending on your abstractions, this may be simple or could require some refactoring.
Note that cohesion applies just as much to a "module" as to a single class, i.e. a group of classes working together. In this case the Customer and Order classes still have decent cohesion, because they have a strong relationship: customers create orders, orders belong to customers.
Martin Fowler says he'd be more comfortable calling it the "Suggestion of Demeter" (see the article Mocks aren't stubs):
"Mockist testers do talk more about avoiding 'train wrecks' - method chains of style of getThis().getThat().getTheOther(). Avoiding method chains is also known as following the Law of Demeter. While method chains are a smell, the opposite problem of middle men objects bloated with forwarding methods is also a smell. (I've always felt I'd be more comfortable with the Law of Demeter if it were called the Suggestion of Demeter .)"
That sums up nicely where I'm coming from: it is perfectly acceptable and often necessary to have a lower level of cohesion than the strict adherence to the "law" might require. Avoid coincidental cohesion and aim for functional cohesion, but don't get hung up on tweaking where needed to fit in more naturally with your design abstraction.

If you are violating the Law of Demeter by having
int price = customer.getOrder().getPrice();
the solution is not to create a getOrderPrice() and transform the code into
int price = customer.getOrderPrice();
but instead to note that this is a code smell and make the relevant changes that hopefully both increase cohesion and lower coupling. Unfortunately there is no simple refactoring here that always applies, but you should probably apply "tell, don't ask": tell the customer object what to do, rather than asking it for data and acting on it from outside.
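As a rough sketch of what "tell, don't ask" could look like here (the class names and the budget rule are invented for illustration, not taken from anyone's actual code):
class Order {
    private final int price;
    Order(int price) { this.price = price; }
    int price() { return price; }
}
class Customer {
    private final Order order;
    Customer(Order order) { this.order = order; }
    // Behavior lives with the data: callers never see the Order.
    boolean canAfford(int budget) {
        return order.price() <= budget;
    }
}
class Checkout {
    void process(Customer customer) {
        // Tell, don't ask: no customer.getOrder().getPrice() chain.
        if (customer.canAfford(100)) {
            // ... proceed with checkout ...
        }
    }
}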

I think you may have misunderstood what cohesion means. A class that is implemented in terms of several other classes does not necessarily have low cohesion, as long as it represents a clear concept, and has a clear purpose. For example, you may have a class Person, which is implemented in terms of classes Date (for date of birth), Address, and Education (a list of schools the person went to). You may provide wrappers in Person for getting the year of birth, the last school the person went to, or the state where he lives, to avoid exposing the fact that Person is implemented in terms of those other classes. This would reduce coupling, but it would make Person no less cohesive.
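A minimal sketch of that Person example, assuming invented field and method names:
import java.time.LocalDate;
import java.util.List;

// Person stays cohesive: one clear concept, implemented in terms of
// Date/Address/Education without exposing them to callers.
class Address {
    private final String state;
    Address(String state) { this.state = state; }
    String state() { return state; }
}
class Education {
    private final List<String> schools; // in chronological order
    Education(List<String> schools) { this.schools = schools; }
    String lastSchool() { return schools.get(schools.size() - 1); }
}
class Person {
    private final LocalDate dateOfBirth;
    private final Address address;
    private final Education education;
    Person(LocalDate dob, Address address, Education education) {
        this.dateOfBirth = dob;
        this.address = address;
        this.education = education;
    }
    // Thin wrappers reduce coupling; Person is no less cohesive.
    int yearOfBirth()   { return dateOfBirth.getYear(); }
    String homeState()  { return address.state(); }
    String lastSchool() { return education.lastSchool(); }
}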

It’s a grey area.
These principles are meant to help you in your work. If you find you're working for them (i.e. they're getting in your way and/or you find they over-complicate your code), then you're conforming too hard and you need to back off.
Make it work for you, don’t work for it.

I don't know if this actually lowers cohesion.
Aggregation/composition are all about a class utilising other classes to meet the contract it exposes through its public methods.
The class does not need to duplicate the interface of its related objects. It actually hides any knowledge about these aggregated classes from the method caller.
To obey the law of Demeter in the case of multiple levels of class dependency, you just need to apply aggregation/composition and good encapsulation at each level.
In other words, each class has one or more dependencies on other classes, but these are only ever dependencies on the directly referenced class, never on objects returned from its properties/methods.
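Here is a small sketch of that idea with made-up classes: each level talks only to its direct member and never returns it to callers.
// Each level exposes behavior, not the objects behind it, so callers
// of Engine never learn that a FuelPump or Injector exists.
class Injector {
    void spray() { /* ... */ }
}
class FuelPump {
    private final Injector injector = new Injector();
    void deliverFuel() { injector.spray(); } // talks only to its own field
}
class Engine {
    private final FuelPump pump = new FuelPump();
    void start() { pump.deliverFuel(); }     // no engine.getPump().getInjector()
}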

In the situations where there seems to be a tradeoff between coupling and cohesion, I'd probably ask myself "if somebody else had already written this logic, and I were looking for a bug in it, where would I look first?", and write the code that way.

Related

Abstraction as a definition

I am trying to understand the basic OOP concept called abstraction. When I say "understand", I mean not just to learn a definition, but really have a deep understanding.
On the internet, I have seen many definitions such as:
Hiding the low level implementation and providing high level specification
and
focusing on essential qualities rather than specific examples.
I understand that the iPhone button is a great example of abstraction, since I, as a user, don't have to know how the screen is displayed, all I have to know is to press the button.
What do you think of the following conclusion, when it comes to abstraction:
Abstraction takes many specific instances of objects and extracts their common information and functions by providing a single, generalised concept.
So based on this, a class is actually an abstraction of many instances, right?
I disagree with both of your examples. An iPhone button is not an abstraction of the screen, it is an interface to use the phone. A class is also not an abstraction of its instances.
An abstraction can be thought of as treating a specific concept as a form of a more general concept.
To repeat an overused example: all vehicles can move. Cars rotate wheels, airplanes use jets, trains run on tracks.
Given a collection of vehicles, instead of being burdened with knowing the specifics of each vehicle's inner workings, and having to write:
car.RotateWheel();
airplane.StartJet();
train.MoveOnTrack();
we could treat these objects as the more abstract vehicle, and tell them to
vehicle.Move();
In this case vehicle is an abstraction. It does not represent any specific object, but represents the common functionality of cars, airplanes and trains and allows us to interact with these specific objects without knowing anything about them except that they are a type of vehicle.
In the context of OOP, vehicle would most likely be a base class of the more specific types of vehicles.
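Sketched in Java-like code (following the names in the example above; the method bodies are placeholders):
interface Vehicle {
    void move(); // the "what"; each vehicle decides the "how"
}
class Car implements Vehicle {
    public void move() { rotateWheels(); }
    private void rotateWheels() { /* ... */ }
}
class Airplane implements Vehicle {
    public void move() { startJet(); }
    private void startJet() { /* ... */ }
}
class Train implements Vehicle {
    public void move() { moveOnTrack(); }
    private void moveOnTrack() { /* ... */ }
}
class Journey {
    // Callers interact with the abstraction, knowing nothing about
    // wheels, jets, or tracks.
    static void travel(Vehicle vehicle) { vehicle.move(); }
}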
IMHO there are actually two underlying concepts that need to be understood here.
Abstraction: The idea of dealing only with the "what" of something rather than the "how" of something. For example: when you call an object method you only care about what the method does, not how it does it. There are layers of abstraction, i.e. the upper layer is only interested in what the layer below does, not how it does it. Another example: when you are writing assembly instructions you only care what a particular instruction does, not how the underlying circuitry in the CPU executes the instruction.
Generalization: The idea of comparing a bunch of things (objects, functions, basically anything) and figure out the commonality between them and then extracting that commonality. A class with a bunch of properties is the generalization of the instances of the classes as all the instances have the same properties but different values for those properties.
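A tiny sketch of the two ideas side by side (names invented): the interface is the abstraction (the "what"), and the class is a generalization of its instances.
// Abstraction: callers see only *what* a Shape does.
interface Shape {
    double area();
}
// Generalization: Circle captures what all circle instances have in
// common (a radius); each instance supplies its own value.
class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}
class Demo {
    public static void main(String[] args) {
        Shape small = new Circle(1.0);
        Shape big = new Circle(10.0);  // same generalization, different values
        System.out.println(small.area() + " " + big.area());
    }
}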
The goal of object-oriented programming is to bring real-world thinking into software development as much as possible. That is, abstraction means exactly what any dictionary defines it to mean.
For example, one of possible definitions of abstraction in Oxford Dictionary:
The quality of dealing with ideas rather than events.
WordReference.com's definition is even more eloquent:
the act of considering something as a general quality or characteristic, apart from concrete realities, specific objects, or actual instances.
In fact, WordReference.com's entry is itself a perfectly good definition of abstraction, and it may surprise you that it is not a programming explanation at all.
Perhaps you want a more programming-flavored definition of abstraction, so I'll try to provide a good summary:
Abstraction is the process of turning concrete realities into object representations which could be used as archetypes. Usually, in most OOP languages, archetypes are represented by types which in turn could be defined by classes, structures and interfaces. Types may abstract data or behaviors.
One good example of abstraction would be that a chair made of oak wood is still a chair. That's the way our mind works. You learn that certain forms are the most basic definition of many things. Your brain doesn't see all details of a given chair, but it sees that it fulfills the requirements to consider something a chair. Object-oriented programming and abstraction just mirrors this.

Misunderstanding of a sentence in "UML Reference Manual" book (Booch, Rumbaugh, Jacobson)

Recently, I went back to read some parts of the "UML Reference Manual" book, second edition (by Booch, Rumbaugh, and Jacobson).
(see: http://www.amazon.com/Unified-Modeling-Language-Reference-Manual/dp/020130998X)
There I found these "strange" words in the first chapter, "UML Overview", in the "Complexity of UML" section:
There is far too much use of generalization at the expense of essential distinctions. The myth that inheritance is always good has been a curse of object orientation from earliest days.
I can't see how this sentence can be fully in line with the object-oriented paradigm, which treats inheritance as a fundamental principle.
Any idea/help please?
You seem to believe the two points are mutually exclusive. They are not. Inheritance is a fundamental and powerful principle of object-oriented programming, and it is overused.
It is overused typically by inexperienced developers who are so captivated with the idea of inheritance that they are more focused on the inheritance tree than solving the problem. They try to factor out as much code as possible to some parent base class so they can just reuse it throughout the tree, and as a result they have a brittle design.
One of the greatest evils of software engineering is tight coupling between classes. That's the sort of thing that causes you to have to work through the weekend after the customer asks for a simple change. Why? Because making a change in one class has an effect on another class, and fixing that class has an effect on another, and so on.
Well, there is no tighter coupling than inheritance.
When you factor too much out to the "top level," every derived class is coupled to it. And as you find more and more code you want to factor out to various levels, you eventually have these deep trees, and every change made at the top cascades throughout the tree. As a result, you start to have methods that return null or are empty. They're unnecessary for the class, but the inheritance contract demands they be there. This violates the Liskov Substitution Principle.
So use inheritance, of course, but do it smartly. Favor delegation over inheritance if you have any doubt. And when you do use inheritance, make sure you aren't factoring commonalities to the top level (of the whole tree or a subtree) just to reuse common code, but rather because there is a commonality of behavior from top to bottom.
If your tree is more than two or three levels deep (and I think three is really pushing it), you are almost certainly setting yourself up for trouble.
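A short sketch of the smell being described, with invented classes: flight code hoisted into a base class purely for reuse forces a no-op override on a subclass.
// Over-factored base class: every bird "must" fly, because flight code
// was hoisted to the top purely for reuse.
abstract class Bird {
    void fly() { /* shared flight code */ }
}
class Sparrow extends Bird { }
class Penguin extends Bird {
    @Override
    void fly() {
        // Forced no-op: the inheritance contract demands the method,
        // but it makes no sense here -- a Liskov Substitution smell.
    }
}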
Everything is good in moderation. Remember that the quote is not saying "do not use it" or "avoid it"; rather, it is saying that inheritance is an overused principle, reached for when other OO abstractions or principles would work better. Inheritance is powerful, but its coupling is tight.
Wisely (or perhaps just bluntly), the author of the UML book is pointing out the current truism that inheritance is often over-used and over-referenced. What about all the other principles and abstractions? I find that developers typically only hit the OO highlights (inheritance being one) and use that abstraction to excess.
For me it is a good reminder that UML is OO generally, but it is not limited to Java or .NET OO features. Many languages offer only a subset of the abstractions available across all languages; UML attempts to help you model and express many of them.
Remember, the author only said "too much use", not "bad" or "incorrect". Also remember that maybe you are an expert developer who does not apply inheritance incorrectly.

Cohesion and Decoupling, what do they represent?

What are Cohesion and Decoupling? I found information about coupling but not about decoupling.
That article from Aaron is very good for understanding. I'd also recommend Manning's Spring in Action book; it gives very good examples of how Spring solves this problem, and it will definitely improve your understanding.
EDIT:
I came across this in the great book Growing Object-Oriented Software, Guided by Tests:
Coupling:
"Elements are coupled if a change in one forces a change in the other. For example, if two classes inherit from a common parent, then a change in one class might require a change in the other. Think of a combo audio system: It's tightly coupled because if we want to change from analog to digital radio, we must rebuild the whole system. If we assemble a system from separates, it would have low coupling and we could just swap out the receiver. "Loosely" coupled features (i.e., those with low coupling) are easier to maintain."
Cohesion:
"An element's cohesion is a measure of whether its responsibilities form a meaningful unit. For example, a class that parses both dates and URLs is not coherent, because they're unrelated concepts. Think of a machine that washes both clothes and dishes—it's unlikely to do both well. At the other extreme, a class that parses only the punctuation in a URL is unlikely to be coherent, because it doesn't represent a whole concept. To get anything done, the programmer will have to find other parsers for protocol, host, resource, and so on. Features with "high" coherence are easier to maintain."
Cohesion - related to the principle that a class/method should be responsible for one thing only, i.e. there are no stray methods that don't belong in the encapsulation, and a method does only one thing. High/low cohesion is the degree to which this holds.
Coupling - how interdependent different parts of the system are, e.g. how and where there are dependencies. If two classes make calls to methods of each other then they are tightly coupled, as changing one would mean having to change the other. Decoupling is the process of making something that was tightly coupled less so, or not at all.
Flexible systems have High Cohesion and Loose Coupling.
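As a small illustration of decoupling (names invented): introduce an interface so one class depends on an abstraction instead of on another concrete class.
// Before: Report and EmailSender call each other directly (tight coupling).
// After: Report depends only on this abstraction.
interface Notifier {
    void notify(String message);
}
class EmailSender implements Notifier {
    public void notify(String message) { /* send email */ }
}
class Report {
    private final Notifier notifier;
    Report(Notifier notifier) { this.notifier = notifier; }
    void publish() {
        // Swapping EmailSender for any other Notifier requires no
        // change here -- the classes are now loosely coupled.
        notifier.notify("report published");
    }
}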
For coupling, this Wikipedia article should answer all your questions. This article deals with cohesion.
"Decoupling" is just another name for "little/low coupling".
So these terms answer these questions:
How much does each part of your project depend on another part?
If you wanted to use just a part of your project (like to solve a specific problem) how much do you need to know about all the rest of the project?
Is every part of your project focused on a single solution to a specific problem or do solutions "leak" to other parts?
Here are my thoughts on cohesion. Imagine there is a module. Inside that module, we have some tasks. When those tasks are highly related to each other, we say it has high cohesion. When those tasks are not related, we say it has low cohesion. My best attempt to explain decoupling is that decoupling is the act of removing coupling.
Low coupling helps us get to high cohesion! Remember that we want our module to have related tasks and one single responsibility. But what is coupling? Coupling is the degree of dependency on other modules needed to achieve our module's single responsibility. So by low coupling, we are saying that we are not very dependent on external modules, and hence we have high cohesion.
However, if we have many dependencies on external modules, we would have high coupling and low cohesion. Get it?
Other more decorated thinkers and groups say:
"Cohesion is the degree to which the tasks performed by a single module are functionally related." (IEEE, 1983)
"Cohesion is the "glue" that holds a module together. It can be thought of as the type of association among the component elements of a module. Generally, one wants the highest level of cohesion possible." (Bergland, 1981)
"A software component is said to exhibit a high degree of cohesion if the elements in that unit exhibit a high degree of functional relatedness. This means that each element in the program unit should be essential for that unit to achieve its purpose." (Sommerville, 1989)
Decoupling allows the separation of object interaction from concrete classes and inheritance hierarchies by moving it into distinct layers of abstraction: discrete code modules interact with those abstractions polymorphically rather than with each other's encapsulated, concrete code.

Objectively Good OO Design Principles

Premise
I believe that there is a way to objectively define "Good" and "Bad" Object-Oriented design techniques and that, as a community, we can determine what these are. This is an academic exercise. If done with seriousness and resolve, I believe it can be of great benefit to the community as a whole. The community will benefit by having a place we can all point to and say, "This technique is 'Good' or 'Bad' and we should or should not use it unless there are special circumstances."
Plan
For this effort, we should focus on Object-Oriented principles (as opposed to Functional, Set-based, or other type of languages).
I'm not planning on accepting one answer, instead I'd like the answers to contribute to the final collection or be a rational debate of the issues.
I realize that this may be controversial, but I believe we can iron something out. There are exceptions to most every rule and I believe this is where the disagreement will fall. We should make declarations and then note relevant exceptions and objections from dissenters.
Basis
I'd like to take a stab at defining "Good" and "Bad":
"Good" - This technique will work the first time and be a lasting solution. It will be easy to change later and will pay the time investment of its implementation quickly. It can be consistently applied and easily recognized by maintenance programmers in the future. Overall, it contributes to the good function and lowers cost of maintenance over the life of the product.
"Bad" - This technique may work in the short term, but soon becomes a liability. It is immediately difficult to change or becomes more difficult over time. The initial investment may be small or large, but it quickly becomes a growing cost, eventually becoming a sunk cost and must be removed or worked around constantly. It is subjectively applied and inconsistent and may be a surprise or not easily recognizable by maintenance programmers in the future. Overall, it contributes to the ultimate increasing cost of maintaining and/or operating the product and inhibits or prevents changes to the product. By inhibiting or preventing change, it becomes not just a direct cost, but an opportunity cost and a significant liability.
Starter
As an example of what I think a good contribution would look like, I'd like to propose a "Good" principle:
Separation of Concerns
[Short description]
Example
[Code or some other type of example]
Goals
[Explanation of what problems this principle prevents]
Applicability
[Why, where, and when would I use this principle?]
Exceptions
[When wouldn't I use this principle, or where might it actually be harmful?]
Objections
[Note any dissenting opinions or objections from the community here]
There are some well understood principles that might form a good starting point:
Open/Closed Principle
Liskov Substitution Principle
Law of Demeter
It is also a good idea to study existing design patterns to find principles behind them, the most important one is to (generally) prefer composition over inheritance.
Separation of Concerns
Prefer Aggregation to Mixin-style Inheritance
While functionality can be gained by inheriting from a utility class, in many cases it can all be gained using a member of said class.
Example (Boost.Noncopyable):
Boost.Noncopyable is a C++ class with a private (inaccessible) copy constructor and assignment operator. It can be used as a base class to prevent the subclass from being copied or assigned (this is the common behavior). It can also be used as a direct member.
Convert this:
class Foo : private boost::noncopyable { ... };
To this:
class Foo {
    ...
private:
    boost::noncopyable noncopyable_;
};
Example (Lockable object):
Java introduced the synchronized keyword as an idiom to allow any object to be used in a threadsafe manner. This can be mirrored in other languages to provide mutexes to arbitrary objects. A common example is data structures:
template <typename T> class ThreadsafeVector : public Vector<T>, public Mutex { ... };
Instead, the two classes could be aggregated together.
template <typename T>
struct ThreadsafeVector {
    Vector<T> vector;
    Mutex mutex;
};
Goals
Inheritance is frequently abused as a code-reuse mechanism. If inheritance is used for anything besides an Is-A relationship, overall code clarity is reduced.
With deeper chains, mixin base classes greatly increase the likelihood of a "Diamond of Death" scenario, wherein a subclass ends up inheriting multiple copies of a mixin class.
Applicability
Any language that supports multiple inheritance.
Exceptions
Any case where the mixin class provides or requires overloading members. In this case, inheritance usually implies an Is-Implemented-In-Terms-Of relationship, and an aggregate will not be sufficient.
Objections
The result of this transformation may lead to public members (e.g. MyThreadSafeDataStructure may have a publicly-accessible Mutex as a component).
I think the short answer is that "good" OO designs are robust under change, with the least code breakage for any requirements change. If you consider all the usual rules, they all tend to that same conclusion.
The difficulty is that you can't evaluate the "goodness" of the design without context; it is, I believe, a theorem that for any modularization, there exists a change in requirements that will maximize breakage, causing every method in every class to be touched.
If you want to be rigorous about it, you can develop a collection of "change cases" and order them in probability order, so that you minimize the breakage for the highest probability changes.
In most cases, though, some well-developed intuition helps a lot: device-specific or platform-specific things tend to change, business rules and business processes tend to change, while the implementations of, say, arithmetic change very rarely. (Not, as you might imagine, never. Consider, for example, a business system that may or may not be able to make use of platform-supported BCD arithmetic.)

Inheritance vs. Aggregation [closed]

There are two schools of thought on how to best extend, enhance, and reuse code in an object-oriented system:
Inheritance: extend the functionality of a class by creating a subclass. Override superclass members in the subclasses to provide new functionality. Make methods abstract/virtual to force subclasses to "fill-in-the-blanks" when the superclass wants a particular interface but is agnostic about its implementation.
Aggregation: create new functionality by taking other classes and combining them into a new class. Attach a common interface to this new class for interoperability with other code.
What are the benefits, costs, and consequences of each? Are there other alternatives?
I see this debate come up on a regular basis, but I don't think it's been asked on Stack Overflow yet (though there is some related discussion). There's also a surprising lack of good Google results for it.
It's not a matter of which is the best, but of when to use what.
In the 'normal' cases a simple question is enough to find out whether we need inheritance or aggregation.
If the new class is more or less the original class, use inheritance. The new class is now a subclass of the original class.
If the new class must have the original class, use aggregation. The new class now has the original class as a member.
However, there is a big gray area, so we need several other tricks.
If we have used inheritance (or we plan to use it) but we only use part of the interface, or we are forced to override a lot of functionality to keep the correlation logical, then we have a big nasty smell that indicates we should have used aggregation.
If we have used aggregation (or we plan to use it) but we find out we need to copy almost all of the functionality, then we have a smell that points in the direction of inheritance.
To cut it short: we should use aggregation if part of the interface is not used or has to be changed to avoid an illogical situation. We only need to use inheritance if we need almost all of the functionality without major changes. And when in doubt, use aggregation.
Another possibility, for the case where we have a class that needs part of the functionality of the original class, is to split the original class into a root class and a subclass, and let the new class inherit from the root class. But take care not to create an illogical separation.
Let's add an example. We have a class 'Dog' with the methods 'Eat', 'Walk', 'Bark', and 'Play'.
class Dog
Eat;
Walk;
Bark;
Play;
end;
We now need a class 'Cat' that needs 'Eat', 'Walk', 'Purr', and 'Play'. So we first try to extend it from Dog.
class Cat is Dog
Purr;
end;
Looks alright, but wait. This cat can Bark (cat lovers will kill me for that). And a barking cat violates the principles of the universe. So we need to override the Bark method so that it does nothing.
class Cat is Dog
Purr;
Bark = null;
end;
Ok, this works, but it smells bad. So let's try an aggregation:
class Cat
has Dog;
Eat = Dog.Eat;
Walk = Dog.Walk;
Play = Dog.Play;
Purr;
end;
Ok, this is nice. This cat does not bark anymore, not even silently. But it still has an internal dog that wants out. So let's try solution number three:
class Pet
Eat;
Walk;
Play;
end;
class Dog is Pet
Bark;
end;
class Cat is Pet
Purr;
end;
This is much cleaner. No internal dogs. And cats and dogs are at the same level. We can even introduce other pets to extend the model. Unless it is a fish, or something else that does not walk. In that case we again need to refactor. But that is something for another time.
At the beginning of the GoF book they state
Favor object composition over class inheritance.
This is further discussed here
The difference is typically expressed as the difference between "is a" and "has a". Inheritance, the "is a" relationship, is summed up nicely in the Liskov Substitution Principle. Aggregation, the "has a" relationship, is just that - it shows that the aggregating object has one of the aggregated objects.
Further distinctions exist as well: private inheritance in C++ indicates an "is implemented in terms of" relationship, which can also be modeled by aggregating (non-exposed) member objects.
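For example, a stack that is "implemented in terms of" a list can be modeled with a non-exposed member instead of inheritance; a minimal sketch using the standard Deque:
import java.util.ArrayDeque;
import java.util.Deque;

// Stack has-a Deque; it is implemented in terms of one without
// inheriting (and thus exposing) the whole Deque interface.
class Stack<T> {
    private final Deque<T> items = new ArrayDeque<>();
    void push(T item) { items.push(item); }
    T pop()           { return items.pop(); }
    boolean isEmpty() { return items.isEmpty(); }
}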
Here's my most common argument:
In any object-oriented system, there are two parts to any class:
Its interface: the "public face" of the object. This is the set of capabilities it announces to the rest of the world. In a lot of languages, this set is bundled up into a "class". Usually these are the method signatures of the object, though it varies a bit by language.
Its implementation: the "behind the scenes" work that the object does to satisfy its interface and provide functionality. This is typically the code and member data of the object.
One of the fundamental principles of OOP is that the implementation is encapsulated (ie:hidden) within the class; the only thing that outsiders should see is the interface.
When a subclass inherits from a superclass, it typically inherits both the implementation and the interface. This, in turn, means that you're forced to accept both as constraints on your class.
With aggregation, you get to choose either implementation or interface, or both -- but you're not forced into either. The functionality of an object is left up to the object itself. It can defer to other objects as it likes, but it's ultimately responsible for itself. In my experience, this leads to a more flexible system: one that's easier to modify.
So, whenever I'm developing object-oriented software, I almost always prefer aggregation over inheritance.
I gave an answer to "Is a" vs "Has a": which one is better?
Basically I agree with other folks: use inheritance only if your derived class truly is the type you're extending, not merely because it contains the same data. Remember that inheritance means the subclass gains the methods as well as the data.
Does it make sense for your derived class to have all the methods of the superclass? Or do you just quietly promise yourself that those methods should be ignored in the derived class? Or do you find yourself overriding methods from the superclass, making them no-ops so no one calls them inadvertently? Or giving hints to your API doc generation tool to omit the method from the doc?
Those are strong clues that aggregation is the better choice in that case.
I see a lot of "is-a vs. has-a; they're conceptually different" responses on this and the related questions.
The one thing I've found in my experience is that trying to determine whether a relationship is "is-a" or "has-a" is bound to fail. Even if you can correctly make that determination for the objects now, changing requirements mean that you'll probably be wrong at some point in the future.
Another thing I've found is that it's very hard to convert from inheritance to aggregation once there's a lot of code written around an inheritance hierarchy. Just switching from a superclass to an interface means changing nearly every subclass in the system.
And, as I mentioned elsewhere in this post, inheritance tends to be less flexible than aggregation.
So, you have a perfect storm of arguments against inheritance whenever you have to choose one or the other:
Your choice will likely be the wrong one at some point
Changing that choice is difficult once you've made it.
Inheritance tends to be a worse choice as it's more constraining.
Thus, I tend to choose aggregation -- even when there appears to be a strong is-a relationship.
The question is normally phrased as Composition vs. Inheritance, and it has been asked here before.
I wanted to make this a comment on the original question, but 300 characters bites [;<).
I think we need to be careful. First, there are more flavors than the two rather specific examples made in the question.
Also, I suggest that it is valuable not to confuse the objective with the instrument. One wants to make sure that the chosen technique or methodology supports achievement of the primary objective, but I don't think out-of-context which-technique-is-best discussion is very useful. It does help to know the pitfalls of the different approaches along with their clear sweet spots.
For example, what are you out to accomplish, what do you have available to start with, and what are the constraints?
Are you creating a component framework, even a special purpose one? Are interfaces separable from implementations in the programming system, or is that accomplished by a practice using a different sort of technology? Can you separate the inheritance structure of interfaces (if any) from the inheritance structure of classes that implement them? Is it important to hide the class structure of an implementation from the code that relies on the interfaces the implementation delivers? Are there multiple implementations to be usable at the same time, or is the variation more over time as a consequence of maintenance and enhancement? This and more needs to be considered before you fixate on a tool or a methodology.
Finally, is it that important to lock distinctions in the abstraction and how you think of it (as in is-a versus has-a) to different features of the OO technology? Perhaps so, if it keeps the conceptual structure consistent and manageable for you and others. But it is wise not to be enslaved by that and the contortions you might end up making. Maybe it is best to stand back a level and not be so rigid (but leave good narration so others can tell what's up). [I look for what makes a particular portion of a program explainable, but some times I go for elegance when there is a bigger win. Not always the best idea.]
I'm an interface purist, and I am drawn to the kinds of problems and approaches where interface purism is appropriate, whether building a Java framework or organizing some COM implementations. That doesn't make it appropriate for everything, not even close to everything, even though I swear by it. (I have a couple of projects that appear to provide serious counter-examples against interface purism, so it will be interesting to see how I manage to cope.)
I'll cover the where-these-might-apply part. Here's an example of both, in a game scenario. Suppose, there's a game which has different types of soldiers. Each soldier can have a knapsack which can hold different things.
Inheritance here?
There's a marine, a green beret & a sniper. These are types of soldiers. So, there's a base class Soldier with Marine, GreenBeret & Sniper as derived classes.
Aggregation here?
The knapsack can contain grenades, guns (of different types), a knife, a medikit, etc. A soldier can be equipped with any of these at any given point in time; he can also have a bulletproof vest which acts as armor when he is attacked, reducing the damage he takes to a certain percentage. The Soldier class contains an object of the bulletproof-vest class and a knapsack class which holds references to these items.
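A rough sketch of that game scenario (all names and the damage rule are invented):
import java.util.ArrayList;
import java.util.List;

// Inheritance: a Sniper is-a Soldier.
abstract class Soldier {
    // Aggregation: a Soldier has-a Knapsack and (optionally) a vest.
    protected final Knapsack knapsack = new Knapsack();
    protected BulletproofVest vest; // may be null if not equipped
    int takeDamage(int damage) {
        // The vest absorbs part of the incoming damage, if worn.
        return vest == null ? damage : vest.absorb(damage);
    }
}
class Marine extends Soldier { }
class GreenBeret extends Soldier { }
class Sniper extends Soldier { }

class Knapsack {
    private final List<Item> items = new ArrayList<>();
    void add(Item item) { items.add(item); }
}
interface Item { }
class Grenade implements Item { }
class Medikit implements Item { }

class BulletproofVest {
    int absorb(int damage) { return damage / 2; } // hypothetical 50% reduction
}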
I think it's not an either/or debate. It's just that:
is-a (inheritance) relationships occur less often than has-a (composition) relationships.
Inheritance is harder to get right, even when it's appropriate to use it, so due diligence has to be taken because it can break encapsulation, encourage tight coupling by exposing implementation and so forth.
Both have their place, but inheritance is riskier.
Although of course it wouldn't make sense to model a Shape and a Square with 'has-a': a Square is-a Shape (while it may 'have-a' Point), so here inheritance is due.
People tend to think about inheritance first when trying to design something extensible, that is what's wrong.
Favouring happens when both candidates qualify: A and B are options and you favour A. The reason is that composition offers more extension/flexibility possibilities than generalization. This extension/flexibility refers mostly to runtime/dynamic flexibility.
The benefit is not immediately visible. To see the benefit you need to wait for the next unexpected change request. So in most cases those who stuck to generalization fail when compared to those who embraced composition (except in one obvious case, mentioned later). Hence the rule. From a learning point of view, if you can implement dependency injection successfully then you should know which one to favour and when. The rule helps you in making a decision as well: if you are not sure, then select composition.
Summary: Composition: the coupling is reduced by just having some smaller things you plug into something bigger, and the bigger object just calls the smaller object back. Generalization: from an API point of view, defining that a method can be overridden is a stronger commitment than defining that a method can be called (there are very few occasions when generalization wins). And never forget that with composition you are using inheritance too, from an interface instead of a big class.
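A short sketch of that summary (names invented): the bigger object calls a small pluggable part through an interface, which can be swapped even at runtime.
// The "small thing" is defined by an interface, not a big base class.
interface DiscountPolicy {
    double apply(double price);
}
class SeasonalDiscount implements DiscountPolicy {
    public double apply(double price) { return price * 0.9; }
}
class PriceCalculator {
    private DiscountPolicy policy;
    PriceCalculator(DiscountPolicy policy) { this.policy = policy; }
    // Runtime flexibility: the policy can be replaced without touching
    // or subclassing PriceCalculator.
    void setPolicy(DiscountPolicy policy) { this.policy = policy; }
    double priceOf(double base) { return policy.apply(base); }
}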
Both approaches are used to solve different problems. You don't always need to aggregate two or more classes when inheriting from one class.
Sometimes you have to aggregate a single class because that class is sealed or has non-virtual members you need to intercept, so you create a proxy layer. That obviously isn't valid in terms of inheritance, but as long as the class you are proxying has an interface you can implement, this can work out fairly well.
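A minimal sketch of that proxy idea (the classes are hypothetical): the library class is final, so we wrap it behind an interface rather than subclassing it.
// Imagine this class is final (sealed) in a library we cannot change.
final class LegacyClock {
    long now() { return System.currentTimeMillis(); }
}
interface Clock {
    long now();
}
// Proxy layer: aggregates the sealed class and intercepts calls to it.
class LoggingClock implements Clock {
    private final LegacyClock delegate = new LegacyClock();
    public long now() {
        long t = delegate.now();
        System.out.println("clock read: " + t); // interception point
        return t;
    }
}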