This is one of last year's exams, for which no answers were provided:
[1]: https://i.stack.imgur.com/RDzz0.jpg
It shows a diagram with two classes, Customer and Supplier, both inheriting from a class Partner. Another class, Customer_Supplier, inherits from both Customer and Supplier.
The question asks what SOLID principle this design would violate. Despite careful checking, I could not find any violation, and would really like to know.
This diagram shows the famous diamond of death, a notoriously delicate problem when working with multiple inheritance.
The academic answer
The academic answer to your question is that this design violates the Single Responsibility Principle. The reasoning is the following:
A class should have a single responsibility
Supplier and Customer both have a single responsibility
Customer_Supplier inherits at least two responsibilities
In reality, the SRP is about reasons to change. But the general tendency is to apply the same reasoning: Customer_Supplier might have to change because of changes in Supplier or changes in Customer.
Elimination of the other candidates
The design is in principle compliant with the Open/Closed Principle: each class could be extended, and unless the contrary is proven, there is no need to modify any of them.
A class diagram is rarely sufficient to confirm or deny compliance with the Liskov Substitution Principle, since this principle is about the contracts/promises of the classes and their subclasses. LSP requires that an object of a subclass can be used wherever an object of the superclass is expected. At first sight, a Customer_Supplier could be used in place of a Supplier as well as in place of a Customer. Of course, one could easily imagine a class that breaks this, but that is equally true of even the simplest single inheritance. The fact is that nothing in the diagram lets us assume the opposite.
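To make the substitution argument concrete, here is a minimal Java sketch (the names and methods are illustrative, not taken from the exam, and since Java has no multiple class inheritance the diamond is modelled with interfaces): as long as Customer_Supplier honours the promises of both parents, the same object can be passed wherever either parent is expected.
// Illustrative only: the diamond is modelled with interfaces because Java
// does not allow a class to inherit from two classes.
interface Partner { String name(); }
interface Customer extends Partner { void placeOrder(String item); }
interface Supplier extends Partner { void deliver(String item); }

class CustomerSupplier implements Customer, Supplier {
    private final String name;
    CustomerSupplier(String name) { this.name = name; }
    @Override public String name() { return name; }
    @Override public void placeOrder(String item) { System.out.println(name + " orders " + item); }
    @Override public void deliver(String item)    { System.out.println(name + " delivers " + item); }
}

public class LspDemo {
    static void thank(Customer c) { System.out.println("Thank you, " + c.name()); }
    static void pay(Supplier s)   { System.out.println("Paying " + s.name()); }

    public static void main(String[] args) {
        CustomerSupplier acme = new CustomerSupplier("ACME");
        // The same object is usable wherever a Customer or a Supplier is expected:
        thank(acme);
        pay(acme);
    }
}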
The Interface Segregation Principle is not violated either. On the contrary: if a client does not need the full Customer_Supplier interface, it can use one of the parent classes; if a client uses Customer_Supplier, it is probably because it needs the full interface.
Finally, the Dependency Inversion Principle is not relevant here. Nothing indicates that one class is more concrete or more abstract than another; in fact, these could even all be abstract classes. So there is no reason to think that this design fails to comply with "Abstractions should not depend on details. Details (concrete implementations) should depend on abstractions".
Is the academic answer flawed?
The diamond of death is an extreme case that is often given as an example to argue that multiple inheritance is bad. Multiple inheritance can certainly be misused, but so can single inheritance and any other programming construct.
Let's bring in some more objectivity:
Classes should have a single responsibility. If a class inherits from another, it may then have two responsibilities: its own and the responsibility of the superclass. On the other hand, nothing tells us that these responsibilities are independent: one could be a sub-responsibility of the other.
Consequently, if Supplier carries a sub-responsibility of BusinessPartner, and Customer carries a sub-responsibility of BusinessPartner, they both carry sub-responsibilities of the same larger responsibility, especially when the Open/Closed Principle is taken into account. This means that Customer_Supplier could in the end still be just a sub-responsibility of this single larger responsibility. So the SRP could perfectly well be respected.
This is not advanced computer science, but basic set theory. You can use the same reasoning for reasons to change.
Another argument applies to reasons to change: if the subclass uses only the public interface of the superclass (which is a robust practice in view of LSP's history constraint), the subclass would not be impacted by changes to the superclass any more than by changes to any other class it depends upon. So the single reason to change could still hold.
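As a small illustration of that second argument (BusinessPartner and the method names here are hypothetical): a subclass that only calls the superclass's public methods is shielded from changes to the superclass's internals.
// Hypothetical example: the subclass relies only on the superclass's public API.
class BusinessPartner {
    private String displayName;                 // internal detail, free to change
    BusinessPartner(String displayName) { this.displayName = displayName; }
    public String getDisplayName() { return displayName; }
}

class Customer extends BusinessPartner {
    Customer(String displayName) { super(displayName); }

    // Uses only the public getDisplayName(); if BusinessPartner later stores
    // first and last names separately, this method does not need to change.
    public String greeting() { return "Dear " + getDisplayName(); }
}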
For all these reasons, I'd reformulate the academic answer as follows: if this design violates a SOLID principle, it can only be the SRP. Nevertheless, it does not necessarily violate it.
And to drive the point home, I'll conclude with a quote from R.C. Martin, who coined the SRP:
And this gets to the crux of the Single Responsibility Principle. This principle is about people.
And this design does not say anything about people.
And to finish with a philosophical question: is multiple inheritance really needed?
Related
From Wikipedia:
The single responsibility principle states that every class should have a single responsibility, and that responsibility should be entirely encapsulated by the class.
Does that mean implementing multiple interfaces violates this principle?
I would say not by itself. A class can have one responsibility, but do multiple things in the process, and implement one interface for each set of things it needs to do to fulfill its responsibility.
Also, interfaces in Java can be used to say things about what properties the class has (for example, Comparable and Serializable), but not really say anything about the class's responsibility.
However, if a class implements multiple interfaces, each of which corresponds to one responsibility, then that would be a violation of that principle.
Maybe, but not necessarily.
An interface is not a responsibility. There's a very powerful mode of architecture which views interfaces as defining the role the object may play in the application.
Think of what that means. You can have a Person class with all sorts of interfaces (let's use a .net convention for naming)
class Person : IAmAStudent, IDrawSocialSecurity, IAmACitizen {
    private SocialSecurityNumber ssn;

    public Person(SocialSecurityNumber ssn) { this.ssn = ssn; }

    public SocialSecurityNumber getSocialSecurityNumber() {
        return this.ssn;
    }
}
Now obviously this cannot violate SRP. It clearly has only one reason for change - if the relationship between people and social security numbers changes. Yet the object implements many interfaces and plays several roles in the application.
Now if you're implementing multiple interfaces that expose different functionality, you might be violating SRP, but that can be a bit of a judgement call as well. The Single Responsibility Principle is a great rule of thumb for achieving loose coupling, but that's not the only ideal in town. There's also high cohesion, which states that related code should live together. The two are fundamentally at odds (though there are often ways to achieve a good balance). So you might reasonably make a choice in the direction of one over the other and decide consciously to violate SRP.
Ultimately, SRP and all the SOLID rules are more about making sure you think along certain lines, not that you follow them blindly every time.
"Single Responsibility" depends on the level of abstraction. For example, a complex system, considering it at a system level, may have one responsibility. For instance, a TV system's responsibility is to show video picture. At the next, lower level, that system is made of sub-systems, monitor, power unit, etc. At this level, each of these units have their own responsibilities.
In the same way, a class, at one level may be considered to have a single responsibility. But, at a lower level, it may have other constituent modules (classes, interfaces etc) that perform parts of its job. For example, a Student class's responsibility is to represent a student abstraction. It may however have another unit (a class) that represents student's address.
In this way, using multiple interfaces do not by itself violate object-oriented principles.
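A minimal Java sketch of the Student example above (Address and the field names are assumptions): Student keeps the single responsibility of representing a student at its own level of abstraction, while delegating the address details to a constituent class.
// Assumed, illustrative classes for the Student example above.
class Address {
    private final String street;
    private final String city;
    Address(String street, String city) { this.street = street; this.city = city; }
    String asLabel() { return street + ", " + city; }
}

class Student {
    private final String name;
    private final Address address;   // a constituent unit handling part of the job
    Student(String name, Address address) { this.name = name; this.address = address; }

    // Student still has one responsibility at its own level of abstraction:
    // representing the student. Address formatting is delegated.
    String mailingLabel() { return name + "\n" + address.asLabel(); }
}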
How to ensure maintainability in a class? Can it simply be done by creating class using design patterns or is there something else involved? Also, what are the characteristics of a good method?
You won't do badly by following the SOLID and DRY principles.
SOLID is:
SRP (Single Responsibility Principle)
the notion that an object should have only a single responsibility.
OCP (Open/Closed Principle)
the notion that "software entities … should be open for extension, but closed for modification".
LSP (Liskov Substitution Principle)
the notion that "objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program". See also design by contract.
ISP (Interface Segregation Principle)
the notion that "many client-specific interfaces are better than one general-purpose interface."
DIP (Dependency Inversion Principle)
the notion that one should "Depend upon Abstractions. Do not depend upon concretions."
Dependency injection is one method of following this principle.
And DRY stands for Don't Repeat Yourself, meaning you should strive to remove any duplication in your code.
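As a rough Java sketch of the DIP and dependency injection (the MessageSender and ReportService names are made up for illustration): the high-level class depends only on an abstraction, and the concrete implementation is injected from outside.
// Illustrative sketch of DIP: high-level code depends on an abstraction.
interface MessageSender {                       // the abstraction
    void send(String to, String body);
}

class SmtpSender implements MessageSender {     // a low-level detail
    @Override public void send(String to, String body) {
        System.out.println("SMTP to " + to + ": " + body);
    }
}

class ReportService {                           // high-level policy
    private final MessageSender sender;
    // The concrete sender is injected; ReportService never names SmtpSender.
    ReportService(MessageSender sender) { this.sender = sender; }
    void sendReport(String recipient) { sender.send(recipient, "Monthly report"); }
}
At the composition root you would write something like new ReportService(new SmtpSender()), and a test could inject a fake sender instead.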
Put in a lot of effort to make sure you have a good interface. Once you have that, you can completely rewrite the class, if you want, without affecting any other code in the project. If your class is so big that you can't easily rewrite it, then that is an issue too.
Although Oded's answer is good for ensuring the maintainability of a program or library, this question is about class maintainability, and for that there are only two requirements: a good interface and strong cohesion.
There are two schools of thought on how to best extend, enhance, and reuse code in an object-oriented system:
Inheritance: extend the functionality of a class by creating a subclass. Override superclass members in the subclasses to provide new functionality. Make methods abstract/virtual to force subclasses to "fill-in-the-blanks" when the superclass wants a particular interface but is agnostic about its implementation.
Aggregation: create new functionality by taking other classes and combining them into a new class. Attach a common interface to this new class for interoperability with other code.
What are the benefits, costs, and consequences of each? Are there other alternatives?
I see this debate come up on a regular basis, but I don't think it's been asked on Stack Overflow yet (though there is some related discussion). There's also a surprising lack of good Google results for it.
It's not a matter of which is the best, but of when to use what.
In the 'normal' cases a simple question is enough to find out if we need inheritance or aggregation.
If the new class essentially is the original class, use inheritance. The new class is now a subclass of the original class.
If the new class must have the original class, use aggregation. The new class now has the original class as a member.
However, there is a big gray area. So we need several other tricks.
If we have used inheritance (or we plan to use it) but we only use part of the interface, or we are forced to override a lot of functionality to keep the correlation logical, then we have a big nasty smell that indicates we should have used aggregation.
If we have used aggregation (or we plan to use it) but we find out we need to copy almost all of the functionality, then we have a smell that points in the direction of inheritance.
To cut it short: we should use aggregation if part of the interface is not used or has to be changed to avoid an illogical situation. We only need to use inheritance if we need almost all of the functionality without major changes. And when in doubt, use aggregation.
Another possibility, for the case where we have a class that needs only part of the functionality of the original class, is to split the original class into a root class and a subclass, and let the new class inherit from the root class. But take care not to create an illogical separation.
Let's add an example. We have a class 'Dog' with methods: 'Eat', 'Walk', 'Bark', 'Play'.
class Dog
    Eat;
    Walk;
    Bark;
    Play;
end;
We now need a class 'Cat' that needs 'Eat', 'Walk', 'Purr', and 'Play'. So we first try to extend it from Dog.
class Cat is Dog
    Purr;
end;
Looks alright, but wait. This cat can Bark (cat lovers will kill me for that). And a barking cat violates the principles of the universe. So we need to override the Bark method so that it does nothing.
class Cat is Dog
    Purr;
    Bark = null;
end;
OK, this works, but it smells bad. So let's try aggregation:
class Cat
    has Dog;
    Eat = Dog.Eat;
    Walk = Dog.Walk;
    Play = Dog.Play;
    Purr;
end;
OK, this is nice. This cat does not bark anymore, not even silently. But it still has an internal dog that wants out. So let's try solution number three:
class Pet
    Eat;
    Walk;
    Play;
end;

class Dog is Pet
    Bark;
end;

class Cat is Pet
    Purr;
end;
This is much cleaner. No internal dogs. And cats and dogs are at the same level. We can even introduce other pets to extend the model. Unless it is a fish, or something that does not walk; in that case we again need to refactor. But that is something for another time.
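For readers who prefer a concrete language, here is roughly how that last refactoring might look in Java (a sketch only; the method bodies are placeholders):
// Sketch of the final refactoring: common behaviour in Pet, specifics below it.
abstract class Pet {
    void eat()  { System.out.println("eating"); }
    void walk() { System.out.println("walking"); }
    void play() { System.out.println("playing"); }
}

class Dog extends Pet {
    void bark() { System.out.println("woof"); }
}

class Cat extends Pet {
    void purr() { System.out.println("purr"); }
}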
At the beginning of the GoF book, they state:
Favor object composition over class inheritance.
This is further discussed here
The difference is typically expressed as the difference between "is a" and "has a". Inheritance, the "is a" relationship, is summed up nicely in the Liskov Substitution Principle. Aggregation, the "has a" relationship, is just that - it shows that the aggregating object has one of the aggregated objects.
Further distinctions exist as well: private inheritance in C++ indicates an "is implemented in terms of" relationship, which can also be modeled by the aggregation of (non-exposed) member objects.
Here's my most common argument:
In any object-oriented system, there are two parts to any class:
Its interface: the "public face" of the object. This is the set of capabilities it announces to the rest of the world. In a lot of languages, this set is well defined by the class. Usually these are the method signatures of the object, though it varies a bit by language.
Its implementation: the "behind the scenes" work that the object does to satisfy its interface and provide functionality. This is typically the code and member data of the object.
One of the fundamental principles of OOP is that the implementation is encapsulated (i.e. hidden) within the class; the only thing that outsiders should see is the interface.
When a subclass inherits from a superclass, it typically inherits both the implementation and the interface. This, in turn, means that you're forced to accept both as constraints on your class.
With aggregation, you get to choose either implementation or interface, or both -- but you're not forced into either. The functionality of an object is left up to the object itself. It can defer to other objects as it likes, but it's ultimately responsible for itself. In my experience, this leads to a more flexible system: one that's easier to modify.
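A small Java sketch of that point (all names are invented for the example): the wrapper picks up exactly the interface it wants by forwarding to an aggregated object, without being forced to accept that object's implementation or the rest of its surface.
// Illustrative: Printer exposes only the interface it chooses, and forwards
// the work to an aggregated Formatter instead of inheriting from it.
interface Renderer {
    String render(String text);
}

class Formatter {                          // existing class with a wide surface
    String toUpper(String s) { return s.toUpperCase(); }
    String toBanner(String s) { return "*** " + s + " ***"; }
    void flushCaches() { /* irrelevant to Printer */ }
}

class Printer implements Renderer {
    private final Formatter formatter = new Formatter();   // aggregation

    // Printer chooses to expose only render(); nothing else of Formatter leaks out.
    @Override public String render(String text) {
        return formatter.toBanner(formatter.toUpper(text));
    }
}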
So, whenever I'm developing object-oriented software, I almost always prefer aggregation over inheritance.
I gave an answer to "Is a" vs. "Has a": which one is better?
Basically I agree with other folks: use inheritance only if your derived class truly is the type you're extending, not merely because it contains the same data. Remember that inheritance means the subclass gains the methods as well as the data.
Does it make sense for your derived class to have all the methods of the superclass? Or do you just quietly promise yourself that those methods should be ignored in the derived class? Or do you find yourself overriding methods from the superclass, making them no-ops so no one calls them inadvertently? Or giving hints to your API doc generation tool to omit the method from the doc?
Those are strong clues that aggregation is the better choice in that case.
I see a lot of "is-a vs. has-a; they're conceptually different" responses on this and the related questions.
The one thing I've found in my experience is that trying to determine whether a relationship is "is-a" or "has-a" is bound to fail. Even if you can correctly make that determination for the objects now, changing requirements mean that you'll probably be wrong at some point in the future.
Another thing I've found is that it's very hard to convert from inheritance to aggregation once there's a lot of code written around an inheritance hierarchy. Just switching from a superclass to an interface means changing nearly every subclass in the system.
And, as I mentioned elsewhere in this post, inheritance tends to be less flexible than aggregation.
So, you have a perfect storm of arguments against inheritance whenever you have to choose one or the other:
Your choice will likely be the wrong one at some point
Changing that choice is difficult once you've made it.
Inheritance tends to be a worse choice as it's more constraining.
Thus, I tend to choose aggregation -- even when there appears to be a strong is-a relationship.
The question is normally phrased as Composition vs. Inheritance, and it has been asked here before.
I wanted to make this a comment on the original question, but the 300-character limit bites [;<).
I think we need to be careful. First, there are more flavors than the two rather specific examples made in the question.
Also, I suggest that it is valuable not to confuse the objective with the instrument. One wants to make sure that the chosen technique or methodology supports achievement of the primary objective, but I don't think out-of-context which-technique-is-best discussion is very useful. It does help to know the pitfalls of the different approaches along with their clear sweet spots.
For example, what are you out to accomplish, what do you have available to start with, and what are the constraints?
Are you creating a component framework, even a special purpose one? Are interfaces separable from implementations in the programming system, or is that accomplished by a practice using a different sort of technology? Can you separate the inheritance structure of interfaces (if any) from the inheritance structure of classes that implement them? Is it important to hide the class structure of an implementation from the code that relies on the interfaces the implementation delivers? Are there multiple implementations to be usable at the same time, or is the variation more over time as a consequence of maintenance and enhancement? This and more needs to be considered before you fixate on a tool or a methodology.
Finally, is it that important to lock distinctions in the abstraction and how you think of it (as in is-a versus has-a) to different features of the OO technology? Perhaps so, if it keeps the conceptual structure consistent and manageable for you and others. But it is wise not to be enslaved by that and the contortions you might end up making. Maybe it is best to stand back a level and not be so rigid (but leave good narration so others can tell what's up). [I look for what makes a particular portion of a program explainable, but some times I go for elegance when there is a bigger win. Not always the best idea.]
I'm an interface purist, and I am drawn to the kinds of problems and approaches where interface purism is appropriate, whether building a Java framework or organizing some COM implementations. That doesn't make it appropriate for everything, not even close to everything, even though I swear by it. (I have a couple of projects that appear to provide serious counter-examples against interface purism, so it will be interesting to see how I manage to cope.)
I'll cover the where-these-might-apply part. Here's an example of both, in a game scenario. Suppose there's a game that has different types of soldiers. Each soldier can have a knapsack which can hold different things.
Inheritance here?
There's a marine, a green beret and a sniper. These are types of soldiers. So there's a base class Soldier with Marine, GreenBeret and Sniper as derived classes.
Aggregation here?
The knapsack can contain grenades, guns (different types), a knife, a medikit, etc. A soldier can be equipped with any of these at any given point in time, plus he can also have a bulletproof vest that acts as armor when he is attacked, reducing the injury he takes by a certain percentage. The Soldier class contains an object of the bulletproof vest class and of the knapsack class, which holds references to these items.
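A rough Java sketch of that game scenario (class names follow the description above; the details are invented): the soldier types use inheritance, while the equipment uses aggregation.
import java.util.ArrayList;
import java.util.List;

// Inheritance: kinds of soldiers.
abstract class Soldier {
    protected final Knapsack knapsack = new Knapsack();   // aggregation
    protected BulletproofVest vest;                       // optional equipment
    abstract String specialty();
}

class Marine     extends Soldier { String specialty() { return "assault"; } }
class GreenBeret extends Soldier { String specialty() { return "special ops"; } }
class Sniper     extends Soldier { String specialty() { return "long range"; } }

// Aggregation: a knapsack holds whatever items the soldier carries.
class Item { final String name; Item(String name) { this.name = name; } }

class Knapsack {
    private final List<Item> items = new ArrayList<>();
    void add(Item item) { items.add(item); }
    List<Item> contents() { return items; }
}

class BulletproofVest {
    // Reduces incoming damage by a fixed percentage (illustrative numbers).
    int absorb(int damage) { return (int) (damage * 0.4); }
}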
I think it's not an either/or debate. It's just that:
is-a (inheritance) relationships occur less often than has-a (composition) relationships.
Inheritance is harder to get right, even when it's appropriate to use it, so due diligence has to be taken because it can break encapsulation, encourage tight coupling by exposing implementation and so forth.
Both have their place, but inheritance is riskier.
Although of course it wouldn't make sense to model Shape, Point, and Square purely with 'has-a' relationships; a Square is-a Shape, so here inheritance is due.
People tend to think about inheritance first when trying to design something extensible, that is what's wrong.
Favouring happens when both candidates qualify. A and B are options and you favour A. The reason is that composition offers more extension/flexibility possibilities than generalization. This extension/flexibility refers mostly to runtime/dynamic flexibility.
The benefit is not immediately visible. To see the benefit you need to wait for the next unexpected change request. So in most cases those who stick to generalization fail when compared with those who embrace composition (except in one obvious case, mentioned later). Hence the rule. From a learning point of view, if you can implement dependency injection successfully then you should know which one to favour and when. The rule helps you in making a decision as well; if you are not sure, then select composition.
Summary: Composition: the coupling is reduced by just having some smaller things you plug into something bigger, and the bigger object just calls the smaller object back. Generalization: from an API point of view, defining that a method can be overridden is a stronger commitment than defining that a method can be called (there are very few occasions when generalization wins). And never forget that with composition you are using inheritance too, from an interface instead of a big class.
Both approaches are used to solve different problems. You don't always need to aggregate over two or more classes when inheriting from one class.
Sometimes you do have to aggregate a single class because that class is sealed or has non-virtual members you need to intercept, so you create a proxy layer. That obviously isn't valid in terms of inheritance, but so long as the class you are proxying has an interface you can implement, this can work out fairly well.
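For example (a sketch with invented names), if a third-party class is final, you can still intercept calls by implementing the same interface and forwarding to a wrapped instance:
// Illustrative proxy-by-aggregation around a class we cannot subclass.
interface Clock {
    long now();
}

final class SystemClock implements Clock {      // sealed/final: cannot inherit
    @Override public long now() { return System.currentTimeMillis(); }
}

class LoggingClock implements Clock {
    private final Clock inner;
    LoggingClock(Clock inner) { this.inner = inner; }

    // Intercepts the call, then forwards to the wrapped instance.
    @Override public long now() {
        long t = inner.now();
        System.out.println("now() -> " + t);
        return t;
    }
}
Callers that depend on Clock never notice whether they are given a SystemClock or a LoggingClock wrapping one.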
Information-Expert, Tell-Don't-Ask, and SRP are often mentioned together as best practices. But I think they are at odds. Here is what I'm talking about.
Code that favors SRP but violates Tell-Don't-Ask & Info-Expert:
Customer bob = ...;
// TransferObjectFactory has to use Customer's accessors to do its work,
// violates Tell Don't Ask
CustomerDTO dto = TransferObjectFactory.createFrom(bob);
Code that favors Tell-Don't-Ask & Info-Expert but violates SRP:
Customer bob = ...;
// Now Customer is doing more than just representing the domain concept of Customer,
// violates SRP
CustomerDTO dto = bob.toDTO();
Please fill me in on how these practices can co-exist peacefully.
Definitions of the terms,
Information Expert: objects that have the data needed for an operation should host the operation.
Tell Don't Ask: don't ask objects for data in order to do work; tell the objects to do the work.
Single Responsibility Principle: each object should have a narrowly defined responsibility.
I don't think that they are so much at odds as they are emphasizing different things that will cause you pain. One is about structuring code to make it clear where particular responsibilities are and reducing coupling, the other is about reducing the reasons to modify a class.
We all have to make decisions each and every day about how to structure code and what dependencies we are willing to introduce into designs.
We have built up a lot of useful guidelines, maxims and patterns that can help us to make the decisions.
Each of these is useful to detect different kinds of problems that could be present in our designs. For any specific problem that you may be looking at there will be a sweet spot somewhere.
The different guidelines do contradict each other. Just applying every piece of guidance you have heard or read will not make your design better.
For the specific problem you are looking at today you need to decide what the most important factors that are likely to cause you pain are.
You can talk about "Tell Don't Ask" when you ask for an object's state in order to tell the object to do something.
In your first example, TransferObjectFactory.createFrom is just a converter. It doesn't tell the Customer object to do something after inspecting its state.
I think the first example is correct.
Those classes are not at odds. The DTO is simply serving as a conduit of data from storage that is intended to be used as a dumb container. It certainly doesn't violate the SRP.
On the other hand the .toDTO method is questionable -- why should Customer have this responsibility? For "purity's" sake I would have another class whose job it is to create DTOs from business objects like Customer.
Don't forget these principles are principles; when you can get away with a simpler solution until changing requirements force the issue, then do so. Needless complexity is definitely something to avoid.
I highly recommend, BTW, Robert C. Martin's Agile Software Development: Principles, Patterns, and Practices for a much more in-depth treatment of this subject.
DTOs with a sister class (like you have) violate all three principles you stated, and encapsulation, which is why you're having problems here.
What are you using this CustomerDTO for, and why can't you simply use Customer, and have the DTOs data inside the customer? If you're not careful, the CustomerDTO will need a Customer, and a Customer will need a CustomerDTO.
TellDontAsk says that if you are basing a decision on the state of one object (e.g. a customer), then that decision should be performed inside the customer class itself.
An example is if you want to remind the Customer to pay any outstanding bills, so you call
List<Bill> bills = customer.GetOutstandingBills();
PaymentReminder.RemindCustomer(customer, bills);
this is a violation. Instead you want to do
customer.RemindAboutOutstandingBills();
(and of course you will need to pass in the PaymentReminder as a dependency upon construction of the customer).
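A sketch of what that might look like (Bill, PaymentReminder, and the method bodies are assumptions): the reminder collaborator is injected, and the decision about which bills are outstanding stays inside Customer.
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: the decision about outstanding bills stays inside Customer.
class Bill {
    final double amount;
    final boolean paid;
    Bill(double amount, boolean paid) { this.amount = amount; this.paid = paid; }
}

interface PaymentReminder {
    void remind(Customer customer, List<Bill> bills);
}

class Customer {
    private final List<Bill> bills = new ArrayList<>();
    private final PaymentReminder reminder;   // injected on construction

    Customer(PaymentReminder reminder) { this.reminder = reminder; }

    void addBill(Bill bill) { bills.add(bill); }

    // Tell, don't ask: callers tell the customer to handle its own reminders.
    void remindAboutOutstandingBills() {
        List<Bill> outstanding = new ArrayList<>();
        for (Bill b : bills) {
            if (!b.paid) outstanding.add(b);
        }
        if (!outstanding.isEmpty()) reminder.remind(this, outstanding);
    }
}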
Information Expert says the same thing pretty much.
Single Responsibility Principle can be easily misunderstood - it says that the customer class should have one responsibility, but also that the responsibility of grouping data, methods, and other classes aligned with the 'Customer' concept should be encapsulated by only one class. What constitutes a single responsibility is extremely hard to define exactly and I would recommend more reading on the matter.
Craig Larman discussed this when he introduced GRASP in Applying UML and Patterns to Object-Oriented Analysis and Design and Iterative Development (2004):
In some situations, a solution suggested by Expert is undesirable, usually because of problems in coupling and cohesion (these principles are discussed later in this chapter).
For example, who should be responsible for saving a Sale in a database? Certainly, much of the information to be saved is in the Sale object, and thus Expert could argue that the responsibility lies in the Sale class. And, by logical extension of this decision, each class would have its own services to save itself in a database. But acting on that reasoning leads to problems in cohesion, coupling, and duplication. For example, the Sale class must now contain logic related to database handling, such as that related to SQL and JDBC (Java Database Connectivity). The class no longer focuses on just the pure application logic of “being a sale.” Now other kinds of responsibilities lower its cohesion. The class must be coupled to the technical database services of another subsystem, such as JDBC services, rather than just being coupled to other objects in the domain layer of software objects, so its coupling increases. And it is likely that similar database logic would be duplicated in many persistent classes.
All these problems indicate violation of a basic architectural principle: design for a separation of major system concerns. Keep application logic in one place (such as the domain software objects), keep database logic in another place (such as a separate persistence services subsystem), and so forth, rather than intermingling different system concerns in the same component.[11]
Supporting a separation of major concerns improves coupling and cohesion in a design. Thus, even though by Expert we could find some justification for putting the responsibility for database services in the Sale class, for other reasons (usually cohesion and coupling), we'd end up with a poor design.
Thus the SRP generally trumps Information Expert.
However, the Dependency Inversion Principle can combine well with Expert. The argument here would be that Customer should not have a dependency of CustomerDTO (general to detail), but the other way around. This would mean that CustomerDTO is the Expert and should know how to build itself given a Customer:
CustomerDTO dto = new CustomerDTO(bob);
If you're allergic to new, you could go static:
CustomerDTO dto = CustomerDTO.buildFor(bob);
Or, if you hate both, we come back around to an AbstractFactory:
public abstract class DTOFactory<D, E> {
    public abstract D createDTO(E entity);
}

public class CustomerDTOFactory extends DTOFactory<CustomerDTO, Customer> {
    @Override
    public CustomerDTO createDTO(Customer entity) {
        return new CustomerDTO(entity);
    }
}
I don't 100% agree with your two examples being representative, but from a general perspective you seem to be reasoning from the assumption of two objects and only two objects.
If you separate the problem out further and create one (or more) specialized objects to take on the individual responsibilities you have, and then have the controlling object pass instances of the other objects it is using to the specialized objects you have carved off, you should be able to observe a happy compromise between SRP (each responsibility is handled by a specialized object) and Tell Don't Ask (the controlling object tells the specialized objects it is composing to do whatever it is they do, to each other).
It's a composition solution that relies on a controller of some sort to coordinate and delegate between other objects without getting mired in their internal details.
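A minimal sketch of that compromise (all names are invented): a small coordinator tells the domain object and a dedicated DTO to collaborate, so the mapping lives in its own place and no caller interrogates the customer field by field.
// Illustrative: a coordinator tells specialized objects to collaborate.
class Customer {
    private final String name;
    Customer(String name) { this.name = name; }
    // The customer writes itself onto whatever sink it is given (tell, don't ask).
    void describeTo(CustomerSink sink) { sink.acceptName(name); }
}

interface CustomerSink {
    void acceptName(String name);
}

class CustomerDTO implements CustomerSink {
    private String name;
    @Override public void acceptName(String name) { this.name = name; }
    String getName() { return name; }
}

class CustomerExportController {
    // The controller's only job is coordination: it tells the customer
    // to describe itself to the DTO; it never digs into either object.
    CustomerDTO export(Customer customer) {
        CustomerDTO dto = new CustomerDTO();
        customer.describeTo(dto);
        return dto;
    }
}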
The Law of Demeter indicates that you should only speak to objects that you know about directly. That is, do not perform method chaining to talk to other objects. When you do so, you are establishing improper linkages with the intermediary objects, inappropriately coupling your code to other code.
That's bad.
The solution would be for the class you do know about to essentially expose simple wrappers that delegate the responsibility to the object it has the relationship with.
That's good.
But that seems to result in the class having low cohesion. No longer is it simply responsible for precisely what it does; it also has delegate methods that, in a sense, make the code less cohesive by duplicating portions of the interface of its related object.
That's bad.
Does it really result in lowering cohesion? Is it the lesser of two evils?
Is this one of those gray areas of development, where you can debate where the line is, or are there strong, principled ways of making a decision of where to draw the line and what criteria you can use to make that decision?
Grady Booch in "Object Oriented Analysis and Design":
"The idea of cohesion also comes from structured design. Simply stated, cohesion
measures the degree of connectivity among the elements of a single module (and
for object-oriented design, a single class or object). The least desirable form of
cohesion is coincidental cohesion, in which entirely unrelated abstractions are
thrown into the same class or module. For example, consider a class comprising
the abstractions of dogs and spacecraft, whose behaviors are quite unrelated. The
most desirable form of cohesion is functional cohesion, in which the elements of
a class or module all work together to provide some well-bounded behavior.
Thus, the class Dog is functionally cohesive if its semantics embrace the behavior
of a dog, the whole dog, and nothing but the dog."
Substitute Dog with Customer in the above and it might be a bit clearer. So the goal is really just to aim for functional cohesion and to move away from coincidental cohesion as much as possible. Depending on your abstractions, this may be simple or could require some refactoring.
Note that cohesion applies just as much to a "module" as to a single class, i.e. a group of classes working together. So in this case the Customer and Order classes still have decent cohesion, because they have a strong relationship: customers create orders, orders belong to customers.
Martin Fowler says he'd be more comfortable calling it the "Suggestion of Demeter" (see the article Mocks aren't stubs):
"Mockist testers do talk more about avoiding 'train wrecks' - method chains of style of getThis().getThat().getTheOther(). Avoiding method chains is also known as following the Law of Demeter. While method chains are a smell, the opposite problem of middle men objects bloated with forwarding methods is also a smell. (I've always felt I'd be more comfortable with the Law of Demeter if it were called the Suggestion of Demeter .)"
That sums up nicely where I'm coming from: it is perfectly acceptable and often necessary to have a lower level of cohesion than the strict adherence to the "law" might require. Avoid coincidental cohesion and aim for functional cohesion, but don't get hung up on tweaking where needed to fit in more naturally with your design abstraction.
If you are violating the Law of Demeter by having
int price = customer.getOrder().getPrice();
the solution is not to create a getOrderPrice() and transform the code into
int price = customer.getOrderPrice();
but instead to note that this is a code smell and make the relevant changes that hopefully both increase cohesion and lower coupling. Unfortunately there is no simple refactoring here that always applies, but you should probably apply Tell, Don't Ask.
I think you may have misunderstood what cohesion means. A class that is implemented in terms of several other classes does not necessarily have low cohesion, as long as it represents a clear concept, and has a clear purpose. For example, you may have a class Person, which is implemented in terms of classes Date (for date of birth), Address, and Education (a list of schools the person went to). You may provide wrappers in Person for getting the year of birth, the last school the person went to, or the state where he lives, to avoid exposing the fact that Person is implemented in terms of those other classes. This would reduce coupling, but it would make Person no less cohesive.
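A short sketch of that Person example (the field and method names are assumptions): the wrappers hide the constituent classes from callers without making Person any less cohesive.
import java.time.LocalDate;
import java.util.List;

// Illustrative: Person is implemented in terms of other classes, yet stays cohesive.
class Address {
    final String state;
    Address(String state) { this.state = state; }
}

class School {
    final String name;
    School(String name) { this.name = name; }
}

class Person {
    private final LocalDate dateOfBirth;
    private final Address address;
    private final List<School> education;   // schools attended, in order

    Person(LocalDate dateOfBirth, Address address, List<School> education) {
        this.dateOfBirth = dateOfBirth;
        this.address = address;
        this.education = education;
    }

    // Wrappers that hide the constituent classes from callers
    // (assumes the person attended at least one school):
    int yearOfBirth()   { return dateOfBirth.getYear(); }
    String homeState()  { return address.state; }
    String lastSchool() { return education.get(education.size() - 1).name; }
}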
It’s a grey area.
These principles are meant to help you in your work. If you find you're working for them (i.e. they're getting in your way and/or you find they overcomplicate your code), then you're conforming too hard and need to back off.
Make it work for you, don’t work for it.
I don't know if this actually lowers cohesion.
Aggregation/composition are all about a class utilising other classes to meet the contract it exposes through its public methods.
The class does not need to duplicate the interface of its related objects. It's actually hiding any knowledge about these aggregated classes from the method caller.
To obey the law of Demeter in the case of multiple levels of class dependency, you just need to apply aggregation/composition and good encapsulation at each level.
In other words, each class has one or more dependencies on other classes, but these are only ever dependencies on the referenced class and not on any objects returned from its properties/methods.
In the situations where there seems to be a tradeoff between coupling and cohesion, I'd probably ask myself "if somebody else had already written this logic, and I were looking for a bug in it, where would I look first?", and write the code that way.