Are "Dependency Inversion" and "Design to Interfaces" the same principles? - oop

Do the "Dependency Inversion Principle" (DIP) and "Design to Interfaces Principle" express the same principle? If not, what would be the difference?
EDIT
To clarify and narrow down the context a bit: by interface I mean a programmatic interface, like a Java interface or a pure abstract base class in C++. No other 'contracts' are involved.

I just wanted to pitch in and quote Derek Greer on another question very similar to this one, since it does answer this question nicely, in my opinion.
"What the Dependency Inversion Principle does not refer to is the simple practice of abstracting dependencies through the use of interfaces (e.g. MyService → [ILogger ⇐ Logger]). While this decouples a component from the specific implementation detail of the dependency, it does not invert the relationship between the consumer and dependency (e.g. [MyService → IMyServiceLogger] ⇐ Logger)."
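A rough C# sketch of the distinction the quote draws (the logger types are purely illustrative):

// Abstracted dependency: MyService consumes a generic ILogger defined by the
// logging side, so the relationship is not inverted.
public interface ILogger { void Log(string message); }

public class MyService
{
    private readonly ILogger logger;
    public MyService(ILogger logger) { this.logger = logger; }
}

// Inverted dependency: MyService owns the interface it needs, and the logging
// implementation has to conform to MyService's contract.
public interface IMyServiceLogger { void LogServiceEvent(string message); }

public class MyServiceLoggerAdapter : IMyServiceLogger
{
    public void LogServiceEvent(string message) { /* forward to a concrete logger */ }
}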

Dependency inversion is ensuring your higher level modules do not depend on lower level modules. So your application logic does not depend on your business model or business logic. There is a clear separation of concerns.
The principle states that your application defines and owns an interface that your business tier must implement. This way your business tier depends on your application's defined interface. Thus the dependencies are inverted.
Expanding this out: if you now have three applications, each with its own interface implemented by the business tier, then the business tier can change, and as long as it still implements those interfaces, your applications are none the wiser.
A good java example of this principle and how such a project would be structured can be found here, on my website: http://www.jeenisoftware.com/maven-dip-principle-example/
Dependency inversion is not so much about designing to an interface, although that is what is happening; it's more about implementing to a service. In other words, it's a kind of service-oriented design pattern.
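As a rough C# illustration of that ownership (the interface and class names are hypothetical): the application tier declares the interface it needs and the business tier implements it, so the source-code dependency points from the business tier toward the application.

using System.Collections.Generic;

// Application tier: defines and owns the abstraction it depends on.
public interface IOrderProvider
{
    IReadOnlyList<string> GetOpenOrderIds();
}

public class OrderScreen
{
    private readonly IOrderProvider orders;
    public OrderScreen(IOrderProvider orders) { this.orders = orders; }
}

// Business tier: references the application's interface and implements it,
// so the business logic can change without the application noticing.
public class OrderService : IOrderProvider
{
    public IReadOnlyList<string> GetOpenOrderIds()
    {
        return new List<string>(); // in reality, query the business model here
    }
}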

Design to interfaces (as a variant of design by contract) supports dependency inversion. Both reduce coupling. However:
Design to interfaces and DBC say nothing about how objects are created (e.g. DIP, abstract factories, factory methods).
Dependency inversion (dependency injection) generally relies on interfaces, but focuses on the object lifecycle rather than class design. You can use DIP with abstract base classes if you wish, so you aren't really committed to pure interfaces.
The approaches tend to complement each other.

"design by contract" and "dependency injection" are very closely related, but have different levels of abstraction. "design by contract" is a very general design principle, which can be supported by various techniques; In a language that has a Java-like class system, you one technique is to use interfaces to avoid concrete class dependencies. "dependency injection" is another technique, that often relies on the existence of interfaces to function (but need not always do that - It depends on the language). I would say "dependency injection" supports the principle of "design by contract".

Related

Dependency Inversion Principle (SOLID) vs Encapsulation (Pillars of OOP)

I was recently having a debate about the Dependency Inversion Principle, Inversion of Control and Dependency Injection. In relation to this topic we were debating whether these principles violate one of the pillars of OOP, namely Encapsulation.
My understanding of these things is:
The Dependency Inversion Principle implies that objects should depend upon abstractions, not concretions - this is the fundamental principle upon which the Inversion of Control pattern and Dependency Injection are implemented.
Inversion of Control is a pattern implementation of the Dependency Inversion Principle, where abstract dependencies replace concrete dependencies, allowing concretions of the dependency to be specified outside of the object.
Dependency Injection is a design pattern that implements Inversion of Control and provides dependency resolution. Injection occurs when a dependency is passed to a dependent component. In essence, the Dependency Injection pattern provides a mechanism for coupling dependency abstractions with concrete implementations.
Encapsulation is the process whereby the data and functionality required by a higher-level object are insulated away and made inaccessible; thus, the programmer is unaware of how an object is implemented.
The debate got to a sticking point with the following statement:
IoC isn't OOP because it breaks Encapsulation
Personally, I think that the Dependency Inversion Principle and the Inversion of Control pattern should be observed religiously by all OOP developers - and I live by the following quote:
If there is (potentially) more than one way to skin a cat, then do not
behave like there is only one.
Example 1:
class Program {
    void Main() {
        SkinCatWithKnife skinner = new SkinCatWithKnife();
        skinner.SkinTheCat();
    }
}
Here we see an example of encapsulation. The programmer only has to call Main() and the cat will be skinned, but what if he wanted to skin the cat with, say, a set of razor-sharp teeth?
Example 2:
class Program {
    // Encapsulation
    ICatSkinner skinner;

    public Program(ICatSkinner skinner) {
        // Inversion of control
        this.skinner = skinner;
    }

    void Main() {
        this.skinner.SkinTheCat();
    }
}

... new Program(new SkinCatWithTeeth());
// Dependency Injection
Here we observe the Dependency Inversion Principle and Inversion of Control, since an abstraction (ICatSkinner) is provided in order to allow concrete dependencies to be passed in by the programmer. At last, there is more than one way to skin a cat!
The quarrel here is: does this break encapsulation? Technically, one could argue that .SkinTheCat() is still encapsulated away within the Main() method call, so the programmer is unaware of the behavior of this method, so I do not think this breaks encapsulation.
Delving a little deeper, I think that IoC containers break OOP because they use reflection, but I am not convinced that IoC breaks OOP, nor am I convinced that IoC breaks encapsulation. In fact I'd go as far as to say that:
Encapsulation and Inversion of Control coincide with each other
happily, allowing programmers to pass in only the concretions of a
dependency, whilst hiding away the overall implementation via
encapsulation.
Questions:
Is IoC a direct implementation of the Dependency Inversion Principle?
Does IoC always break encapsulation, and therefore OOP?
Should IoC be used sparingly, religiously or appropriately?
What is the difference between IoC and an IoC container?
Does IoC always break encapsulation, and therefore OOP?
No, these are hierarchically related concerns. Encapsulation is one of the most misunderstood concepts in OOP, but I think the relationship is best described via Abstract Data Types (ADTs). Essentially, an ADT is a general description of data and associated behaviour. This description is abstract; it omits implementation details. Instead, it describes an ADT in terms of pre- and post-conditions.
This is what Bertrand Meyer calls design by contract. You can read more about this seminal description of OOD in Object-Oriented Software Construction.
Objects are often described as data with behaviour. This means that an object without data isn't really an object. Thus, you have to get data into the object in some way.
You could, for example, pass data into an object via its constructor:
public class Foo
{
    private readonly int bar;

    public Foo(int bar)
    {
        this.bar = bar;
    }

    // Other members may use this.bar in various ways.
}
Another option is to use a setter function or property. I hope we can agree that so far, encapsulation is not violated.
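For completeness, a property-based variant of the same idea might look like this (a sketch, not from the original answer):

public class Foo
{
    // The data arrives via a property instead of the constructor; callers
    // still only see the public contract, so nothing internal is exposed.
    public int Bar { get; set; }
}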
What happens if we change bar from an integer to another concrete class?
public class Foo
{
    private readonly Bar bar;

    public Foo(Bar bar)
    {
        this.bar = bar;
    }

    // Other members may use this.bar in various ways.
}
The only difference compared to before is that bar is now an object, instead of a primitive. However, that's a false distinction, because in object-oriented design, an integer is also an object. It's only because of performance optimisations in various programming languages (Java, C#, etc.) that there's an actual difference between primitives (strings, integers, bools, etc.) and 'real' objects. From an OOD perspective, they're all alike. Strings have behaviours as well: you can turn them into all-upper-case, reverse them, etc.
Is encapsulation violated if Bar is a sealed/final, concrete class with only non-virtual members?
bar is only data with behaviour, just like an integer, but apart from that, there's no difference. So far, encapsulation isn't violated.
What happens if we allow Bar to have a single virtual member?
Is encapsulation broken by that?
Can we still express pre- and post-conditions about Foo, given that Bar has a single virtual member?
If Bar adheres to the Liskov Substitution Principle (LSP), it wouldn't make a difference. The LSP explicitly states that changing the behaviour mustn't change the correctness of the system. As long as that contract is fulfilled, encapsulation is still intact.
Thus, the LSP (one of the SOLID principles, of which the Dependency Inversion Principle is another) doesn't violate encapsulation; it describes a principle for maintaining encapsulation in the presence of polymorphism.
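A small sketch of what "as long as the contract is fulfilled" might look like (Bar's contract and its override are illustrative):

public class Bar
{
    // Contract: Describe never returns null.
    public virtual string Describe()
    {
        return "bar";
    }
}

public class FancyBar : Bar
{
    // LSP-compliant override: the behaviour differs, but the post-condition
    // (a non-null result) still holds, so Foo's guarantees are unaffected.
    public override string Describe()
    {
        return "fancy bar";
    }
}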
Does the conclusion change if Bar is an abstract base class? An interface?
No, it doesn't: those are just different degrees of polymorphism. Thus we could rename Bar to IBar (in order to suggest that it's an interface) and pass it into Foo as its data:
public class Foo
{
    private readonly IBar bar;

    public Foo(IBar bar)
    {
        this.bar = bar;
    }

    // Other members may use this.bar in various ways.
}
bar is just another polymorphic object, and as long as the LSP holds, encapsulation holds.
TL;DR
There's a reason SOLID is also known as the Principles of OOD. Encapsulation (i.e. design-by-contract) defines the ground rules. SOLID describes guidelines for following those rules.
Is IoC a direct implementation of the Dependency Inversion Principle?
The two are related in that they both talk about abstractions, but that's about it. Inversion of Control is:
a design in which custom-written portions of a computer program
receive the flow of control from a generic, reusable library (source)
Inversion of Control allows us to hook our custom code into the pipeline of a reusable library. In other words, Inversion of Control is about frameworks. A reusable library that does not apply Inversion of Control is merely a library. A framework is a reusable library that does apply Inversion of Control.
Do note that we as developers can only apply Inversion of Control if we are writing a framework ourselves; you can't apply Inversion of Control as an application developer. We can (and should), however, apply the Dependency Inversion Principle and the Dependency Injection pattern.
Does IoC always break encapsulation, and therefore OOP?
Since IoC is just about hooking into the pipeline of a framework, there is nothing that's leaking here. So the real question is: does Dependency Injection break encapsulation?
The answer to that question is: no, it does not. It doesn't break encapsulation because of two reasons:
Since the Dependency Inversion Principle states that we should program against an abstraction, a consumer will not be able to access the internals of the implementation being used, so that implementation does not break encapsulation for the client. The implementation might not even be known or accessible at compile time (because it lives in an unreferenced assembly), in which case it cannot leak implementation details and break encapsulation.
Although the implementation accepts the dependencies it requires through its constructor, those dependencies will typically be stored in private fields and can't be accessed by anyone (even if a consumer depends directly on the concrete type), so this does not break encapsulation either.
Should IoC be used sparingly, religiously or appropriately?
Again, the question is really "Should the DIP and DI be used sparingly?" In my opinion, the answer is: no, you should actually use them throughout the application. Obviously, you should never apply things religiously. You should apply the SOLID principles, and the DIP is a crucial part of those principles. They will make your application more flexible and more maintainable, and in most scenarios it is very appropriate to apply them.
What is the difference between IoC and an IoC container?
Dependency Injection is a pattern that can be applied either with or without an IoC container. An IoC container is merely a tool that helps you build your object graph in a more convenient way, provided your application applies the SOLID principles correctly. If your application doesn't apply the SOLID principles, you will have a hard time using an IoC container; you will have a hard time applying Dependency Injection. Or, to put it more broadly, you will have a hard time maintaining your application anyway. But an IoC container is in no way a required tool. I'm developing and maintaining an IoC container for .NET, but I don't always use a container for my applications. For the big BLOBAs (boring line-of-business applications) I often use a container, but for smaller apps (or Windows services) I don't always use one. However, I almost always use Dependency Injection as a pattern, because it is the most effective way to adhere to the DIP.
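For illustration only, composing the graph by hand (reusing the ICatSkinner types from the question above) might look like this minimal sketch; a container would simply automate this wiring:

public static class CompositionRoot
{
    public static Program Compose()
    {
        // All wiring lives in one place; the classes themselves only declare
        // what they need through their constructors.
        ICatSkinner skinner = new SkinCatWithTeeth();
        return new Program(skinner);
    }
}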
Note: Since an IoC container helps us in applying the Dependency Injection pattern, "IoC container" is a terrible name for such a library.
But despite anything I said above, please note that:
in the real world of the software developer, usefulness trumps theory [from Robert C. Martin's Agile Principles, Patterns, and Practices]
In other words, even if DI did break encapsulation, it wouldn't matter, because these techniques and patterns have proven to be very valuable and result in very flexible and maintainable systems. Practice trumps theory.
Summing up the question:
We have the ability for a Service to instantiate its own dependencies.
Yet, we also have the ability for a Service to simply define abstractions, and require an application to know about the dependent abstractions, create concrete implementations, and pass them in.
And the question is not "Why do we do it?" (because we know there is a huge list of reasons why), but rather "Doesn't option 2 break encapsulation?"
My "pragmatic" answer
I think Mark is the best bet for any such answers, and as he says: No, encapsulation isn't what people think it is.
Encapsulation is hiding away implementation details of a service or abstraction. A Dependency isn't an implementation detail. If you think of a service as a contract, and its subsequent sub-service dependencies as sub-contracts (etc etc chained along), then you really just end up with one huge contract with addendums.
Imagine I'm a caller and I want to use a legal service to sue my boss. My application would have to know about a service that does so. That alone undermines the idea that I shouldn't have to know about the services/contracts required to accomplish my goal.
The argument there is... yeah, but I just want to hire a lawyer, I don't care about what books or services he uses. I'll get some random dood off the interwebz and not care about his implementation details... like so:
void Main() {
    LegalService legalService = new LegalService();
    legalService.SueMyManagerForBeingMean();
}

public class LegalService {
    public void SueMyManagerForBeingMean() {
        // Implementation details.
    }
}
But it turns out, other services are required to get the job done, such as understanding workplace law. And also as it turns out... I am VERY Interested in the contracts that lawyer is signing under my name and the other stuff he's doing to steal my money. For example... Why the hell is this internet lawyer based in South Korea? How will THAT help me!?!? That isn't an implementation detail, that's part of a dependency chain of requirements I'm happy to manage.
void Main() {
    IWorkLawService understandWorkplaceLaw = new CaliforniaWorkplaceLawService();
    //IWorkLawService understandWorkplaceLaw = new NewYorkWorkplaceLawService();
    LegalService legalService = new LegalService(understandWorkplaceLaw);
    legalService.SueMyManagerForBeingMean();
}

public interface ILegalContract {
    void SueMyManagerForBeingMean();
}

public class LegalService : ILegalContract {
    private readonly IWorkLawService _workLawService;

    public LegalService(IWorkLawService workLawService) {
        this._workLawService = workLawService;
    }

    public void SueMyManagerForBeingMean() {
        // Implementation detail.
        _workLawService.DoSomething(); // { implementation detail in there too }
    }
}
Now, all I know is that I have a contract which has other contracts which might have other contracts. I am very well responsible for those contracts, and not their implementation details. Though I am more than happy to sign those contracts with concretions that are relevant to my requirements. And again, I don't care about how those concretions do their jobs, as long as I know I have a binding contract that says we exchange information in some defined way.
I will try to answer your questions according to my understanding:
Is IoC a direct implementation of the Dependency Inversion Principle?
We can't label IoC as the direct implementation of the DIP, as the DIP focuses on making higher-level modules depend on abstractions rather than on the concretions of lower-level modules. Rather, IoC is an implementation of Dependency Injection.
Does IoC always break encapsulation, and therefore OOP?
I don't think the mechanism of IoC violates encapsulation, but it can make the system become tightly coupled.
Should IoC be used sparingly, religiously or appropriately?
IoC can be used in many patterns, like the Bridge pattern, where separating concretion from abstraction improves the code. Thus it can be used in order to achieve the DIP.
What is the difference between IoC and an IoC container?
IoC is a mechanism of dependency inversion, whereas containers are tools that use IoC.
Encapsulation does not contradict the Dependency Inversion Principle in the object-oriented programming world. For example, in a car design you will have an 'internal engine' which is encapsulated from the outside world, and also 'wheels' that can be replaced easily and are considered an external component of the car. The car has a specification (interface) for rotating the shaft of the wheels, and the wheel component implements the part that interacts with the shaft.
Here, the internal engine represents encapsulation, while the wheel components represent the Dependency Inversion Principle (DIP) in the car design. With the DIP, we basically avoid building a monolithic object, and instead we make our object composable. Can you imagine building a car where you cannot replace the wheels because they are built into the car?
You can also read about the Dependency Inversion Principle in more detail on my blog here.
I'm only going to answer one question as many other people have answered everything else. And keep in mind, there is no right or wrong answer, just user preferences.
Should IoC be used sparingly, religiously or appropriately?
My experience leads me to believe that Dependency Injection should only be used on classes that are general and might need changing in the future. Using it religiously will lead to some classes needing 15 interfaces in the constructor, which can get really time-consuming. That tends to lead to 20% development and 80% housekeeping.
Someone brought up an example of a car, and how the builder of the car will want to change the tires. Dependency injection allows one to change the tires without caring about the specific implementation details. But if we take dependency injection religiously... then we need to start building interfaces for the constituents of the tires as well... Well, what about the treads of the tire? What about the stitching in those tires? What about the chemicals in those treads? What about the atoms in those chemicals? Etc... Okay! Hey! At some point you're going to have to say "enough is enough"! Let's not turn every little thing into an interface... because that can get too time-consuming. It's okay to have some classes be self-contained and instantiated within the class itself! It's faster to develop, and instantiating the class is a ton easier.
Just my 2 cents.
I have found a case where IoC and dependency injection break encapsulation. Let's assume we have a ListUtil class. In that class there is a method called RemoveDuplicates. This method accepts a List. There is an interface ISortAlgorithm with a Sort method, and a class called QuickSort which implements this interface. When we write the algorithm to remove duplicates, we have to sort the list internally. Now, if RemoveDuplicates accepts an ISortAlgorithm as a parameter (IoC/dependency injection) to allow others to choose another algorithm for removing duplicates, we are exposing the internal complexity of the remove-duplicates feature of the ListUtil class, thus violating a foundation stone of OOP.
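A rough C# sketch of the scenario being described (the names approximate the answer's wording and are not from an actual library):

using System.Collections.Generic;

public interface ISortAlgorithm
{
    void Sort(List<int> list);
}

public class QuickSort : ISortAlgorithm
{
    public void Sort(List<int> list)
    {
        list.Sort(); // stand-in for a real quicksort implementation
    }
}

public static class ListUtil
{
    // Sorting is an internal step of removing duplicates, yet it leaks into
    // the signature; that is the exposure the answer objects to.
    public static List<int> RemoveDuplicates(List<int> input, ISortAlgorithm sorter)
    {
        sorter.Sort(input);
        var result = new List<int>();
        foreach (var item in input)
        {
            if (result.Count == 0 || result[result.Count - 1] != item)
                result.Add(item);
        }
        return result;
    }
}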

What is difference between the Open/Closed Principle and the Dependency Inversion Principle?

The DIP states:
High-level modules should not depend on low-level modules. Both should depend on abstractions.
Abstractions should not depend upon details. Details should depend upon abstractions.
And the OCP states:
Software entities (classes, modules, functions, etc.) should be open
for extension, but closed for modification.
I think if we satisfy the DIP, it will cover the OCP too. So why do we separate these two principles?
Uncle Bob Martin, who popularized the Open-Closed Principle (OCP) and Dependency Inversion Principles (DIP) as two of the SOLID principles, states himself that DIP arises from an application of OCP and the Liskov Substitution Principle:
In this column, we discuss the structural implications of the OCP and
the LSP. The structure that results from rigorous use of these
principles can be generalized into a principle all by itself. I call
it “The Dependency Inversion Principle” (DIP).
Robert C. Martin, Engineering Notebook, C++ Report, 1996.
So you're right in stating that every instance of DIP will be an instance of OCP, but OCP is much more general. Here's a use-case of OCP but not DIP I ran into recently. Many web frameworks have a notion of signals, where upon one action, a signal is fired. The object sending the signal is completely unaware of the listeners who are registered with the signal. Every time you want to add more listeners to the signal, you can do so without modifying the sender.
This is clearly exemplifying OCP ("closed to modification, open for extension"), but not DIP, as the sender is not depending on anything, so there's no sense in talking about whether it depends on something more abstract or less so.
More generally you can say the Observer Pattern (one of the GoF patterns) describes how to comply with OCP but not DIP. It'd be interesting to go through the GoF book and see which ones have to do with OCP and how many of those are not DIP-related.
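As a small illustrative sketch (not tied to any particular web framework), a C# event can stand in for such a signal: the sender stays closed for modification while listeners are added freely.

using System;

public class Button
{
    // The sender is closed for modification: new listeners subscribe from
    // the outside, so extending behaviour never requires editing this class.
    public event Action Clicked;

    public void Click()
    {
        Clicked?.Invoke();
    }
}

public static class Demo
{
    public static void Main()
    {
        var button = new Button();
        button.Clicked += () => Console.WriteLine("logged");   // add a listener
        button.Clicked += () => Console.WriteLine("audited");  // add another, sender unchanged
        button.Click();
    }
}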
I think adhering to the DIP makes it easier to comply with the OCP. However, one does not guarantee the other.
For example, I can create a class that has a method that takes a parameter of a base type. If that base type is an abstract class then I'm adhering to the DIP, as I have inverted the dependency onto the caller. However, if the code in that method does something like:
if (baseArg is Derived)
    ((Derived)baseArg).DoSomethingSpecificToDerived();
else if (baseArg is EvenMoreDerived)
    ((EvenMoreDerived)baseArg).DoSomethingSpecificToEvenMoreDerived();
Then it's not OCP compliant as I have to modify it every time I add a new derivative.
It's very contrived example, but you get my point.
The DIP tells you how to organize the dependencies. It doesn't tell you when you are done with a particular interface.
Roughly speaking, the message of OCP is to have complete but minimalistic interfaces. In other words, it tells you when you are done with an interface but it doesn't tell you how to achieve this.
In some sense, DIP and OCP are orthogonal.
So, why we separate these two principles?
As for design patterns and named principles, almost all of them have in common that:
Find what varies and encapsulate (hide) it.
Prefer aggregation over inheritance.
Design to interfaces.
Even if the named patterns and principles partially overlap in some sense, they tell you something more specific (in a more specific situation) than the above three general principles.
Good answer by @CS. To summarize:
The DIP is an extension of the OCP, so
When we satisfy the DIP, we generally satisfy the OCP as well.
The reverse is not true, and we can conceive of OCP-compliant, DIP violations. Here is one more (Java) example.
public abstract class MyClass {
    DependencyOne d1;
    DependencyTwo d2;

    MyClass() {
        d1 = new DependencyOne();
        d2 = new DependencyTwo();
    }
}
The OCP is satisfied because we can extend the class. The DIP is violated because we directly instantiate dependencies.
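For contrast, here is a minimal constructor-injected version, sketched in C# with hypothetical interface names, that would satisfy the DIP while remaining open for extension:

public interface IDependencyOne { }
public interface IDependencyTwo { }

public abstract class MyClass
{
    private readonly IDependencyOne d1;
    private readonly IDependencyTwo d2;

    // Dependencies arrive as abstractions instead of being instantiated
    // directly, so the class now satisfies the DIP as well.
    protected MyClass(IDependencyOne d1, IDependencyTwo d2)
    {
        this.d1 = d1;
        this.d2 = d2;
    }
}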
Now the challenge is: can we think of a DIP-compliant OCP violation? The best example I can come up with is an annotation. In Java we use the @Deprecated annotation to mark code which is open for modification, thereby violating the OCP. At the same time, this code may be perfectly DIP compliant in terms of its abstractions and dependencies. Certain libraries use a @Beta annotation to similar effect.
I cannot imagine an example that is DIP-compliant and yet closed to extension, beyond the nullary example of a class which has no dependencies, which is not very interesting. I would say the DIP implies openness to extension. However, there may be edge cases where the DIP does not imply closedness to modification.
The OCP makes a dependent class easy to consume. The OCP enables asynchronous consumption of an interface by decoupling old implementations from newer versions. It allows the things that depend upon it to continue to depend on it even in the face of change for other purposes. That way a class never has to care who's calling it.
The DIP does a couple of things. It makes depending on external classes easy. Dependency Injection enables the substitutions of dependencies by encouraging the separation of creation duties from consumption. Instead of creating the external dependency that is to be consumed, the pattern states that it should be provided externally. Ultimately, this encourages code that is idempotent (code that does not change external state). Idempotent code is good because it can be verified that it does only what is immediately visible. It doesn't have external side effects. It's very testable, understandable, and readable.

Maintainability in a class

How to ensure maintainability in a class? Can it simply be done by creating class using design patterns or is there something else involved? Also, what are the characteristics of a good method?
You won't do badly by following the SOLID and DRY principles.
SOLID is:
SRP Single responsibility principle
the notion that an object should have only a single responsibility.
OCP Open/closed principle
the notion that “software entities … should be open for extension, but closed for modification”.
LSP Liskov substitution principle
the notion that “objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program”. See also design by contract.
ISP Interface segregation principle
the notion that “many client specific interfaces are better than one general purpose interface.”
DIP Dependency inversion principle
the notion that one should “Depend upon Abstractions. Do not depend upon concretions.”
Dependency injection is one method of following this principle.
And DRY stands for Don't Repeat Yourself, meaning you should strive to remove any duplication in your code.
Put in a lot of effort to make sure you have a good interface. Once you have that, you can completely rewrite the class, if you want, without affecting any other code in the project. If your class is so big that you can't easily rewrite it, then that is an issue too.
Although Oded's answer is good for ensuring the maintainability of a program or library, this question is about class maintainability and for that, there are only two requirements... a good interface, and strong cohesion.

Are there any rules for OOP?

Recently I heard that there are 9 rules for OOP (Java). I know only four: Abstraction, Polymorphism, Inheritance and Encapsulation. Are there any more rules for OOP?
Seems like what you're looking for are the Principles of Object-Oriented Design.
Summarized from Agile Software Development Principles, Patterns, and Practices. These principles are the hard-won product of decades of experience in software engineering. They are not the product of a single mind, but they represent the integration and writings of a large number of software developers and researchers. Although they are presented here as principles of object-oriented design, they are really special cases of long-standing principles of software engineering.
SRP The Single Responsibility Principle
A class should have only one reason to change.
OCP The Open-Closed Principle
Software entities (classes, packages, methods, etc.) should be open for extension, but closed for modification.
LSP The Liskov Substitution Principle
Subtypes must be substitutable for their base types.
DIP The Dependency Inversion Principle
Abstractions should not depend upon details. Details should depend upon abstractions.
ISP The Interface Segregation Principle
Clients should not be forced to depend upon methods that they do not use. Interfaces belong to clients, not to hierarchies.
REP The Release-Reuse Equivalency Principle
The granule of reuse is the granule of release.
CCP The Common Closure Principle
The classes in a package should be closed together against the same kinds of changes. A change that affects a closed package affects all the classes in that package and no other packages.
CRP The Common Reuse Principle
The classes in a package are reused together. If you reuse one of the classes in a package, you reuse them all.
ADP The Acyclic Dependencies Principle
Allow no cycles in the dependency graph.
SDP The Stable Dependencies Principle
Depend in the direction of stability.
SAP The Stable Abstractions Principle
A package should be as abstract as it is stable.
Not sure about any rules. All these things mentioned are more like OO paradigms to me. There are a few pieces of advice we follow, like:
Separation of Concern
Single Responsibility per Class
Prefer Composition over Inheritance
Programming to Interface
Plus all of those already mentioned by Billybob.
These OO principles are straight from Head First Design Patterns:
Encapsulate what Varies
Program to an Interface, rather than an Implementation
Favour Composition over Inheritance
A Class should have only one reason to Change (Single Responsibility Principle)
Sub-Types must be substitutable for their Base (Liskov Substitution Principle)
Classes should be Open for extension, but Closed for Modification (Open-Closed Principle)
These are concepts, not rules. There are no rules really, just decisions to make, some designs are better than others, some much better than others :-)
There are plenty of guidelines though :-) Some are language specific (C++ is riddled with them) others are OO specific. Too many to list though :-)
Off the top of my head, important ones are:
Loose coupling, high cohesion
Write testable classes, which you test
Use inheritance sparingly and only where it makes sense (prefer composition)
Try to stick to the open/closed principle.
(most important) KISS
Plenty to expand upon and add :-)
EDIT: I should add, the rules which you listed are not unique to OO
According to the Pragmatic Programmers - the rules are:
Keep it DRY (Don't Repeat Yourself)
Keep it SHY (Ensure that your classes have high cohesion and low coupling)
and tell the other GUY (Separation of concerns)
http://media.pragprog.com/articles/may_04_oo1.pdf
There are no "Rules" to OOP.
There are 4 language properties that make a language object-oriented or not (these are the things you listed in your question).
The rest of the material out there are guidelines. The best/most helpful guidelines I've read are GRASP
Many of the suggestions are not readily understandable by laymen (non-CS majors). I thought GRASP was pragmatic and approachable.
I think GRASP is nice because it suggests the most critical part of OO in its name - Assignment of Responsibility (to objects not programmers).
The two most critical GRASP concepts from which everything else derives are coupling and cohesion. These two concepts/principals drive all other patterns and approaches.
BTW - did I just interview you? You transcribed the question incorrectly...

Does dependency injection break the Law of Demeter

I have been adding dependency injection to my code because it makes my code much easier to unit test through mocking.
However I am requiring objects higher up my call chain to have knowledge of objects further down the call chain.
Does this break the Law of Demeter? If so does it matter?
For example: a class A has a dependency on an interface B. The implementation of this interface to use is injected into the constructor of class A. Anyone wanting to use class A must now also have a reference to an implementation of B, and can call its methods directly, meaning it has knowledge of A's subcomponents (interface B).
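A minimal sketch of the situation being described (names follow the question's wording):

public interface IB
{
    void DoWork();
}

public class A
{
    private readonly IB b;

    // Whoever constructs A must supply an implementation of B;
    // that is the knowledge the question is asking about.
    public A(IB b)
    {
        this.b = b;
    }

    public void DoSomething()
    {
        b.DoWork();
    }
}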
Wikipedia says about the law of Demeter: "The fundamental notion is that a given object should assume as little as possible about the structure or properties of anything else (including its subcomponents)."
Dependency Injection CAN break the Law of Demeter if you force consumers to do the injection of the dependencies. This can be avoided through static factory methods and DI frameworks.
You can have both by designing your objects in such a way that they require the dependencies to be passed in, while at the same time having a mechanism for using them without explicitly performing the injection (factory functions and DI frameworks).
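A minimal sketch of what such a factory function might look like (all names are hypothetical): consumers call the factory and never perform the injection themselves, while tests can still use the constructor directly.

public interface IReportRepository { }
public class SqlReportRepository : IReportRepository { }

public class ReportService
{
    private readonly IReportRepository repository;

    // Constructor injection keeps the class easy to unit test with mocks.
    public ReportService(IReportRepository repository)
    {
        this.repository = repository;
    }

    // Static factory method hides the wiring from ordinary consumers,
    // so they never have to know which implementation is used.
    public static ReportService CreateDefault()
    {
        return new ReportService(new SqlReportRepository());
    }
}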
How does it break it? DI fits perfectly with the idea of least knowledge. DI gives you low coupling: objects are less dependent on each other.
Citing Wikipedia:
...an object A can request a service (call
a method) of an object instance B, but
object A cannot “reach through” object
B to access yet another object...
Usually DI works exactly the same way, i.e. you use services provided by injected components. If your object tries to access some of B's dependencies, i.e. it knows too much about B, that leads to high coupling and breaks the idea of DI.
However I am requiring objects higher
up my call chain to have knowledge of
objects further down the call chain
Some example?
If I understand you correctly, this isn't caused by the use of dependency injection, it's caused by using mocking strategies that have you specify the function calls you expect a method to make. That's perfectly acceptable in many situations, but obviously that means you have to know something about the method you're calling, if you've specified what you think it's supposed to do.
Writing good software requires balancing tradeoffs. As the implementation becomes more complete, it becomes more inconsistent. You have to decide what risks those inconsistencies create, and whether they're worth the value created by their presence.
Does it break the law?
Strictly speaking, I think it does.
Does it matter?
The main danger of breaking the law is that you make your code more brittle.
If you really keep it to just the tests, it seems like that danger is not too bad.
Mitigation
My understanding of the Law of Demeter is that it can be followed by having "wrapper methods" which prevent directly calling down into objects.
The Law of Demeter specifies that the method M of the object O can call methods on objects created/instantiated inside M. However, there's nothing that specifies how these objects were created. I think it's perfectly fine to use an intermediary object to create these, as long as that object's purpose in life is only that - creating other objects on your behalf. In this sense, DI does not break the Law of Demeter.
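As a sketch of that reading (the factory and invoice types are hypothetical): the method only calls members on objects it received or created via the intermediary, so it never reaches through a collaborator.

public interface IInvoice { void Send(); }
public interface IInvoiceFactory { IInvoice Create(); }

public class OrderProcessor
{
    private readonly IInvoiceFactory invoiceFactory;

    public OrderProcessor(IInvoiceFactory invoiceFactory)
    {
        this.invoiceFactory = invoiceFactory;
    }

    public void Process()
    {
        // The invoice is created inside this method via the injected factory,
        // so calling its members does not "reach through" a collaborator.
        IInvoice invoice = invoiceFactory.Create();
        invoice.Send();
    }
}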
This also confused me for some time. In the wiki it also says...
An object A can request a service (call a method) of an object instance B, but object A should not "reach through" object B to access yet another object, C, to request its services. Doing so would mean that object A implicitly requires greater knowledge of object B's internal structure.
And this is the crux of the matter. When you interact with Class A you should not be able to interact with the state or methods of interface B. You simply shouldn't have access to its inner workings.
As for creating class A and knowing about interface B when creating objects; that's a different scenario altogether, it is not what the law of Demeter is trying to address in software design.
I would agree with other answers in that factories and a dependency injection framework would be best to handle this. Hope that clears it up for anyone else confused by this :)
Depends :-)
I think the top answer is not correct: even with a framework, a lot of code uses dependency injection and injects high-level objects. You then get spaghetti code with lots of dependencies.
Dependency injection is best used for all the stuff that would otherwise pollute your object model, e.g. an ILogger. If you do inject business objects, ensure it's at the lowest level possible and try to pass them the traditional way if you can. Only use dependency injection if that gets too messy.
Before I add my answer, I must qualify it. Service-Oriented Programming is built on top of OOP principles and uses OO languages. Also, SOAs follow Inversion of Control and the SOLID principles to the teeth, so a lot of service-oriented programmers are surely arriving here. So, this answer is for service-oriented programmers who arrive at this question, because SOA is built on top of OOP. It does not directly answer the OP's example, but it does answer the question from an SOA perspective.
In General, the Law of Demeter doesn't apply to Service-Oriented Architectures. For OO, the Law of Demeter is talking about "Rich Objects" in OOP which have properties and methods, and whose properties may also have methods. With OOP Rich Models, it is possible to reach through a chain of objects and access methods, properties, methods of properties, methods of properties' properties, etc. But in Service-Oriented Programming, Data (Properties) are separated from Process (Methods). Your Models (mainly) only have properties (Certainly never dependencies), and your Services only have Methods and dependencies on other Services.
In SOP, you can feel free to review the properties of a model, and properties of its properties. You won't ever be able to access methods you shouldn't, only a tree of data. But what about the Services? Does the Law of Demeter apply there?
Yes, the Law of Demeter Can Be applied to SOP Services. But again, the law was originally designed for Rich Models in OOP. And though the law Can Be applied to Services, proper Dependency Injection automagically fulfills the Law of Demeter. In that sense, DI Could not possibly break the law.
In limited opposition to Mark Roddy, I can't find any situation where you can legitimately talk about Dependency Injection and "consumers" in the same sentence. If by "consumers" you mean a class that is consuming another class, that doesn't make sense. With DI, you would have a Composition Root composing your object graph, and one class should never know another class even exists. If by "consumers" you mean a programmer, then how would they not be forced to "do the injection"? The programmer is the one who has to create the Composition Root, so they must do the injection. A programmer should never "do the injection" as an instantiation within a class to consume another class.
Please review the following example which shows actual separate solutions, their references, and the implementing code:
In the top-right, we have the "Core." A lot of packages on NuGet and NPM have a "Core" Project which has Model, Interfaces, and possibly even default implementations. The Core should never ever ever depend on anything external.
In the top-left, we have an external implementation of the Core. The implementation depends on the Core, and so has knowledge of it.
In the bottom-left, we have a standalone Domain. The Domain has a Dependency on some Implementation of the Core, but Does not need to know about the implementation.
This is where I point out that neither the Domain nor the Implementation know each other exist. There is a 0% chance that either could ever reach into (Or beyond) the other one, because they don't even know they exist. The domain only knows that there is a contract, and it can somehow consume the methods by whatever is injected into it.
In the bottom-right is the Composition Root or Entry-Point. This is also known as the "Front Boundary" of the application. The root of an application knows all of its components and does little more than take input, determine who to call, compose objects, and return outputs. In other words, it can only tell the Domain: "Here, use this to fulfill your contract for ICalculateThings, then give me the result of CalculateTwoThings."
There is indeed a way to smash everything into the same project, do concrete instantiations of Services, make your dependencies public properties instead of private fields, STILL Do Dependency-Injection (horribly), and then have services call into dependencies of dependencies. But that would be bad, m'kay. You'd have to be trying to be bad to do that.
Side-note, I over-complicated this on purpose. These projects could exist in one solution (as long as the Architect controls the Reference Architecture), and there could be a few more simplifications. But the separation in the image really shows how little knowledge the system has to have about its parts. Only the Composition Root (Entry Point, Front-Boundary) need to know about the parts.
Conclusion (TL;DR;): In Oldskewl OOP, Models are Rich, and the Law of Demeter can easily be broken by looking into models of models to access their methods. But in Newskewl SOP (built on top of OOP Principles and Languages), Data is separated from Process. So you can feel free to look into properties of models. Then, for Services, dependencies are always private, and nothing knows that anything else exists other than what they are told by abstractions, contracts, interfaces.