When exactly does a class/package depend on another?

Many articles, books, etc. talk about class or package dependency, but few explain what it is. I did find some definitions, but they vary and probably don't cover all cases. E.g.:
"when one class uses another concrete class within its implementation" (so there exists no dependency on an interface?)
"when a class uses another as a variable" (what about inheritance?)
"if changes to the definition of one element may cause changes to the other" (so dependency is a transitive relationship not just on packages, but also on class level?)
"the degree to which each program module relies on each one of the other modules" (but how do you define "relies"?)
Further aspects to consider are method parameters, dependency injection, aspect oriented programming, generics. Any more aspects?
So, can you give a (formal) definition for dependency amongst classes and amongst packages that is fool-proof and covers all these cases and aspects?

If you are asking for dependency in the context of inversion of control or dependency injection, well, you're probably interested in classes that interact with one another directly. That means mostly constructor parameters and properties.
In the context of a UML domain diagram, you're probably interested in "real world" dependency. A dog needs food. That's a dependency. The dog's Bark() method returns a Sound object: that's not something you're interested in, in a UML domain model. The dog doesn't depend on sounds to exist.
You could go philosophical on this also: all classes depend on each other to accomplish a common goal; a (hopefully) great piece of software.
So, all in all, dependency or coupling is not a matter of yes or no. It really depends on the context and on the degree of coupling (weak, strong). I think that explains why there are so many divergent definitions of dependency.

I wrote a blog post on that topic a while ago: Understanding Code: Static vs Dynamic Dependencies. Basically you need to distinguish between static dependencies, those that are resolved by the compiler at compile-time, and dynamic dependencies, those that are resolved by the runtime (JVM or CLR) at run-time.
Static dependencies are typically created by calls to static/final methods, reads/writes of a field, or the declaration in class C that it implements interface I: all the associations between code elements that can be found explicitly in the bytecode and source code.
Dynamic dependencies are typically created by anything that abstracts a method call at compile time, like calls to abstract/virtual methods (polymorphism), variables or parameters typed with an interface (the implementation class is abstracted away at compile-time), but also delegates (.NET) or pointers to functions (C++).
Most of the time, when you read about dependencies in the literature, the authors are talking about static dependencies.
A static dependency is direct (meaning not transitive). A tool like NDepend, which I mention in the blog post, can also infer indirect (call them transitive) static dependencies from the set of direct static dependencies.
The idea I defend in the blog post is that when it comes to understanding and maintaining a program, one needs to focus mostly on the static dependencies, the ones found in the source code. Indeed, abstraction facilities are used to, well ... abstract implementations from callers. This makes source code much easier to develop and maintain. There are, however, situations, typically at debugging time, where one needs to know what is really behind an abstraction at run-time.
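To make the distinction concrete, here is a minimal Java sketch (the type names are invented for illustration, not taken from the blog post). The compiler resolves the static dependencies on the Shape interface and on Circle in main(), while the implementation actually executed behind shape.area() is a dynamic dependency, resolved by the JVM when the virtual call is dispatched.

interface Shape {
    double area();
}

class Circle implements Shape {                      // static dependency: Circle -> Shape
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Report {
    // Report has a static dependency on the Shape interface only. Which
    // implementation sits behind 'shape' is a dynamic dependency, resolved
    // by the JVM when area() is dispatched.
    static double measure(Shape shape) {
        return shape.area();
    }

    public static void main(String[] args) {
        System.out.println(measure(new Circle(2.0)));   // main() -> Circle is static again
    }
}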

This post is about static dependency; for dynamic dependency and the difference, see Patrick Smacchia's answer.
Put in an easy-to-understand way: an entity (class or package) A depends on an entity B when A cannot be used standalone without B.
Inheritance, aggregation, composition: all of them introduce dependencies between related entities.
so there exists no dependency on an interface?
There is, but the interface only serves as the glue.
what about inheritance?
see above.
so dependency is a transitive relationship not just on packages, but also on class level?
yep.
but how do you define "relies"?
See the "easy to understand" definition above; this is also related to the third definition you posted.
Update:
So if you have interface A in package P1, and class C in package P2
uses A as a method parameter, or
uses A as a local variable woven into C via AOP, or
implements A, or
is declared as class C<E extends A>,
then C depends on A and P2 depends on P1.
But if interface A is implemented by class B and class C programs against the interface A and only uses B via dependency injection, then C still (statically!) only depends on A, not on B, because the point of dependency injection is precisely that the glued components do not become dependent on each other.
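To illustrate the cases listed in the update with a hedged Java sketch (the package and type names are made up), each of the constructs below creates a static dependency from C to A, while no concrete implementation B ever appears in C:

// file p1/A.java
package p1;
public interface A { void doWork(); }

// file p2/C.java
package p2;
import p1.A;
public class C<E extends A> implements A {   // the generic bound and 'implements A' both depend on A
    private final A collaborator;             // typed with the interface, never with a concrete class

    public C(A collaborator) {                // injected: the concrete B is chosen elsewhere
        this.collaborator = collaborator;
    }

    @Override
    public void doWork() {                    // a method parameter of type A would create the same dependency
        collaborator.doWork();
    }
}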

Related

What does it mean to say an object has dependencies?

I keep reading that object x has dependencies. Some people say it's bad, others say it's okay in certain situations, but I don't understand what it means in the first place. I saw this:
What is dependency injection?
But I didn't understand the dependency concept to begin with. That tag actually gave a good definition, but I was hoping for an example.
"Dependency" can mean many different things, depending (pun intended) on the context ;)
In this case, it really means "what specific 'object' (from perhaps many) do I need to get the job done?"
For example, a component needs something that "prints". The dependency is "printing", but the requested object could print to an HP Laserjet 9200, or to an Oki dot matrix, or to a .pdf file.
To put it differently, you could substitute the word "plug-in" for "dependency" here, and keep the same meaning.
'Hope that helps.
When talking about dependency injection, I'd usually say that classes, not objects, have dependencies.
Class A has a dependency on class B if it requires that class B exists and possibly works in a certain way. For example, if class A has a call to new B(), it has a dependency on class B. If class B were to disappear or change, your class A may break.
You can in some languages break dependencies by allowing the class to depend on an interface instead. If you depend on interface I instead and B implements I, B can go away and be replaced by C that also implements I, and A wouldn't need to change at all. As an example here, you could take a driver in an operating system, if you replace the disk you may get a new driver that implements the "disk drive" interface, but your operating system still talks to the disk in the same way without knowing exactly what type of disk it is.
Dependency injection is about letting you depend on interfaces instead of classes, basically instead of saying new B(), you'll just declare that you want an object that implements I and a suitable implementation will be injected for you. Your class A doesn't have to have any idea that class B or C even exist.
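As a small sketch of the difference described above (the class names are invented for the example):

// Tight coupling: A depends directly on the concrete class B.
class B {
    void work() { /* ... */ }
}

class CoupledA {
    private final B b = new B();     // if B disappears or changes, CoupledA may break
    void run() { b.work(); }
}

// Looser coupling: A depends only on the interface I; any implementation can be injected.
interface I {
    void work();
}

class BImpl implements I {
    public void work() { /* ... */ }
}

class DecoupledA {
    private final I i;
    DecoupledA(I i) { this.i = i; }  // a suitable implementation (BImpl, CImpl, ...) is injected
    void run() { i.work(); }
}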

Changing interface in C++

I am reading an article on extension of interfaces at the following link.
http://wiki.hsr.ch/APF/files/ExtensionInterface.pdf
It is mentioned there on page 142:
Over time the addition of these requests can bloat the interface with functionality not anticipated in the initial framework design. If new methods are added to the "universalComponent" interface directly, all client code must be updated and recompiled. This is tedious and error-prone.
My question is (assume we are developing in C++):
Why do we have to recompile client code if we add new methods to an interface and do not modify any existing functions in the interface?
Thanks!
I haven't read the article, but for starters, I would suggest de-emphasizing the terms "method" and "interface" in C++. Those terms are popular in strict OO languages like Java, but C++ is a broader, multi-paradigm language.
With that said, "adding methods to interfaces" is really just adding more virtual member functions to a base class. Changing the base class changes the definition of all derived classes, and thus all code that requires the complete type of any derived class or of the base class must be recompiled.
C++ types are not a runtime feature. Types only exist at compile time, and the compiler must have full access to the type definitions. (Again in contrast to other languages!) The interface-implementation relationship exists purely at compile-time and cannot be "precompiled". So there's really no such thing as "modifying the interface" that would produce runtime-modularity. The "interface" concept is just a neat mnemonic that you can use when designing your application, but it does not save you from recompiling. Changing a class definition changes the internal representation of the class, and you cannot (in general) make a correct C++ program unless all parts of the program see the same class definitions.
Adding a method to a class that is involved in polymorphism (meaning it has at least one virtual member function) potentially changes the binary layout of objects of that class and its subclasses.

Object Oriented Programming principles

I was wondering: I recently read an article that spoke of the ills of using the singleton pattern, citing the disadvantage of global variable occurrence, and rightly noting that the singleton violates a lot of the rules we learn from OOP school: the single responsibility principle, programming to interfaces and abstract classes and not to concrete classes... all that good stuff. I was wondering how, then, you work with something like a database connection class where you want just one connection to your DB and one object for your DB floating around. The author spoke of the Dependency Injection principle, which to my mind sits well with the Dependency Inversion rule. How do I know and control what object gets passed around as a dependency, other than the fact that I created the class and expect everyone using it to play nice and make sure they are using the right resource?!
Edit: This answer assumes you are using a dependency injection container, either one you wrote yourself, or one you got from a library. If not, then use a DI container :)
How do I know and control what object gets passed around as a dependency, other than the fact that I created the class and expect everyone using it to play nice and make sure they are using the right resource?!
By contract
The oral contract - You write a design spec that says "thou shalt not instantiate this class directly" and "thou shalt not pass around any object you got from the dependency injection container. Pass the container if you have to".
The compiler contract - You give them a dependency injection container, and they grab the instance out of it, by abstract interface. If you want only a single instance to be used, you can supply them a named instance, which they extract with both the name, and the interface.
ISomething instance = serviceLocator.ResolveInstance<ISomething>(
"TheInstanceImSupposedToUse");
You can also make all your concrete classes private/internal/what-have-you, and only provide them an abstract interface to operate against. This will prevent them from instantiating the classes themselves.
// This can only be instantiated by you, but can be used by them via ISomething
private class ConcreteSomething : ISomething
{
// ...
}
By code review
You make group-wide coding and design standards that are fair, and make sure they are understood by everyone within the group.
You use a source control mechanism, and require code reviews before they check in. You read over their code for what they link to, what headers they include, what objects they instantiate, and what instances they are passing around.
If they violate your rules during code reviews, you don't let them check in until they fix their code. Optionally, for repeat offenders, you make them pay you a dollar, you make them buy you lunch, or you hire a different contractor to replace them. Whatever works well within your group :)
For those who criticize the singleton pattern, based on SRP, here is an opposing view. Also, I've found that dependency injection containers can create as many problems as they solve. That said, I'm using a promising compromise, as covered in another post.
Dependency injection containers (even one you develop yourself, which isn't an entirely uncommon practice) are generally very configurable. What you'd do in that scenario is configure it so that any request for the interface that the implementation, well, implements would be satisfied with that implementation. Even if it's a singleton.
For example, take a look at the Logger singleton being used here: http://www.pnpguidance.net/News/StructureMapTutorialDependencyInjectionIoCNET.aspx
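As a rough, hand-rolled sketch (not any particular framework's API; Logger and ConsoleLogger are invented stand-ins), "configuring a singleton" usually just means the container hands back the same instance for every request for that interface:

import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// A toy DI container: per-interface factories, with optional single cached instances.
class Container {
    private final Map<Class<?>, Supplier<?>> factories = new HashMap<>();
    private final Map<Class<?>, Object> singletons = new HashMap<>();

    <T> void registerSingleton(Class<T> type, T instance) {
        singletons.put(type, instance);
    }

    <T> void register(Class<T> type, Supplier<T> factory) {
        factories.put(type, factory);
    }

    @SuppressWarnings("unchecked")
    <T> T resolve(Class<T> type) {
        if (singletons.containsKey(type)) {
            return (T) singletons.get(type);          // same instance every time
        }
        Supplier<?> factory = factories.get(type);
        if (factory == null) {
            throw new IllegalArgumentException("No registration for " + type);
        }
        return (T) factory.get();                     // fresh instance per request
    }
}

interface Logger { void log(String msg); }
class ConsoleLogger implements Logger {
    public void log(String msg) { System.out.println(msg); }
}

// Usage: every resolve(Logger.class) call returns the one ConsoleLogger.
// Container container = new Container();
// container.registerSingleton(Logger.class, new ConsoleLogger());
// Logger logger = container.resolve(Logger.class);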
Don't take what you read anywhere as absolute truth. Read it, understand it and then you can see when it's best to apply certain things. In your case, why wouldn't you want to create a static singleton?

Must Dependency Injection come at the expense of Encapsulation?

If I understand correctly, the typical mechanism for Dependency Injection is to inject either through a class' constructor or through a public property (member) of the class.
This exposes the dependency being injected and violates the OOP principle of encapsulation.
Am I correct in identifying this tradeoff? How do you deal with this issue?
Please also see my answer to my own question below.
There is another way of looking at this issue that you might find interesting.
When we use IoC/dependency injection, we're not using OOP concepts. Admittedly we're using an OO language as the 'host', but the ideas behind IoC come from component-oriented software engineering, not OO.
Component software is all about managing dependencies - an example in common use is .NET's Assembly mechanism. Each assembly publishes the list of assemblies that it references, and this makes it much easier to pull together (and validate) the pieces needed for a running application.
By applying similar techniques in our OO programs via IoC, we aim to make programs easier to configure and maintain. Publishing dependencies (as constructor parameters or whatever) is a key part of this. Encapsulation doesn't really apply, as in the component/service oriented world, there is no 'implementation type' for details to leak from.
Unfortunately our languages don't currently segregate the fine-grained, object-oriented concepts from the coarser-grained component-oriented ones, so this is a distinction that you have to hold in your mind only :)
It's a good question - but at some point, encapsulation in its purest form needs to be violated if the object is ever to have its dependency fulfilled. Some provider of the dependency must know both that the object in question requires a Foo, and the provider has to have a way of providing the Foo to the object.
Classically this latter case is handled as you say, through constructor arguments or setter methods. However, this is not necessarily true - I know that the latest versions of the Spring DI framework in Java, for example, let you annotate private fields (e.g. with @Autowired) and the dependency will be set via reflection without you needing to expose the dependency through any of the class's public methods/constructors. This might be the kind of solution you were looking for.
That said, I don't think that constructor injection is much of a problem, either. I've always felt that objects should be fully valid after construction, such that anything they need in order to perform their role (i.e. be in a valid state) should be supplied through the constructor anyway. If you have an object that requires a collaborator to work, it seems fine to me that the constructor publically advertises this requirement and ensures it is fulfilled when a new instance of the class is created.
Ideally when dealing with objects, you interact with them through an interface anyway, and the more you do this (and have dependencies wired through DI), the less you actually have to deal with constructors yourself. In the ideal situation, your code doesn't deal with or even ever create concrete instances of classes; so it just gets given an IFoo through DI, without worrying about what the constructor of FooImpl indicates it needs to do its job, and in fact without even being aware of FooImpl's existence. From this point of view, the encapsulation is perfect.
This is an opinion of course, but to my mind DI doesn't necessarily violate encapsulation and in fact can help it by centralising all of the necessary knowledge of internals into one place. Not only is this a good thing in itself, but even better this place is outside your own codebase, so none of the code you write needs to know about classes' dependencies.
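For reference, a small sketch of the two styles discussed above. The @Autowired field injection follows Spring's annotation mentioned earlier (it requires Spring on the classpath); the service and collaborator names are invented:

import org.springframework.beans.factory.annotation.Autowired;

// Hypothetical collaborator contracts, included only so the sketch is self-contained.
interface ReportRepository { }
interface PaymentGateway { }

// Field injection: the dependency is set via reflection and never appears in the public API.
class ReportService {
    @Autowired
    private ReportRepository repository;   // no constructor parameter, no setter
}

// Constructor injection: the dependency is advertised as part of the initialization contract.
class BillingService {
    private final PaymentGateway gateway;

    BillingService(PaymentGateway gateway) {   // the object is fully valid after construction
        this.gateway = gateway;
    }
}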
This exposes the dependency being injected and violates the OOP principle of encapsulation.
Well, frankly speaking, everything violates encapsulation. :) It's kind of a tender principle that must be treated well.
So, what violates encapsulation?
Inheritance does.
"Because inheritance exposes a subclass to details of its parent's implementation, it's often said that 'inheritance breaks encapsulation'". (Gang of Four 1995:19)
Aspect-oriented programming does. For example, you register an onMethodCall() callback and that gives you a great opportunity to inject code into the normal method evaluation, adding strange side-effects etc.
Friend declaration in C++ does.
Class extension in Ruby does. Just redefine a string method somewhere after the String class was fully defined.
Well, a lot of stuff does.
Encapsulation is a good and important principle. But not the only one.
switch (principle)
{
case encapsulation:
if (there_is_a_reason)
break!
}
Yes, DI violates encapsulation (also known as "information hiding").
But the real problem comes when developers use it as an excuse to violate the KISS (Keep It Short and Simple) and YAGNI (You Ain't Gonna Need It) principles.
Personally, I prefer simple and effective solutions. I mostly use the "new" operator to instantiate stateful dependencies whenever and wherever they are needed. It is simple, well encapsulated, easy to understand, and easy to test. So, why not?
A good dependency injection container/system will allow for constructor injection. The dependent objects will be encapsulated, and need not be exposed publicly at all. Further, by using a DI system, none of your code even "knows" the details of how the object is constructed, possibly even including the object being constructed. There is more encapsulation in this case since nearly all of your code not only is shielded from knowledge of the encapsulated objects, but does not even participate in the objects' construction.
Now, I am assuming you are comparing against the case where the created object creates its own encapsulated objects, most likely in its constructor. My understanding of DI is that we want to take this responsibility away from the object and give it to someone else. To that end, the "someone else", which is the DI container in this case, does have intimate knowledge which "violates" encapsulation; the benefit is that it pulls that knowledge out of the object itself. Someone has to have it. The rest of your application does not.
I would think of it this way: the dependency injection container/system violates encapsulation, but your code does not. In fact, your code is more "encapsulated" than ever.
This is similar to the upvoted answer but I want to think out loud - perhaps others see things this way as well.
Classical OO uses constructors to define the public "initialization" contract for consumers of the class (hiding ALL implementation details; aka encapsulation). This contract can ensure that after instantiation you have a ready-to-use object (i.e. no additional initialization steps to be remembered (er, forgotten) by the user).
(constructor) DI undeniably breaks encapsulation by bleeding implementation detail through this public constructor interface. As long as we still consider the public constructor responsible for defining the initialization contract for users, we have created a horrible violation of encapsulation.
Theoretical Example:
Class Foo has 4 methods and needs an integer for initialization, so its constructor looks like Foo(int size) and it's immediately clear to users of class Foo that they must provide a size at instantiation in order for Foo to work.
Say this particular implementation of Foo may also need an IWidget to do its job. Constructor injection of this dependency would have us create a constructor like Foo(int size, IWidget widget).
What irks me about this is that now we have a constructor that blends initialization data with dependencies - one input is of interest to the user of the class (size), the other is an internal dependency that only serves to confuse the user and is an implementation detail (widget).
The size parameter is NOT a dependency - it's simply a per-instance initialization value. IoC is dandy for external dependencies (like widget) but not for internal state initialization.
Even worse, what if the Widget is only necessary for 2 of the 4 methods on this class; I may be incurring instantiation overhead for Widget even though it may not be used!
How to compromise/reconcile this?
One approach is to switch exclusively to interfaces to define the operation contract; and abolish the use of constructors by users.
To be consistent, all objects would have to be accessed through interfaces only, and instantiated only through some form of resolver (like an IOC/DI container). Only the container gets to instantiate things.
That takes care of the Widget dependency, but how do we initialize "size" without resorting to a separate initialization method on the Foo interface? Using this solution, we lost the ability to ensure that an instance of Foo is fully initialized by the time you get the instance. Bummer, because I really like the idea and simplicity of constructor injection.
How do I achieve guaranteed initialization in this DI world, when initialization is MORE than ONLY external dependencies?
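One possible compromise, sketched here with invented names rather than taken from the answer, is to inject the dependency into a factory once and let the factory's create method take the per-instance initialization value, so callers still receive a fully initialized Foo without ever seeing the IWidget:

interface IWidget {
    void render();
}

class Foo {
    private final int size;        // per-instance initialization value
    private final IWidget widget;  // internal dependency

    Foo(int size, IWidget widget) {
        this.size = size;
        this.widget = widget;
    }
    // ... the four methods, two of which use widget ...
}

// The DI container wires up the factory; user code supplies only 'size'.
class FooFactory {
    private final IWidget widget;

    FooFactory(IWidget widget) {   // dependency injected once, here
        this.widget = widget;
    }

    Foo create(int size) {         // guaranteed-initialized Foo, no dependency visible to the caller
        return new Foo(size, widget);
    }
}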
As Jeff Sternal pointed out in a comment to the question, the answer is entirely dependent on how you define encapsulation.
There seem to be two main camps of what encapsulation means:
Everything related to the object is a method on an object. So, a File object may have methods to Save, Print, Display, ModifyText, etc.
An object is its own little world, and does not depend on outside behavior.
These two definitions are in direct contradiction to each other. If a File object can print itself, it will depend heavily on the printer's behavior. On the other hand, if it merely knows about something that can print for it (an IFilePrinter or some such interface), then the File object doesn't have to know anything about printing, and so working with it will bring less dependencies into the object.
So, dependency injection will break encapsulation if you use the first definition. But, frankly I don't know if I like the first definition - it clearly doesn't scale (if it did, MS Word would be one big class).
On the other hand, dependency injection is nearly mandatory if you're using the second definition of encapsulation.
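A brief sketch of the second definition, using the File/printer example from above (the interface and method names are invented):

interface IFilePrinter {
    void print(File file);
}

class File {
    private final String contents;

    File(String contents) { this.contents = contents; }

    String getContents() { return contents; }

    // The File only knows that "something can print it": it depends on the
    // IFilePrinter abstraction, not on any particular printer's behavior.
    void printWith(IFilePrinter printer) {
        printer.print(this);
    }
}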
It doesn't violate encapsulation. You're providing a collaborator, but the class gets to decide how it is used. As long as you follow Tell, Don't Ask, things are fine. I find constructor injection preferable, but setters can be fine as well, as long as they're smart. That is, they contain logic to maintain the invariants the class represents.
Pure encapsulation is an ideal that can never be achieved. If all dependencies were hidden then you wouldn't have the need for DI at all. Think about it this way, if you truly have private values that can be internalized within the object, say for instance the integer value of the speed of a car object, then you have no external dependency and no need to invert or inject that dependency. These sorts of internal state values that are operated on purely by private functions are what you want to encapsulate always.
But if you're building a car that wants a certain kind of engine object then you have an external dependency. You can either instantiate that engine -- for instance new GMOverHeadCamEngine() -- internally within the car object's constructor, preserving encapsulation but creating a much more insidious coupling to a concrete class GMOverHeadCamEngine, or you can inject it, allowing your Car object to operate agnostically (and much more robustly) on for example an interface IEngine without the concrete dependency. Whether you use an IOC container or simple DI to achieve this is not the point -- the point is that you've got a Car that can use many kinds of engines without being coupled to any of them, thus making your codebase more flexible and less prone to side effects.
DI is not a violation of encapsulation, it is a way of minimizing the coupling when encapsulation is necessarily broken as a matter of course within virtually every OOP project. Injecting a dependency into an interface externally minimizes coupling side effects and allows your classes to remain agnostic about implementation.
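A minimal sketch of the two options described above (the type names follow the answer's example; the method bodies are placeholders):

interface IEngine {
    void start();
}

class GMOverHeadCamEngine implements IEngine {
    public void start() { /* ... */ }
}

// Option 1: encapsulated, but insidiously coupled to one concrete engine class.
class CoupledCar {
    private final IEngine engine = new GMOverHeadCamEngine();
    void drive() { engine.start(); }
}

// Option 2: the engine is injected; Car works with any IEngine implementation.
class Car {
    private final IEngine engine;
    Car(IEngine engine) { this.engine = engine; }
    void drive() { engine.start(); }
}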
It depends on whether the dependency is really an implementation detail or something that the client would want/need to know about in some way or another. One thing that is relevant is what level of abstraction the class is targeting. Here are some examples:
If you have a method that uses caching under the hood to speed up calls, then the cache object should be a Singleton or something and should not be injected. The fact that the cache is being used at all is an implementation detail that the clients of your class should not have to care about.
If your class needs to output streams of data, it probably makes sense to inject the output stream so that the class can easily output the results to an array, a file, or wherever else someone else might want to send the data.
For a gray area, let's say you have a class that does some monte carlo simulation. It needs a source of randomness. On the one hand, the fact that it needs this is an implementation detail in that the client really doesn't care exactly where the randomness comes from. On the other hand, since real-world random number generators make tradeoffs between degree of randomness, speed, etc. that the client may want to control, and the client may want to control seeding to get repeatable behavior, injection may make sense. In this case, I'd suggest offering a way of creating the class without specifying a random number generator, and use a thread-local Singleton as the default. If/when the need for finer control arises, provide another constructor that allows for a source of randomness to be injected.
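A sketch of that gray-area design (the class name is invented; ThreadLocalRandom stands in for the suggested thread-local default):

import java.util.Random;
import java.util.concurrent.ThreadLocalRandom;

class MonteCarloSimulation {
    private final Random random;

    // Default: the source of randomness is an implementation detail.
    MonteCarloSimulation() {
        this(ThreadLocalRandom.current());
    }

    // Escape hatch: inject a seeded Random for repeatable behavior or finer control.
    MonteCarloSimulation(Random random) {
        this.random = random;
    }

    double estimatePi(int samples) {
        int inside = 0;
        for (int i = 0; i < samples; i++) {
            double x = random.nextDouble(), y = random.nextDouble();
            if (x * x + y * y <= 1.0) inside++;
        }
        return 4.0 * inside / samples;
    }
}

// Repeatable test run: new MonteCarloSimulation(new Random(42)).estimatePi(1_000_000);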
Having struggled with the issue a little further, I am now of the opinion that Dependency Injection does (at this time) violate encapsulation to some degree. Don't get me wrong though - I think that using dependency injection is well worth the tradeoff in most cases.
The case for why DI violates encapsulation becomes clear when the component you are working on is to be delivered to an "external" party (think of writing a library for a customer).
When my component requires sub-components to be injected via the constructor (or public properties) there's no guarantee for
"preventing users from setting the internal data of the component into an invalid or inconsistent state".
At the same time it cannot be said that
"users of the component (other pieces of software) only need to know what the component does, and cannot make themselves dependent on the details of how it does it".
Both quotes are from Wikipedia.
To give a specific example: I need to deliver a client-side DLL that simplifies and hides communication to a WCF service (essentially a remote facade). Because it depends on 3 different WCF proxy classes, if I take the DI approach I am forced to expose them via the constructor. With that I expose the internals of my communication layer which I am trying to hide.
Generally I am all for DI. In this particular (extreme) example, it strikes me as dangerous.
I struggled with this notion as well. At first, the 'requirement' to use the DI container (like Spring) to instantiate an object felt like jumping through hoops. But in reality, it's really not a hoop - it's just another 'published' way to create the objects I need. Sure, encapsulation is 'broken' because someone 'outside the class' knows what it needs, but it really isn't the rest of the system that knows that - it's the DI container. Nothing magical happens differently because DI 'knows' one object needs another.
In fact it gets even better - by focusing on Factories and Repositories I don't even have to know DI is involved at all! That to me puts the lid back on encapsulation. Whew!
I believe in simplicity. Applying IoC/Dependency Injection in domain classes does not bring any improvement except making the code much harder to maintain by having external XML files describing the relations. Many technologies like EJB 1.0/2.0 and Struts 1.1 reversed course by reducing the amount of stuff put in XML and trying to put it in code as annotations etc. So applying IoC to all the classes you develop will make the code nonsense.
IoC has its benefits when the dependent object is not ready for creation at compile time. This can happen in most infrastructure-level, abstract architecture components that try to establish a common base framework which may need to work for different scenarios. In those places using IoC makes more sense. Still, this does not make the code simpler or more maintainable.
Like all other technologies, this one has pros and cons. My worry is that we apply the latest technologies everywhere, irrespective of the context in which they work best.
Encapsulation is only broken if a class has both the responsibility to create an object (which requires knowledge of implementation details) and the responsibility to use that object (which does not require knowledge of these details). I'll explain why, but first a quick car analogy:
When I was driving my old 1971 Kombi, I could press the accelerator and it went (slightly) quicker. I did not need to know why, but the guys who built the Kombi at the factory knew exactly why.
But back to the coding. Encapsulation is "hiding an implementation detail from something using that implementation." Encapsulation is a good thing because the implementation details can change without the user of the class knowing.
When using dependency injection, constructor injection is used to construct service-type objects (as opposed to entity/value objects, which model state). Any member variables in a service-type object represent implementation details that should not leak out, e.g. a socket port number, database credentials, another class to call to perform encryption, a cache, etc.
The constructor is relevant when the class is being initially created. This happens during the construction-phase while your DI container (or factory) wires together all the service objects. The DI container only knows about implementation details. It knows all about implementation details like the guys at the Kombi factory know about spark plugs.
At run-time, the service object that was created is called upon to do some real work. At this time, the caller of the object knows nothing of the implementation details.
That's me driving my Kombi to the beach.
Now, back to encapsulation. If implementation details change, then the class using that implementation at run-time does not need to change. Encapsulation is not broken.
I can drive my new car to the beach too. Encapsulation is not broken.
If implementation details change, the DI container (or factory) does need to change. You were never trying to hide implementation details from the factory in the first place.
DI violates Encapsulation for NON-Shared objects - period. Shared objects have a lifespan outside of the object being created, and thus must be AGGREGATED into the object being created. Objects that are private to the object being created should be COMPOSED into the created object - when the created object is destroyed, it takes the composed object with it.
Let's take the human body as an example. What's composed and what's aggregated. If we were to use DI, the human body constructor would have 100's of objects. Many of the organs, for example, are (potentially) replaceable. But, they are still composed into the body. Blood cells are created in the body (and destroyed) everyday, without the need for external influences (other than protein). Thus, blood cells are created internally by the body - new BloodCell().
Advocates of DI argue that an object should NEVER use the new operator.
That "purist" approach not only violates encapsulation but also the Liskov Substitution Principle for whoever is creating the object.
PS. By providing Dependency Injection you do not necessarily break Encapsulation. Example:
obj.inject_dependency( factory.get_instance_of_unknown_class(x) );
Client code does not know implementation details still.
Maybe this is a naive way of thinking about it, but what is the difference between a constructor that takes in an integer parameter and a constructor that takes in a service as a parameter? Does this mean that defining an integer outside the new object and feeding it into the object breaks encapsulation? If the service is only used within the new object, I don't see how that would break encapsulation.
Also, by using some sort of autowiring feature (Autofac for C#, for example), it makes the code extremely clean. By building extension methods for the Autofac builder, I was able to cut out a LOT of DI configuration code that I would have had to maintain over time as the list of dependencies grew.
I think it's self-evident that at the very least DI significantly weakens encapsulation. In addition to that, here are some other downsides of DI to consider.
It makes code harder to reuse. A module which a client can use without having to explicitly provide dependencies to is obviously easier to use than one where the client has to somehow discover what that component's dependencies are and then somehow make them available. For example, a component originally created to be used in an ASP application may expect to have its dependencies provided by a DI container that provides object instances with lifetimes related to client HTTP requests. This may not be simple to reproduce in another client that does not come with the same built-in DI container as the original ASP application.
It can make code more fragile. Dependencies provided by interface specification can be implemented in unexpected ways which gives rise to a whole class of runtime bugs that are not possible with a statically resolved concrete dependency.
It can make code less flexible in the sense that you may end up with fewer choices about how you want it to work. Not every class needs to have all its dependencies in existence for the entire lifetime of the owning instance, yet with many DI implementations you have no other option.
With that in mind I think the most important question then becomes: "does a particular dependency need to be externally specified at all?". In practice I have rarely found it necessary to make a dependency externally supplied just to support testing.
Where a dependency genuinely needs to be externally supplied, that normally suggests that the relation between the objects is a collaboration rather than an internal dependency, in which case the appropriate goal is then encapsulation of each class, rather than encapsulation of one class inside the other.
In my experience the main problem regarding the use of DI is that whether you start with an application framework with built in DI, or you add DI support to your codebase, for some reason people assume that since you have DI support that must be the correct way to instantiate everything. They just never even bother to ask the question "does this dependency need to be externally specified?". And worse, they also start trying to force everyone else to use the DI support for everything too.
The result of this is that inexorably your codebase starts to devolve into a state where creating any instance of anything in your codebase requires reams of obtuse DI container configuration, and debugging anything is twice as hard because you have the extra workload of trying to identify how and where anything was instantiated.
So my answer to the question is this. Use DI where you can identify an actual problem that it solves for you, which you can't solve more simply any other way.
I agree that taken to an extreme, DI can violate encapsulation. Usually DI exposes dependencies which were never truly encapsulated. Here's a simplified example borrowed from Miško Hevery's Singletons are Pathological Liars:
You start with a CreditCard class and write a simple unit test.
@Test
public void creditCard_Charge()
{
    CreditCard c = new CreditCard("1234 5678 9012 3456", 5, 2008);
    c.charge(100);
}
Next month you get a bill for $100. Why did you get charged? The unit test affected a production database. Internally, CreditCard calls Database.getInstance(). Refactoring CreditCard so that it takes a DatabaseInterface in its constructor exposes the fact that there's a dependency. But I would argue that the dependency was never encapsulated to begin with since the CreditCard class causes externally visible side effects. If you want to test CreditCard without refactoring, you can certainly observe the dependency.
@Before
public void setUp()
{
    Database.setInstance(new MockDatabase());
}

@After
public void tearDown()
{
    Database.resetInstance();
}
I don't think it's worth worrying whether exposing the Database as a dependency reduces encapsulation, because it's a good design. Not all DI decisions will be so straightforward. However, none of the other answers show a counterexample.
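For completeness, a sketch of the refactored version the answer describes, with the dependency made explicit in the constructor (DatabaseInterface and MockDatabase follow the example above; the method signatures are invented):

interface DatabaseInterface {
    void charge(String cardNumber, int amount);
}

class CreditCard {
    private final String number;
    private final int month;
    private final int year;
    private final DatabaseInterface database;   // the dependency is now explicit

    CreditCard(String number, int month, int year, DatabaseInterface database) {
        this.number = number;
        this.month = month;
        this.year = year;
        this.database = database;
    }

    void charge(int amount) {
        database.charge(number, amount);        // no hidden Database.getInstance() call
    }
}

// In the unit test: new CreditCard("1234 5678 9012 3456", 5, 2008, new MockDatabase()).charge(100);
// In production: pass the real database implementation instead.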
I think it's a matter of scope. When you define encapsulation (not letting others know how), you must define what the encapsulated functionality is.
Class as is: what you are encapsulating is the single responsibility of the class. What it knows how to do. For example, sorting. If you inject some comparator for ordering, let's say, clients, that's not part of the encapsulated thing: quicksort.
Configured functionality: if you want to provide a ready-to-use functionality, then you are not providing the QuickSort class, but an instance of the QuickSort class configured with a Comparator. In that case the code responsible for creating and configuring that must be hidden from the user code. And that's the encapsulation.
When you are programming classes, that is, implementing single responsibilities in classes, you are using option 1.
When you are programming applications, that is, making something that does some useful, concrete work, you are repeatedly using option 2.
This is the implementation of the configured instance:
<bean id="clientSorter" class="QuickSort">
<property name="comparator">
<bean class="ClientComparator"/>
</property>
</bean>
This is how some other client code uses it:
<bean id="clientService" class="...">
<property name="sorter" ref="clientSorter"/>
</bean>
It is encapsulated because if you change the implementation (you change the clientSorter bean definition), it doesn't break client use. Maybe, because you use XML files with everything written together, you are seeing all the details. But believe me, the client code (ClientService) knows nothing about its sorter.
It's probably worth mentioning that Encapsulation is somewhat perspective dependent.
public class A {
private B b;
public A() {
this.b = new B();
}
}
public class A {
private B b;
public A(B b) {
this.b = b;
}
}
From the perspective of someone working on the A class, in the second example A knows a lot less about the nature of this.b.
Whereas without DI
new A()
vs
new A(new B())
The person looking at this code knows more about the nature of A in the second example.
With DI, at least all that leaked knowledge is in one place.

Does dependency injection break the Law of Demeter

I have been adding dependency injection to my code because it makes my code much easier to unit test through mocking.
However, I am requiring objects higher up my call chain to have knowledge of objects further down the call chain.
Does this break the Law of Demeter? If so does it matter?
For example: class A has a dependency on an interface B. The implementation of this interface to use is injected into the constructor of class A. Anyone wanting to use class A must now also have a reference to an implementation of B, and can call its methods directly, meaning it has knowledge of A's subcomponents (interface B).
Wikipedia says about the law of Demeter: "The fundamental notion is that a given object should assume as little as possible about the structure or properties of anything else (including its subcomponents)."
Dependency injection CAN break the Law of Demeter, if you force consumers to do the injection of the dependencies. This can be avoided through static factory methods and DI frameworks.
You can have both by designing your objects in such a way that they require the dependencies to be passed in, while at the same time having a mechanism for using them without explicitly performing the injection (factory functions and DI frameworks).
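A sketch of that idea with invented names: a static factory method performs the wiring, so a consumer never sees or supplies the dependency, while tests can still inject a mock through the constructor:

interface B {
    void help();
}

class DefaultB implements B {
    public void help() { /* ... */ }
}

class A {
    private final B b;

    A(B b) { this.b = b; }          // tests can inject a mock B here

    // Consumers call this and never know which implementation of B is used.
    static A create() {
        return new A(new DefaultB());
    }

    void doSomething() {
        b.help();                   // A uses B's services, but never reaches through B
    }
}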
How does it break it? DI fits perfectly with the idea of least knowledge. DI gives you low coupling: objects are less dependent on each other.
Citing Wikipedia:
"...an object A can request a service (call a method) of an object instance B, but object A cannot “reach through” object B to access yet another object..."
Usually DI works exactly the same way, i.e. you use services provided by injected components. If your object tries to access some of B's dependencies, i.e. it knows too much about B, that leads to high coupling and breaks the idea of DI.
"However I am requiring objects higher up my call chain to have knowledge of objects further down the call chain"
Can you give an example?
If I understand you correctly, this isn't caused by the use of dependency injection, it's caused by using mocking strategies that have you specify the function calls you expect a method to make. That's perfectly acceptable in many situations, but obviously that means you have to know something about the method you're calling, if you've specified what you think it's supposed to do.
Writing good software requires balancing tradeoffs. As the implementation becomes more complete, it becomes more inconsistent. You have to decide what risks those inconsistencies create, and whether they're worth the value created by their presence.
Does it break the law?
Strictly speaking, I think it does.
Does it matter?
The main danger of breaking the law is that you make your code more brittle.
If you really keep it to just the tests, it seems like that danger is not too bad.
Mitigation
My understanding of the Law of Demeter is that it can be followed by having "wrapper methods" which prevent directly calling down into objects.
The Law of Demeter specifies that the method M of the object O can call methods on objects created/instantiated inside M. However, there's nothing that specifies how these objects were created. I think it's perfectly fine to use an intermediary object to create these, as long as that object's purpose in life is only that - creating other objects on your behalf. In this sense, DI does not break the Law of Demeter.
This also confused me for some time. In the wiki it also says...
An object A can request a service (call a method) of an object instance B, but object A should not "reach through" object B to access yet another object, C, to request its services. Doing so would mean that object A implicitly requires greater knowledge of object B's internal structure.
And this is the crux of the matter. When you interact with Class A you should not be able to interact with the state or methods of interface B. You simply shouldn't have access to its inner workings.
As for creating class A and knowing about interface B when creating objects; that's a different scenario altogether, it is not what the law of Demeter is trying to address in software design.
I would agree with other answers in that factories and a dependency injection framework would be best to handle this. Hope that clears it up for anyone else confused by this :)
Depends :-)
I think the top answer is not correct: even with a framework, a lot of code uses dependency injection and injects high-level objects. You then get spaghetti code with lots of dependencies.
Dependency injection is best used for all the stuff that would pollute your object model, e.g. an ILogger. If you do inject a business object, ensure it's at the lowest level possible, and try to pass it the traditional way if you can. Only use dependency injection if it gets too messy.
Before I add my answer, I must qualify it. Service-Oriented Programming is built on top of OOP principles and uses OO languages. Also, SOAs follow Inversion of Control and the SOLID principles to the letter. So a lot of Service-Oriented programmers are surely arriving here. So, this answer is for Service-Oriented programmers who arrive at this question, because SOA is built on top of OOP. It does not directly answer the OP's example, but it does answer the question from an SOA perspective.
In General, the Law of Demeter doesn't apply to Service-Oriented Architectures. For OO, the Law of Demeter is talking about "Rich Objects" in OOP which have properties and methods, and whose properties may also have methods. With OOP Rich Models, it is possible to reach through a chain of objects and access methods, properties, methods of properties, methods of properties' properties, etc. But in Service-Oriented Programming, Data (Properties) are separated from Process (Methods). Your Models (mainly) only have properties (Certainly never dependencies), and your Services only have Methods and dependencies on other Services.
In SOP, you can feel free to review the properties of a model, and properties of its properties. You won't ever be able to access methods you shouldn't, only a tree of data. But what about the Services? Does the Law of Demeter apply there?
Yes, the Law of Demeter Can Be applied to SOP Services. But again, the law was originally designed for Rich Models in OOP. And though the law Can Be applied to Services, proper Dependency Injection automagically fulfills the Law of Demeter. In that sense, DI Could not possibly break the law.
In limited opposition to Mark Roddy, I can't find any situation where you can legitimately talk about Dependency Injection and "consumers" in the same sentence. If by "consumers" you mean a class that is consuming another class, that doesn't make sense. With DI, you would have a Composition Root composing your object graph, and one class should never know another class even exists. If by "consumers" you mean a programmer, then how would they not be forced to "do the injection." The programmer is the one who has to create the Composition Root, so they must do the injection. A Programmer should never "do the injection" as an instantiation within a class to consume another class.
Please review the following example which shows actual separate solutions, their references, and the implementing code:
In the top-right, we have the "Core." A lot of packages on NuGet and NPM have a "Core" Project which has Model, Interfaces, and possibly even default implementations. The Core should never ever ever depend on anything external.
In the top-left, we have an external implementation of the Core. The implementation depends on the Core, and so has knowledge of it.
In the bottom-left, we have a standalone Domain. The Domain has a Dependency on some Implementation of the Core, but Does not need to know about the implementation.
This is where I point out that neither the Domain nor the Implementation know each other exist. There is a 0% chance that either could ever reach into (Or beyond) the other one, because they don't even know they exist. The domain only knows that there is a contract, and it can somehow consume the methods by whatever is injected into it.
In the bottom-right is the Composition Root or Entry-Point. This is also known as the "Front Boundary" of the application. The root of an application knows all of its components and does little more than take input, determine who to call, compose objects, and return outputs. In other words, it can only tell the Domain "Here, use this to fulfill your contract for ICalculateThings, then give me the result of CalculateTwoThings."
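A rough Java sketch of that arrangement (ICalculateThings and CalculateTwoThings follow the answer's wording; the rest of the names are invented): only the composition root knows about both the Domain and the implementation.

// Core: the contract the Domain consumes.
interface ICalculateThings {
    int calculateTwoThings(int a, int b);
}

// Implementation: knows the Core, knows nothing about the Domain.
class SumCalculator implements ICalculateThings {
    public int calculateTwoThings(int a, int b) { return a + b; }
}

// Domain: knows only the contract, never a concrete implementation.
class Domain {
    private final ICalculateThings calculator;
    Domain(ICalculateThings calculator) { this.calculator = calculator; }
    int run() { return calculator.calculateTwoThings(2, 3); }
}

// Composition root / entry point: the only place that knows all the parts.
class Program {
    public static void main(String[] args) {
        Domain domain = new Domain(new SumCalculator());
        System.out.println(domain.run());
    }
}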
There is indeed a way to smash everything into the same project, do concrete instantiations of Services, make your dependencies public properties instead of private fields, STILL Do Dependency-Injection (horribly), and then have services call into dependencies of dependencies. But that would be bad, m'kay. You'd have to be trying to be bad to do that.
Side note: I over-complicated this on purpose. These projects could exist in one solution (as long as the architect controls the reference architecture), and there could be a few more simplifications. But the separation in the image really shows how little knowledge the system has to have about its parts. Only the Composition Root (Entry Point, Front Boundary) needs to know about the parts.
Conclusion (TL;DR;): In Oldskewl OOP, Models are Rich, and the Law of Demeter can easily be broken by looking into models of models to access their methods. But in Newskewl SOP (built on top of OOP Principles and Languages), Data is separated from Process. So you can feel free to look into properties of models. Then, for Services, dependencies are always private, and nothing knows that anything else exists other than what they are told by abstractions, contracts, interfaces.