I haven't found any clear articles on this, but I was wondering why polymorphism is the recommended design pattern over exhaustive switch cases / pattern matching. I ask because I've gotten a lot of heat from experienced developers for not using polymorphic classes, and it's been troubling me. I've personally had a terrible time with polymorphism and a wonderful time with switch cases; the reduction in abstraction and indirection makes the code so much easier to read, in my opinion. This is in direct contrast with books like "Clean Code", which are typically seen as industry standards.
Note: I use TypeScript, so the following examples may not apply in other languages, but I think the principle generally applies as long as you have exhaustive pattern matching / switch cases.
List the options
If you want to know what the possible values of an action are, this is trivial with an enum and a switch case. For classes it requires some reflection magic.
// definitely two actions here, I could even loop over them programmatically with basic primitives
enum Action {
  A = 'a',
  B = 'b',
}
Following the code
Dependency injection and abstract classes mean that jump to definition will never go where you want
// Helper that only compiles when every case has been handled above it;
// the `never` parameter type is what makes the check exhaustive.
function exhaustiveCheck(value: never): never {
  throw new Error(`Unhandled action: ${value}`);
}

function doLetterThing(myEnum: Action) {
  switch (myEnum) {
    case Action.A:
      return;
    case Action.B:
      return;
    default:
      exhaustiveCheck(myEnum);
  }
}
versus
function doLetterThing(action: BaseAction) {
  action.doAction();
}
If I jump to definition for BaseAction or doAction I will end up on the abstract class, which doesn't help me debug the function or the implementation. If you have a dependency injection pattern with only a single concrete class, you can "guess" by going to the main class / function, looking for where "BaseAction" is instantiated, following that type to its definition, and scrolling to find the implementation. That seems like generally bad UX for a developer, though.
(A small note on whether dependency injection is good at all: traits seem to do a good enough job in the cases where it is genuinely necessary, whereas applying it prematurely as a rule rather than out of necessity seems to lead to harder-to-follow code.)
Write less code
This depends, but if you have to define an extra abstract class for your base type, plus override all the function types, how is that less code than single-line switch cases? With good types, if you add an option to the enum, the type checker will flag every place you need to handle it, which usually means adding one line for the case and one or more lines for the implementation. Compare this with polymorphic classes, where you need to define a new class, which needs the new function syntax with the correct params and the opening and closing braces. In most cases, switch cases mean less code and fewer lines.
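For example, a hypothetical sketch of that claim: if I add a third member to the enum, the exhaustive check in the default branch stops compiling until every switch handles the new case.
enum Action {
  A = 'a',
  B = 'b',
  C = 'c', // newly added option
}

// Same `never` helper as above: it only accepts values that can no longer occur.
function exhaustiveCheck(value: never): never {
  throw new Error(`Unhandled action: ${value}`);
}

function describeAction(action: Action): string {
  switch (action) {
    case Action.A:
      return 'a-things';
    case Action.B:
      return 'b-things';
    case Action.C: // delete this case and exhaustiveCheck(action) fails to compile
      return 'c-things';
    default:
      return exhaustiveCheck(action);
  }
}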
Colocation
Everything for a type is in one place, which is nice, but generally whenever I implement a function like this I look for a similarly implemented function. With a switch case it's immediately adjacent; with a derived class I would need to find and open it in another file or directory.
If I implemented a feature change such as trimming spaces off the ends of a string for one type, I would need to open all the class files to make sure that if they implement something similar, it is implemented correctly in all of them. And if I forget, I might get different behaviour for different types without knowing. With a switch, the colocation makes this extremely obvious (though not foolproof).
Conclusion
Am I missing something? It doesn't make sense that we have these clear design principles, about which I can basically only find affirmative articles, yet I don't see any clear benefits, and I do see serious downsides compared to a basic pattern-matching style of development.
Consider the SOLID principles, in particular the Open/Closed Principle (OCP) and Dependency Inversion (DIP).
To extend a switch case or enum and add new functionality in the future, you must modify the existing code. Modifying legacy code is risky and expensive. Risky because you may inadvertently introduce a regression. Expensive because you have to learn (or re-learn) implementation details, and then re-test the legacy code (which presumably was working before you modified it).
Dependency on concrete implementations creates tight coupling and inhibits modularity. This makes code rigid and fragile, because a change in one place affects many dependents.
In addition, consider scalability. An abstraction supports any number of implementations, many of which are potentially unknown at the time the abstraction is created. A developer needn't understand or care about additional implementations. How many cases can a developer juggle in one switch, 10? 100?
Note this does not mean polymorphism (or OOP) is suitable for every class or application. For example, there are counterpoints in "Should every class implement an interface?". When considering extensibility and scalability, there is an assumption that a code base will grow over time. If you're working with a few thousand lines of code, "enterprise-level" standards are going to feel very heavy. Likewise, coupling a few classes together when you only have a few classes won't be very noticeable.
Benefits of good design are realized years down the road when code is able to evolve in new directions.
I think you are missing the point. The main purpose of clean code is not to make your life easier while implementing the current feature; rather, it makes your life easier in the future when you are extending or maintaining the code.
In your example, you may feel comfortable implementing your two actions using a switch case. But what happens if you need to add more actions in the future? Using the abstract class, you can easily create a new action type and the caller doesn't need to be modified. But if you keep using switch cases it will get a lot messier, especially for complex cases.
Also, following a better design pattern (DI in this case) will make the code easier to test. When you consider only the easy cases, you may not see the usefulness of proper design patterns. But if you look at the broader picture, it really pays off.
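As a rough sketch of that point in the question's own TypeScript (the class names are invented): the caller only knows the abstraction, so adding a third action is a new class, with no edit to the existing function.
interface BaseAction {
  doAction(): void;
}

class ActionA implements BaseAction {
  doAction(): void { console.log('doing A'); }
}

class ActionB implements BaseAction {
  doAction(): void { console.log('doing B'); }
}

// The caller never changes, no matter how many actions exist.
function doLetterThing(action: BaseAction): void {
  action.doAction();
}

// Adding behaviour later is purely additive:
class ActionC implements BaseAction {
  doAction(): void { console.log('doing C'); }
}

doLetterThing(new ActionC());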
"Base class" is against the Clean Code. There should not be a "Base class", not just for bad naming, also for composition over inheritance rule. So from now on, I will assume it is an interface in which other classes implement it, not extend (which is important for my example). First of all, I would like to see your concerns:
Answer for Concerns
This depends, but if you have to define an extra abstract class for your
base type, plus override all the function types, how is that less code
than single-line switch cases
I think "write less code" should not be character count. Then Ruby or GoLang or even Python beats the Java, obviously does not it? So I would not count the lines, parenthesis etc. instead code that you should test/maintain.
Everything for a type is in one place which is nice, but generally
whenever I implement a function like this I look for a similarly
implemented function.
If "look for a similarly" means, having implementation together makes copy some parts from the similar function then we also have some clue here for refactoring. Having Implementation class differently has its own reason; their implementation is completely different. They may follow some pattern, lets see from Communication perspective; If we have Letter and Phone implementations, we should not need to look their implementation to implement one of them. So your assumption is wrong here, if you look to their code to implement new feature then your interface does not guide you for the new feature. Let's be more specific;
interface Communication {
    void sendMessage();
}

class Letter implements Communication {
    public void sendMessage() {
        // get receiver
        // get sender
        // set message
        // send message
    }
}
Now we need Phone, and if we have to go to the Letter implementation to get an idea of how to implement Phone, then our interface is not enough to guide our implementation. Technically, Phone and Letter send a message differently. So we need a design pattern here, maybe the Template Method pattern? Let's see:
interface Communication {
    default void sendMessage() {
        getMessageFactory().sendMessage(getSender(), getReceiver(), getBody());
    }
    String getSender();
    String getReceiver();
    String getBody();
    MessageFactory getMessageFactory();
}

class Letter implements Communication {
    public String getSender() { return sender; }
    public String getReceiver() { return receiver; }
    public String getBody() { return body; }
    public MessageFactory getMessageFactory() { return new LetterMessageFactory(); }
}
Now when we need to implement Phone we don't need to look at the details of the other implementations. We know exactly what we need to return, and the Communication interface's default method already handles how to send the message.
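Since the question is in TypeScript, here is roughly the same idea sketched there (a hedged illustration, not the Java above: TypeScript interfaces have no default methods, so an abstract class stands in, and all the field values are made up). The point is that Phone is written without ever opening Letter.
abstract class Communication {
  // Template method: the sending algorithm is fixed here...
  sendMessage(): void {
    console.log(`from ${this.getSender()} to ${this.getReceiver()}: ${this.getBody()}`);
  }
  // ...and implementations only fill in the primitive steps.
  protected abstract getSender(): string;
  protected abstract getReceiver(): string;
  protected abstract getBody(): string;
}

class Letter extends Communication {
  protected getSender(): string { return 'Sender Street 1'; }
  protected getReceiver(): string { return 'Receiver Street 2'; }
  protected getBody(): string { return 'dear reader...'; }
}

// Phone is added later, without looking at Letter:
class Phone extends Communication {
  protected getSender(): string { return '+1 555 0100'; }
  protected getReceiver(): string { return '+1 555 0199'; }
  protected getBody(): string { return 'hello!'; }
}

new Phone().sendMessage();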
If I implemented a feature change such as trimming spaces off the ends
of a string for one type, I would need to open all the class files to
make sure if they implement something similar that it is implemented
correctly in all of them...
So if there is a "feature change", it should go only into the class that implements that feature, not into all classes. You should not have to change all of the implementations. And if the implementation really is the same in all of them, then why does each class implement it separately? It should be kept as a default method on the interface. Then, when a feature change is required, only the default method changes, and you update the implementation and its test in one place.
These are the main points I wanted to make about your concerns. But I think the real issue is that you haven't yet seen the benefit. I was also struggling before I worked on a big project that other teams needed to extend. I will split the benefits into topics with extreme examples, which may make them easier to understand:
Easy to read
Normally, when you see a function, you should not feel the need to go to its implementation to understand what is happening; it should be self-explanatory. Based on that: action.doAction(), or let's say communication.sendMessage() if the classes implement a Communication interface. I don't need to go to the base type or search for implementations to debug it. Whether the implementing class is "Letter" or "Phone", I know that it sends a message; I don't need the implementation details. So I don't want to see all the implementing classes spelled out as in your example ("switch Letter; Phone..."). In your example doLetterThing is responsible for one thing (doAction); since all the cases do the same thing, why show the developer all those cases? They just make the code harder to read.
Easy to extend
Imagine that you are extending a big project where you don't have access to its source (an extreme example, to make the benefit easier to see). In the Java world, you would be implementing an SPI (Service Provider Interface). I can show you two examples of this, https://github.com/apereo/cas and https://github.com/keycloak/keycloak, where you can see that interfaces and implementations are separated and you just implement new behaviour when it is required, with no need to touch the original source. Why is this important? Imagine the following scenario again:
Suppose that Keycloak calls communication.sendMessage(). It does not know the implementations at build time. If you extend Keycloak in this case, you can provide your own class that implements the Communication interface, let's say "Computer". Now, if your SPI is on the classpath, Keycloak reads it and calls your computer.sendMessage(). We did not touch the source code, yet we extended the capabilities of the message handling. We could not achieve that with switch cases without touching the source.
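Keycloak's SPI machinery is Java-specific, but the shape of the idea can be sketched in the question's TypeScript (everything below is hypothetical): the host only knows a registry of Communication implementations, so a new "Computer" channel is added from the outside without editing the host.
interface Communication {
  sendMessage(body: string): void;
}

// Host code: discovers implementations through a registry, never a switch.
const registry = new Map<string, Communication>();

function register(name: string, impl: Communication): void {
  registry.set(name, impl);
}

function broadcast(body: string): void {
  for (const impl of registry.values()) {
    impl.sendMessage(body);
  }
}

// "Plugin" code, written later and elsewhere, extends the host without touching it:
class Computer implements Communication {
  sendMessage(body: string): void {
    console.log(`emailing: ${body}`);
  }
}
register('computer', new Computer());

broadcast('system maintenance at 22:00');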
I've had a certain feeling these last couple of days that dependency injection should really be called the "I can't make up my mind" pattern. I know this might sound silly, but really it's about the reasoning behind why I should use Dependency Injection (DI). It is often said that I should use DI to achieve a higher level of loose coupling, and I get that part. But really... how often do I change my database once my choice has fallen on MS SQL or MySQL? Very rarely, right?
Does anyone have some very compelling reasons why DI is the way to go?
Two words, unit testing.
One of the most compelling reasons for DI is to allow easier unit testing without having to hit a database and worry about setting up 'test' data.
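For example, a minimal TypeScript sketch of that point (the repository and service below are invented for illustration): because the service receives its data source through the constructor, a test can hand it an in-memory fake and never touch a real database.
interface UserRepository {
  findName(id: number): string | undefined;
}

class GreetingService {
  constructor(private readonly users: UserRepository) {}

  greet(id: number): string {
    const name = this.users.findName(id);
    return name ? `Hello, ${name}!` : 'Hello, stranger!';
  }
}

// In production this would be backed by a real database.
// In a unit test, a tiny in-memory fake is injected instead:
const fakeRepo: UserRepository = {
  findName: (id) => (id === 1 ? 'Ada' : undefined),
};

const service = new GreetingService(fakeRepo);
console.assert(service.greet(1) === 'Hello, Ada!');
console.assert(service.greet(2) === 'Hello, stranger!');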
DI is very useful for decoupling your system. If all you're using it for is to decouple the database implementation from the rest of your application, then either your application is pretty simple or you need to do a lot more analysis on the problem domain and discover what components within your problem domain are the most likely to change and the components within your system that have a large amount of coupling.
DI is most useful when you're aiming for code reuse, versatility and robustness to changes in your problem domain.
How relevant it is to your project depends upon the expected lifespan of your code. Depending on the type of work you're doing zero reuse from one project to the next for the majority of code you're writing might actually be quite acceptable.
An example of the use of DI is creating an application that can be deployed for several clients, using DI to inject customisations for each client, which could also be described as the GoF Strategy pattern. Many of the GoF patterns can be facilitated with the use of a DI framework.
DI is more relevant to Enterprise application development in which you have a large amount of code, complicated business requirements and an expectation (or hope) that the system will be maintained for many years or decades.
Even if you don't change the structure of your program during development phases you will find out you need to access several subsystems from different parts of your program. With DI each of your classes just needs to ask for services and you're free of having to provide all the wiring manually.
This really helps me on concentrating on the interaction of things in the software design and not on "who needs to carry what around because someone else needs it later".
Additionally it also just saves a LOT of work writing boilerplate code. Do I need a singleton? I just configure a class to be one. Can I test with such a "singleton"? Yes, I still can (since I just CONFIGURED it to exist only once, but the test can instantiate an alternative implementation).
By the way, before I was using DI I didn't really understand its worth, but trying it was a real eye-opener for me: my designs are a lot more object-oriented than they were before.
By the way, in the current application I DON'T unit-test (bad, bad me) but I STILL couldn't live without DI anymore. It is so much easier moving things around and keeping classes small and simple.
While I semi-agree with you with the DB example, one of the large things that I found helpful to use DI is to help me test the layer I build on top of the database.
Here's an example...
You have your database.
You have your code that accesses the database and returns objects
You have business domain objects that take the previous item's objects and do some logic with them.
If you merge the data access with your business domain logic, your domain objects can become difficult to test. DI allows you to inject your own data access objects into your domain so that you don't depend on the database for testing or possibly demonstrations (ran a demo where some data was pulled in from xml instead of a database).
Abstracting 3rd party components and frameworks like this would also help you.
Aside from the testing example, there are a few places where DI can be used through a Design by Contract approach. You may find it appropriate to create a processing engine of sorts that calls methods of the objects you're injecting into it. While it may not truly "process" them, it runs the methods that have a different implementation in each object you provide.
I saw an example of this where every business domain object had a "Save" function that was called after it was injected into the processor. The processor modified the component with configuration information, and Save handled the object's primary state. In essence, DI supplemented the polymorphic method implementations of the objects that conformed to the interface.
Dependency Injection gives you the ability to test specific units of code in isolation.
Say I have a class Foo for example that takes an instance of a class Bar in its constructor. One of the methods on Foo might check that a Property value of Bar is one which allows some other processing of Bar to take place.
public class Foo
{
    private Bar _bar;

    public Foo(Bar bar)
    {
        _bar = bar;
    }

    public bool IsPropertyOfBarValid()
    {
        return _bar.SomeProperty == PropertyEnum.ValidProperty;
    }
}
Now let's say that Bar is instantiated and its properties are set to data from some datasource in its constructor. How might I go about testing the IsPropertyOfBarValid() method of Foo (ignoring the fact that this is an incredibly simple example)? Well, Foo is dependent on the instance of Bar passed into the constructor, which in turn is dependent on the data from the datasource that its properties are set to. What we would like to do is have some way of isolating Foo from the resources it depends upon so that we can test it in isolation.
This is where Dependency Injection comes in. What we want is to have some way of faking an instance of Bar passed to Foo such that we can control the properties set on this fake Bar and achieve what we set out to do, test that the implementation of IsPropertyOfBarValid() does what we expect it to do, i.e. return true when Bar.SomeProperty == PropertyEnum.ValidProperty and false for any other value.
There are two types of fake object, mocks and stubs. Stubs provide input for the application under test so that the test can be performed on something else. Mocks, on the other hand, provide input to the test to decide on pass/fail.
Martin Fowler has a great article on the difference between Mocks and Stubs
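To make the distinction concrete without a mocking library, here is a hand-rolled TypeScript sketch (all names invented): the stub merely feeds canned behaviour in so the code can run, while the mock records the interaction so the test can pass or fail on it.
interface Mailer {
  send(to: string, body: string): void;
}

class Signup {
  constructor(private readonly mailer: Mailer) {}
  register(email: string): void {
    this.mailer.send(email, 'Welcome!');
  }
}

// Stub: provides canned behaviour so the code under test can run.
const stubMailer: Mailer = { send: () => { /* do nothing */ } };

// Mock: records calls so the test can verify the interaction happened.
const calls: Array<[string, string]> = [];
const mockMailer: Mailer = { send: (to, body) => { calls.push([to, body]); } };

new Signup(stubMailer).register('a@example.com'); // just exercises the code
new Signup(mockMailer).register('b@example.com');
console.assert(calls.length === 1 && calls[0][0] === 'b@example.com'); // pass/fail decided via the mock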
I think that DI is worth using when you have many services/components whose implementations must be selected at runtime based on external configuration. (Note that such configuration can take the form of an XML file or a combination of code annotations and separate classes; choose what is more convenient.)
Otherwise, I would simply use a ServiceLocator, which is much "lighter" and easier to understand than a whole DI framework.
For unit testing, I prefer to use a mocking API that can mock objects on demand, instead of requiring them to be "injected" into the tested unit from a test. For Java, one such library is my own, JMockit.
Aside from loose coupling, testing of any type is achieved with much greater ease thanks to DI. You can replace an existing dependency of a class under test with a mock, a dummy, or even another version. If a class is created with its dependencies directly instantiated, it can often be difficult or even impossible to "stub" them out when required.
I just understood it tonight.
For me, dependency injection is a method for instantiating objects which require a lot of parameters to work in a specific context.
When should you use dependency injection?
You can use dependency injection if you instantiate an object in a static way. For example, if you use a class which can convert objects into an XML file or a JSON file, and you only ever need the XML file, you will have to instantiate the object and configure a lot of things if you don't use dependency injection.
When should you not use dependency injection?
If an object is instantiated with request parameters (after a form submission), you should not use dependency injection, because the object is not instantiated in a static way.
From what I have read best practice is to have classes based on an interface and loosely couple the objects, in order to help code re-use and unit test.
Is this correct and is it a rule that should always be followed?
The reason I ask is that I have recently worked on a system with hundreds of very different objects. A few shared common interfaces but most did not, and I wonder whether there should have been an interface mirroring every property and function in those classes.
I am using C# and .NET 2.0, however I believe this question applies to many languages.
It's useful for objects which really provide a service - authentication, storage etc. For simple types which don't have any further dependencies, and where there are never going to be any alternative implementations, I think it's okay to use the concrete types.
If you go overboard with this kind of thing, you end up spending a lot of time mocking/stubbing everything in the world - which can often end up creating brittle tests.
Not really. Service components (class that do things for your application) are a good fit for interfaces, but as a rule I wouldn't bother having interfaces for, say, basic entity classes.
For example:
If you're working on a domain model, then that model shouldn't be interfaces. However if that domain model wants to call service classes (like data access, operating system functions etc) then you should be looking at interfaces for those components. This reduces coupling between the classes and means it's the interface, or "contract" that is coupled.
In this situation you then start to find it much easier to write unit tests (because you can have stubs/mocks/fakes for database access etc) and can use IoC to swap components without recompiling your applications.
I'd only use interfaces where that level of abstraction was required - i.e. you need to use polymorphic behaviour. Common examples would be dependency injection or where you have a factory-type scenario going on somewhere, or you need to establish a "multiple inheritance" type behaviour.
In my case, with my development style, this is quite often (I favour aggregation over deep inheritance hierarchies for most things other than UI controls), but I have seen perfectly fine apps that use very little. It all depends...
Oh yes, and if you do go heavily into interfaces - beware web services. If you need to expose your object methods via a web service they can't really return or take interface types, only concrete types (unless you are going to hand-write all your own serialization/deserialization). Yes, that has bitten me big time...
A downside to interfaces is that they can't be versioned. Once you've shipped the interface you won't be making changes to it. If you use abstract classes then you can easily extend the contract over time by adding new methods and flagging them as virtual.
As an example, all stream objects in .NET derive from System.IO.Stream, which is an abstract class. This makes it easy for Microsoft to add new features. In version 2 of the framework they added the ReadTimeout and WriteTimeout properties without breaking any code. If they had used an interface (say IStream) they wouldn't have been able to do this. Instead they'd have had to create a new interface to define the timeout methods, and we'd have to write code to conditionally cast to this interface if we wanted to use the functionality.
Interfaces should be used when you want to clearly define the interaction between two different sections of your software. Especially when it is possible that you want to rip out either end of the connection and replace it with something else.
For example, in my CAM application I have a CuttingPath connected to a collection of Points. It makes no sense to have an IPointList interface, as CuttingPaths are always going to be composed of Points in my application.
However, I use the interface IMotionController to communicate with the machine, because we support many different types of cutting machine, each with its own command set and method of communication. So in that case it makes sense to put it behind an interface, as one installation may be using a different machine than another.
Our application has been maintained since the mid 80s and moved to an object-oriented design in the late 90s. I have found that what could change greatly exceeded what I originally thought, and the use of interfaces has grown. For example, it used to be that our DrawingPath was composed of points. But now it is composed of entities (splines, arcs, etc.), so it points to an EntityList, which is a collection of objects implementing the IEntity interface.
But that change was propelled by the realization that a DrawingPath could be drawn using many different methods. Once it was realized that a variety of drawing methods was needed, the need for an interface, as opposed to a fixed relationship to an Entity object, became apparent.
Note that in our system DrawingPaths are rendered down to a low-level cutting path, which is always a series of point segments.
I tried to take the advice of 'code to an interface' literally on a recent project. The end result was essentially duplication of the public interface (small i) of each class precisely once in an Interface (big I) implementation. This is pretty pointless in practice.
A better strategy I feel is to confine your interface implementations to verbs:
Print()
Draw()
Save()
Serialize()
Update()
...etc. This means that classes whose primary role is to store data - and if your code is well designed they would usually only do that - don't want or need interface implementations. Interfaces belong anywhere you might want runtime-configurable behaviour, for example a variety of different graph styles representing the same data.
It's better still when the thing asking for the work really doesn't want to know how the work is done. This means you can give it a macguffin that it can simply trust will do whatever its public interface says it does, and let the component in question simply choose when to do the work.
I agree with kpollock. Interfaces are used to get a common ground for objects. The fact that they can be used in IOC containers and other purposes is an added feature.
Let's say you have several types of customer classes that vary slightly but have common properties. In this case it is great to have an ICustomer interface to bind them together logically. By doing that you could create a CustomerHandler class/method that handles ICustomer objects the same way, instead of creating a handler method for each variation of customer.
This is the strength of interfaces.
If you only have a single class that implements an interface, then the interface isn't much help; it just sits there and does nothing.
Inversion of Control (IoC) can be quite confusing when it is first encountered.
What is it?
Which problem does it solve?
When is it appropriate to use and when not?
The Inversion-of-Control (IoC) pattern is about providing any kind of callback which "implements" and/or controls the reaction, instead of acting directly ourselves (in other words, inverting and/or redirecting control to an external handler/controller).
The Dependency-Injection (DI) pattern is a more specific version of IoC pattern, and is all about removing dependencies from your code.
Every DI implementation can be considered IoC, but one should not call it IoC, because implementing Dependency Injection is harder than a plain callback (don't lower your product's worth by using the general term "IoC" instead).
For DI example, say your application has a text-editor component, and you want to provide spell checking. Your standard code would look something like this:
public class TextEditor {

    private SpellChecker checker;

    public TextEditor() {
        this.checker = new SpellChecker();
    }
}
What we've done here creates a dependency between the TextEditor and the SpellChecker.
In an IoC scenario we would instead do something like this:
public class TextEditor {

    private IocSpellChecker checker;

    public TextEditor(IocSpellChecker checker) {
        this.checker = checker;
    }
}
In the first code example we are instantiating SpellChecker (this.checker = new SpellChecker();), which means the TextEditor class directly depends on the SpellChecker class.
In the second code example we are creating an abstraction by having the SpellChecker dependency in TextEditor's constructor signature (not initializing the dependency inside the class). This allows us to create the dependency and then pass it to the TextEditor class, like so:
SpellChecker sc = new SpellChecker(); // dependency
TextEditor textEditor = new TextEditor(sc);
Now the client creating the TextEditor class has control over which SpellChecker implementation to use because we're injecting the dependency into the TextEditor signature.
Note that, just as IoC is the basis of many other patterns, the sample above is only one of several kinds of Dependency Injection, for example:
Constructor injection.
Where an instance of IocSpellChecker is passed to the constructor, either automatically or manually as above.
Setter injection.
Where an instance of IocSpellChecker is passed through a setter method or public property.
Service lookup and/or service locator.
Where TextEditor asks a known provider for a globally used instance (service) of the IocSpellChecker type (possibly without storing that instance at all, instead asking the provider again each time).
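A compact TypeScript sketch of the three variants (the spell-checker type and its body are assumptions made up for illustration, not the Java above):
interface SpellChecker { check(text: string): boolean; }

class BasicSpellChecker implements SpellChecker {
  check(text: string): boolean { return !text.includes('teh'); }
}

// 1. Constructor injection: the dependency arrives when the object is built.
class EditorWithCtorInjection {
  constructor(private readonly checker: SpellChecker) {}
  run(text: string): boolean { return this.checker.check(text); }
}

// 2. Setter injection: the dependency is handed over after construction.
class EditorWithSetterInjection {
  private checker?: SpellChecker;
  setChecker(checker: SpellChecker): void { this.checker = checker; }
  run(text: string): boolean { return this.checker?.check(text) ?? true; }
}

// 3. Service locator: the object asks a known provider whenever it needs the service.
const locator = new Map<string, unknown>();
locator.set('spellChecker', new BasicSpellChecker());

class EditorWithLocator {
  run(text: string): boolean {
    const checker = locator.get('spellChecker') as SpellChecker;
    return checker.check(text);
  }
}

console.log(new EditorWithCtorInjection(new BasicSpellChecker()).run('teh cat')); // false
const editor = new EditorWithSetterInjection();
editor.setChecker(new BasicSpellChecker());
console.log(editor.run('the cat')); // true
console.log(new EditorWithLocator().run('the cat')); // true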
Inversion of Control is what you get when your program has callbacks, e.g. like a GUI program.
For example, in an old school menu, you might have:
print "enter your name"
read name
print "enter your address"
read address
etc...
store in database
thereby controlling the flow of user interaction.
In a GUI program or somesuch, instead we say:
when the user types in field a, store it in NAME
when the user types in field b, store it in ADDRESS
when the user clicks the save button, call StoreInDatabase
So now control is inverted... instead of the computer accepting user input in a fixed order, the user controls the order in which the data is entered, and when the data is saved in the database.
Basically, anything with an event loop, callbacks, or execute triggers falls into this category.
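A small TypeScript sketch of the same inversion (everything here is invented): the user code only registers what to do, while the event loop decides when it happens and in what order.
// Framework side: owns the control flow and calls back into user code.
type Handler = () => void;
const handlers = new Map<string, Handler>();

function on(event: string, handler: Handler): void {
  handlers.set(event, handler);
}

function simulateUserAction(event: string): void {
  handlers.get(event)?.();
}

// User side: only says WHAT to do, not WHEN it happens.
let name = '';
let address = '';
on('nameTyped', () => { name = 'Ada Lovelace'; });
on('addressTyped', () => { address = '12 Analytical St'; });
on('saveClicked', () => { console.log(`storing ${name}, ${address}`); });

// The order is chosen by the user at runtime, not hard-coded by the program:
simulateUserAction('addressTyped');
simulateUserAction('nameTyped');
simulateUserAction('saveClicked');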
What is Inversion of Control?
If you follow these simple two steps, you have done inversion of control:
Separate what-to-do part from when-to-do part.
Ensure that when part knows as little as possible about what part; and vice versa.
There are several techniques possible for each of these steps based on the technology/language you are using for your implementation.
--
The inversion part of the Inversion of Control (IoC) is the confusing thing; because inversion is the relative term. The best way to understand IoC is to forget about that word!
--
Examples
Event Handling. Event Handlers (what-to-do part) -- Raising Events (when-to-do part)
Dependency Injection. Code that constructs a dependency (what-to-do part) -- instantiating and injecting that dependency for the clients when needed, which is usually taken care of by the DI tools such as Dagger (when-to-do-part).
Interfaces. Component client (when-to-do part) -- Component Interface implementation (what-to-do part)
xUnit fixture. Setup and TearDown (what-to-do part) -- xUnit frameworks calls to Setup at the beginning and TearDown at the end (when-to-do part)
Template method design pattern. Template method (when-to-do part) -- primitive subclass implementation (what-to-do part)
DLL container methods in COM. DllMain, DllCanUnload, etc (what-to-do part) -- COM/OS (when-to-do part)
Inversion of Control is about separating concerns.
Without IoC: You have a laptop computer and you accidentally break the screen. And darn, you find the same model laptop screen is nowhere in the market. So you're stuck.
With IoC: You have a desktop computer and you accidentally break the screen. You find you can just grab almost any desktop monitor from the market, and it works well with your desktop.
Your desktop successfully implements IoC in this case. It accepts a variety of monitors, while the laptop does not; it needs a specific screen to get fixed.
Inversion of Control (IoC) is about getting:
Freedom. (You get married, you lose freedom and you are being controlled. You divorce, and you have just implemented Inversion of Control. That's what we call "decoupled". A good computer system discourages very close relationships.)
More flexibility. (The kitchen in your office only serves clean tap water; that is your only choice when you want to drink. Your boss implemented Inversion of Control by setting up a new coffee machine. Now you get the flexibility of choosing either tap water or coffee.)
Less dependency. (Your partner has a job, you don't, so you financially depend on your partner and you are controlled. You find a job, and you have implemented Inversion of Control. A good computer system encourages independence.)
When you use a desktop computer, you are enslaved (or say, controlled). You have to sit in front of a screen and look at it, use the keyboard to type and the mouse to navigate. And badly written software can enslave you even more. If you replace your desktop with a laptop, then you have somewhat inverted control: you can easily take it and move around. So now you can control where you are with your computer, instead of your computer controlling it.
By implementing Inversion of Control, a software/object consumer gets more controls/options over the software/objects, instead of being controlled or having fewer options.
With the above ideas in mind, we are still missing a key part of IoC. In an IoC scenario, the software/object consumer is a sophisticated framework. That means the code you created is not called by you yourself. Now let's explain why this works better for a web application.
Suppose your code is a group of workers. They need to build a car. These workers need a place and tools (a software framework) to build the car. A traditional software framework will be like a garage with many tools. So the workers need to make a plan themselves and use the tools to build the car. Building a car is not an easy business, it will be really hard for the workers to plan and cooperate properly. A modern software framework will be like a modern car factory with all the facilities and managers in place. The workers do not have to make any plan, the managers (part of the framework, they are the smartest people and made the most sophisticated plan) will help coordinate so that the workers know when to do their job (framework calls your code). The workers just need to be flexible enough to use any tools the managers give to them (by using Dependency Injection).
Although the workers hand control of managing the project at the top level to the managers (the framework), it is good to have some professionals help out. This is where the concept of IoC truly comes from.
Modern web applications with an MVC architecture depend on the framework to do URL routing and to put controllers in place for the framework to call.
Dependency Injection and Inversion of Control are related. Dependency Injection is at the micro level and Inversion of Control is at the macro level. You have to eat every bite (implement DI) in order to finish a meal (implement IoC).
Before using Inversion of Control you should be well aware of the fact that it has its pros and cons and you should know why you use it if you do so.
Pros:
Your code gets decoupled so you can easily exchange implementations of an interface with alternative implementations
It is a strong motivator for coding against interfaces instead of implementations
It's very easy to write unit tests for your code because it depends on nothing other than the objects it accepts in its constructor/setters, and you can easily initialize them with the right objects in isolation.
Cons:
IoC not only inverts the control flow in your program, it also clouds it considerably. This means you can no longer just read your code and jump from one place to another because the connections that would normally be in your code are not in the code anymore. Instead it is in XML configuration files or annotations and in the code of your IoC container that interprets these metadata.
There arises a new class of bugs where you get your XML config or your annotations wrong and you can spend a lot of time finding out why your IoC container injects a null reference into one of your objects under certain conditions.
Personally I see the strong points of IoC and I really like them but I tend to avoid IoC whenever possible because it turns your software into a collection of classes that no longer constitute a "real" program but just something that needs to be put together by XML configuration or annotation metadata and would fall (and falls) apart without it.
Wikipedia Article. To me, inversion of control is taking your sequentially written code and turning it into a delegation structure. Instead of your program explicitly controlling everything, your program sets up a class or library with certain functions to be called when certain things happen.
It solves code duplication. For example, in the old days you would manually write your own event loop, polling the system libraries for new events. Nowadays, most modern APIs you simply tell the system libraries what events you're interested in, and it will let you know when they happen.
Inversion of control is a practical way to reduce code duplication, and if you find yourself copying an entire method and only changing a small piece of the code, you can consider tackling it with inversion of control. Inversion of control is made easy in many languages through the concept of delegates, interfaces, or even raw function pointers.
It is not appropriate to use in all cases, because the flow of a program can be harder to follow when written this way. It's a useful way to design methods when writing a library that will be reused, but it should be used sparingly in the core of your own program unless it really solves a code duplication problem.
Suppose you are an object. And you go to a restaurant:
Without IoC: you ask for "apple", and you are always served apple when you ask more.
With IoC: You can ask for "fruit". You can get different fruits each time you are served, for example an apple, an orange, or a watermelon.
So, obviously, IoC is preferred when you like the varieties.
Answering only the first part.
What is it?
Inversion of Control (IoC) means creating the instances of dependencies first and the instance of a class later (optionally injecting them through the constructor), instead of creating an instance of the class first and then having the class instance create its own dependencies.
Thus, inversion of control inverts the flow of control of the program. Instead of the callee controlling the flow of control (while creating dependencies), the caller controls the flow of control of the program.
But I think you have to be very careful with it. If you overuse this pattern, you will end up with a very complicated design and even more complicated code.
Like in this example with TextEditor: if you have only one SpellChecker, maybe it is not really necessary to use IoC? Unless you need to write unit tests or something...
Anyway: be reasonable. Design patterns are good practices, not a Bible to be preached. Do not stick them everywhere.
IoC / DI to me is pushing out dependencies to the calling objects. Super simple.
The non-techy answer is being able to swap out an engine in a car right before you turn it on. If everything hooks up right (the interface), you are good.
Inversion of control is a pattern used for decoupling components and layers in the system. The pattern is implemented through injecting dependencies into a component when it is constructed. These dependences are usually provided as interfaces for further decoupling and to support testability. IoC / DI containers such as Castle Windsor, Unity are tools (libraries) which can be used for providing IoC. These tools provide extended features above and beyond simple dependency management, including lifetime, AOP / Interception, policy, etc.
a. Alleviates a component from being responsible for managing its dependencies.
b. Provides the ability to swap dependency implementations in different environments.
c. Allows a component be tested through mocking of dependencies.
d. Provides a mechanism for sharing resources throughout an application.
a. Critical when doing test-driven development. Without IoC it can be difficult to test, because the components under test are highly coupled to the rest of the system.
b. Critical when developing modular systems. A modular system is a system whose components can be replaced without requiring recompilation.
c. Critical if there are many cross-cutting concerns which need to be addressed, particularly in an enterprise application.
Let's say that we have a meeting in a hotel.
We have invited many people, so we have left out many jugs of water and many plastic cups.
When somebody wants to drink, he/she fills a cup, drinks the water and throws the cup on the floor.
After an hour or so we have a floor covered with plastic cups and water.
Let's try that after inverting the control:
Imagine the same meeting in the same place, but instead of plastic cups we now have a waiter with just one glass cup (Singleton)
When somebody wants to drink, the waiter gets one for them. They drink it and return it to the waiter.
Leaving aside the question of hygiene, the use of a waiter (process control) is much more effective and economical.
And this is exactly what Spring (or another IoC container, for example Guice) does. Instead of letting the application create what it needs using the new keyword (i.e. taking a plastic cup), Spring IoC offers the application the same cup/instance (singleton) of the needed object (glass of water).
Think of yourself as an organizer of such a meeting:
Example:
public class MeetingMember {

    private GlassOfWater glassOfWater;

    ...

    public void setGlassOfWater(GlassOfWater glassOfWater) {
        this.glassOfWater = glassOfWater;
    }

    // The glassOfWater object arrives initialized and ready to use;
    // Spring IoC calls the setGlassOfWater method itself in order to
    // offer the MeetingMember a GlassOfWater instance.
}
Useful links:-
http://adfjsf.blogspot.in/2008/05/inversion-of-control.html
http://martinfowler.com/articles/injection.html
http://www.shawn-barrett.com/blog/post/Tip-of-the-day-e28093-Inversion-Of-Control.aspx
I shall write down my simple understanding of these two terms:
For a quick understanding, just read the examples.
Dependency Injection (DI):
Dependency injection generally means passing an object that a method depends on as a parameter to the method, rather than having the method create the dependent object. What it means in practice is that the method does not depend directly on a particular implementation; any implementation that meets the requirements can be passed as a parameter.
With this, objects declare their dependencies.
And Spring makes them available. This leads to loosely coupled application development.
Quick example: when an Employee object is created,
it will automatically get an Address object
(if Address is defined as a dependency of the Employee object).
Inversion of Control (IoC) Container:
This is a common characteristic of frameworks.
The IoC container manages Java objects from instantiation to destruction through its BeanFactory. Java components that are instantiated by the IoC container are called beans, and the IoC container manages a bean's scope, lifecycle events, and any AOP features for which it has been configured and coded.
Inversion of control as a design guideline serves the following purposes:
There is a decoupling of the execution of a certain task from implementation.
Every module can focus on what it is designed for.
Modules make no assumptions about what other systems do but rely on their contracts.
Replacing modules has no side effect on other modules.
I will keep things abstract here. You can visit the following links for a detailed understanding of the topic.
A good read with example
Detailed explanation
I found a very clear example here which explains how the 'control is inverted'.
Classic code (without Dependency injection)
Here is how a code not using DI will roughly work:
Application needs Foo (e.g. a controller), so:
Application creates Foo
Application calls Foo
Foo needs Bar (e.g. a service), so:
Foo creates Bar
Foo calls Bar
Bar needs Bim (a service, a repository, …), so:
Bar creates Bim
Bar does something
Using dependency injection
Here is how a code using DI will roughly work:
Application needs Foo, which needs Bar, which needs Bim, so:
Application creates Bim
Application creates Bar and gives it Bim
Application creates Foo and gives it Bar
Application calls Foo
Foo calls Bar
Bar does something
The control of the dependencies is inverted from one being called to the one calling.
What problems does it solve?
Dependency injection makes it easy to swap in a different implementation of the injected classes. While unit testing you can inject a dummy implementation, which makes testing a lot easier.
Ex: Suppose your application stores user-uploaded files in Google Drive; with DI your controller code may look like this:
interface StorageServiceInterface
{
    public function authenticate($user);
    public function putFile($file);
    public function getFile($file);
}

class SomeController
{
    private $storage;

    function __construct(StorageServiceInterface $storage)
    {
        $this->storage = $storage;
    }

    public function myFunction($fileName)
    {
        return $this->storage->getFile($fileName);
    }
}

class GoogleDriveService implements StorageServiceInterface
{
    public function authenticate($user) {}
    public function putFile($file) {}
    public function getFile($file) {}
}
When your requirements change and, say, instead of Google Drive you are asked to use Dropbox, you only need to write a Dropbox implementation of the StorageServiceInterface. You don't have to make any changes in the controller as long as the Dropbox implementation adheres to the StorageServiceInterface.
While testing, you can create a mock for the StorageServiceInterface with a dummy implementation where all the methods return null (or any predefined value, as per your testing requirements).
Instead, if you had the controller class construct the storage object with the new keyword like this:
class SomeController
{
    private $storage;

    function __construct()
    {
        $this->storage = new GoogleDriveService();
    }

    public function myFunction($fileName)
    {
        return $this->storage->getFile($fileName);
    }
}
When you want to change to the Dropbox implementation, you have to replace all the lines where a new GoogleDriveService object is constructed and use DropboxService instead. Besides, when testing the SomeController class, the constructor always creates the GoogleDriveService and the actual methods of that class are triggered.
When is it appropriate and when not?
In my opinion you use DI when you think there are (or there can be) alternative implementations of a class.
I agree with NilObject, but I'd like to add to this:
if you find yourself copying an entire method and only changing a small piece of the code, you can consider tackling it with inversion of control
If you find yourself copying and pasting code around, you're almost always doing something wrong. Codified as the design principle Once and Only Once.
For example, suppose task #1 is to create an object.
Without the IoC concept, task #1 is supposed to be done by the programmer. But with the IoC concept, task #1 is done by the container.
In short, control gets inverted from the programmer to the container. That is why it is called inversion of control.
I found one good example here.
It seems that the most confusing thing about "IoC" the acronym and the name for which it stands is that it's too glamorous of a name - almost a noise name.
Do we really need a name by which to describe the difference between procedural and event driven programming? OK, if we need to, but do we need to pick a brand new "bigger than life" name that confuses more than it solves?
Inversion of control is when you go to the grocery store and your wife gives you the list of products to buy.
In programming terms, she passed a callback function getProductList() to the function you are executing - doShopping().
It allows the user of the function to define some parts of it, making it more flexible.
I understand that the answer has already been given here. But I still think some basics about inversion of control have to be discussed at length here for future readers.
Inversion of Control (IoC) is built on a very simple principle called the Hollywood Principle, which says:
Don't call us, we'll call you
What it means is: don't go to Hollywood to fulfill your dream; rather, if you are worthy, Hollywood will find you and make your dream come true. Pretty much inverted, huh?
Now, when we discuss the principle of IoC, we tend to forget about Hollywood. For IoC there have to be three elements: a Hollywood, you, and a task, such as fulfilling your dream.
In our programming world, Hollywood represents a generic framework (which may be written by you or someone else), you represent the user code you wrote, and the task represents the thing you want to accomplish with your code. Now you never go and trigger your task by yourself; not in IoC! Rather, you have designed everything so that your framework will trigger your task for you. Thus you have built a reusable framework which can make someone a hero or someone else a villain. But that framework is always in charge: it knows when to pick someone, and that someone only knows what it wants to be.
Here is a real-life example. Suppose you want to develop a web application. So you create a framework which will handle all the common things a web application should handle, like handling HTTP requests, creating the application menu, serving pages, managing cookies, triggering events, etc.
Then you leave some hooks in your framework where you can put further code to generate custom menus, pages, cookies, or to log some user events, etc. On every browser request, your framework runs, executes your custom code if it is hooked in, and then serves the result back to the browser.
So the idea is pretty simple. Rather than creating a user application which controls everything, you first create a reusable framework which controls everything, then write your custom code and hook it into the framework to execute it in time.
Laravel and EJB are examples of such frameworks.
Reference:
https://martinfowler.com/bliki/InversionOfControl.html
https://en.wikipedia.org/wiki/Inversion_of_control
Inversion of Control is a generic principle, while Dependency Injection realises this principle as a design pattern for object graph construction (i.e. configuration controls how the objects are referencing each other, rather than the object itself controlling how to get the reference to another object).
Looking at Inversion of Control as a design pattern, we need to look at what we are inverting. Dependency Injection inverts control of constructing a graph of objects. In layman's terms, inversion of control implies a change in the flow of control in the program. E.g. in a traditional standalone app, we have the main method, from which control gets passed to third-party libraries (in case we have used a third-party library's function), but through inversion of control, control gets transferred from the third-party library code to our code, as we are taking the service of the third-party library. But there are other aspects that need to be inverted within a program - e.g. invocation of methods and threads to execute the code.
For those interested in more depth on Inversion of Control, a paper has been published outlining a more complete picture of Inversion of Control as a design pattern (OfficeFloor: using office patterns to improve software design, http://doi.acm.org/10.1145/2739011.2739013, with a free copy available to download from http://www.officefloor.net/about.html).
What is identified is the following relationship:
Inversion of Control (for methods) = Dependency (state) Injection + Continuation Injection + Thread Injection
Summary of above relationship for Inversion of Control available - http://dzone.com/articles/inversion-of-coupling-control
IoC is about inverting the relationship between your code and third-party code (library/framework):
In normal s/w development, you write the main() method and call "library" methods. You are in control :)
In IoC the "framework" controls main() and calls your methods. The Framework is in control :(
DI (Dependency Injection) is about how control flows in the application. A traditional desktop application had control flow from your application (the main() method) to other library method calls, but with DI the control flow is inverted: the framework takes care of starting your app, initializing it, and invoking your methods whenever required.
In the end you always win :)
I like this explanation: http://joelabrahamsson.com/inversion-of-control-an-introduction-with-examples-in-net/
It starts simple and shows code examples as well.
The consumer, X, needs the consumed class, Y, to accomplish something. That’s all good and natural, but does X really need to know that it uses Y?
Isn’t it enough that X knows that it uses something that has the behavior, the methods, properties etc, of Y without knowing who actually implements the behavior?
By extracting an abstract definition of the behavior used by X in Y, illustrated as I below, and letting the consumer X use an instance of that instead of Y it can continue to do what it does without having to know the specifics about Y.
In the illustration above Y implements I and X uses an instance of I. While it’s quite possible that X still uses Y what’s interesting is that X doesn’t know that. It just knows that it uses something that implements I.
Read article for further info and description of benefits such as:
X is not dependent on Y anymore
More flexible, implementation can be decided in runtime
Isolation of code unit, easier testing
...
A very simply written explanation can be found here:
http://binstock.blogspot.in/2008/01/excellent-explanation-of-dependency.html
It says -
"Any nontrivial application is made up of two or more classes that
collaborate with each other to perform some business logic.
Traditionally, each object is responsible for obtaining its own
references to the objects it collaborates with (its dependencies).
When applying DI, the objects are given their dependencies at creation
time by some external entity that coordinates each object in the
system. In other words, dependencies are injected into objects."
Programmatically speaking
IoC in easy terms: it's the use of an interface as a way of specifying something (such as a field or a parameter) as a wildcard that can be used by several classes. It allows reusability of the code.
For example, let's say that we have two classes: Dog and Cat. Both share the same qualities/state: age, size, weight. So instead of creating service classes called DogService and CatService, I can create a single one called AnimalService that accepts Dog and Cat, as long as they implement the interface IAnimal.
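A short TypeScript version of that Dog/Cat example (the properties and the service method are assumptions made up for illustration):
interface IAnimal {
  age: number;
  weightKg: number;
}

class Dog implements IAnimal {
  constructor(public age: number, public weightKg: number) {}
}

class Cat implements IAnimal {
  constructor(public age: number, public weightKg: number) {}
}

// One service works for anything that fulfils IAnimal,
// instead of a separate DogService and CatService.
class AnimalService {
  isAdult(animal: IAnimal): boolean {
    return animal.age >= 2;
  }
}

const animalService = new AnimalService();
console.log(animalService.isAdult(new Dog(3, 20))); // true
console.log(animalService.isAdult(new Cat(1, 4)));  // false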
However, pragmatically speaking, it has some drawbacks.
a) Most developers don't know how to use it. For example, I can create a class called Customer and automatically (using the IDE's tooling) create an interface called ICustomer. So it's not rare to find a folder filled with classes and interfaces, regardless of whether the interfaces will ever be reused. It's called BLOATED. Some people could argue that "maybe in the future we could use it". :-|
b) It has some limitations. For example, let's go back to the case of Dog and Cat and say I want to add a new service (functionality) only for dogs: calculating the number of days I need to train a dog (trainDays()). For a cat it's useless; cats can't be trained (I'm joking).
b.1) If I add trainDays() to AnimalService then it also applies to cats, and that's not valid at all.
b.2) I can add a condition in trainDays() that checks which class is used. But that completely breaks the IoC.
b.3) I can create a new service class called DogService just for the new functionality. But it increases the maintenance cost, because we will have two service classes (with similar functionality) for Dog, and that's bad.
Inversion of control is about transferring control from the library to the client. It makes more sense when we talk about a client that injects (passes) a function value (lambda expression) into a higher-order function (library function), thereby controlling (changing) the behavior of the library function.
So, a simple implementation (with huge implications) of this pattern is a higher order library function (which accepts another function as an argument). The library function transfers control over its behavior by giving the client the ability to supply the "control" function as an argument.
For example, library functions like "map", "flatMap" are IoC implementations.
Of course, a limited IoC version is, for example, a boolean function parameter. A client may control the library function by switching the boolean argument.
A client or framework that injects library dependencies (which carry behavior) into libraries may also be considered IoC.
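For instance, in TypeScript terms (a trivial sketch): Array.prototype.map is the library function, and the lambda the client passes in is the "control" that decides what happens to each element; a boolean parameter is the cruder, more limited version of the same inversion.
// The library controls the iteration; the client controls the behaviour.
const prices = [10, 25, 40];

// Client-supplied lambda decides WHAT happens to each element:
const doubled = prices.map((p) => p * 2); // [20, 50, 80]

// The cruder version of the same inversion: a boolean parameter switching behaviour.
function total(values: number[], roundToTens: boolean): number {
  const sum = values.reduce((a, b) => a + b, 0); // 75
  return roundToTens ? Math.round(sum / 10) * 10 : sum;
}

console.log(doubled);
console.log(total(prices, true)); // 80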
I've read a lot of answers for this but if someone is still confused and needs a plus ultra "laymans term" to explain IoC here is my take:
Imagine a parent and child talking to each other.
Without IoC:
Parent: You can only speak when I ask you questions and you can only act when I give you permission.
Parent: This means, you can't ask me if you can eat, play, go to the bathroom or even sleep if I don't ask you.
Parent: Do you want to eat?
Child: No.
Parent: Okay, I'll be back. Wait for me.
Child: (Wants to play but since there's no question from the parent, the child can't do anything).
After 1 hour...
Parent: I'm back. Do you want to play?
Child: Yes.
Parent: Permission granted.
Child: (finally is able to play).
This simple scenario shows that control is centered on the parent. The child's freedom is restricted and depends heavily on the parent's questions. The child can ONLY speak when asked to speak, and can ONLY act when granted permission.
With IoC:
The child has now the ability to ask questions and the parent can respond with answers and permissions. Simply means the control is inverted!
The child is now free to ask questions anytime, and though there is still a dependency on the parent for permissions, the child is not dependent when it comes to speaking/asking questions.
In a technological way of explaining, this is very similar to console/shell/cmd vs GUI interaction (which is Mark Harrison's answer above, the second top answer).
In a console, you are dependent on what is being asked/displayed to you and you can't jump to other menus and features without answering its question first, following a strict sequential flow (programmatically this is like a method/function loop).
However, with a GUI, the menus and features are laid out and the user can select whatever it needs, thus having more control and being less restricted (programmatically, menus have a callback that runs when selected and an action takes place).
Since there are already many answers to the question, but none of them breaks down the term Inversion of Control, I see an opportunity to give a more concise and useful answer.
Inversion of Control is a pattern that implements the Dependency Inversion Principle (DIP). DIP states the following:
1. High-level modules should not depend on low-level modules. Both should depend on abstractions (e.g. interfaces).
2. Abstractions should not depend on details. Details (concrete implementations) should depend on abstractions.
There are three types of Inversion of Control:
Interface Inversion
Providers shouldn't define an interface. Instead, the consumer should define the interface and providers must implement it. Interface Inversion eliminates the need to modify the consumer each time a new provider is added.
Flow Inversion
Changes the control of the flow. For example, you have a console application where you are asked to enter many parameters, and after each entered parameter you are forced to press Enter. You can apply Flow Inversion here and implement a desktop application where the user can choose the order in which parameters are entered, can edit them, and at the final step needs to press Enter only once.
Creation Inversion
It can be implemented by the following patterns: Factory pattern, Service Locator, and Dependency Injection. Creation Inversion helps to eliminate dependencies between types by moving the creation of dependency objects outside of the type that uses them. Why are dependencies bad? Here are a couple of examples: direct creation of a new object in your code makes testing harder; it is impossible to change references in assemblies without recompilation (an OCP violation); you can't easily replace a desktop UI with a web UI.
Creating an object within a class is called tight coupling. Spring removes this dependency by following a design pattern (DI/IoC), in which an object of the class is passed into the constructor rather than created inside the class. Moreover, we give the constructor a superclass reference variable to define a more general structure.
Using IoC you are not new'ing up your objects. Your IoC container will do that and manage their lifetime.
It solves the problem of having to manually change every instantiation of one type of object to another.
It is appropriate when you have functionality that may change in the future or that may differ depending on the environment or configuration used.