Polymorphism vs if/else logic - OOP

class Person {
    private state = "normal"; // or "cripple"
    run() {
        if (this.state === "normal") {
            console.log("run");
        } else {
            console.log("hobble");
        }
    }
}
//vs
abstract class AttemptRun {
    abstract run(): void;
}
class NormalRun extends AttemptRun {
    run() {
        console.log("run");
    }
}
class CrippleRun extends AttemptRun {
    run() {
        console.log("hobble");
    }
}
class Person {
    protected runAbility: AttemptRun;
    constructor(runAbility: AttemptRun) {
        this.runAbility = runAbility;
    }
    run() {
        this.runAbility.run();
    }
}
Assuming that I understood the concept, here is my question: why is polymorphism better than if/else logic?
It seems to me you will still need a factory or some other method to set the type of the person's ability, so if the logic is not here, it's going to be somewhere else.
Why is this repeated so much in the books I read, like Clean Code, where if/else chains like this are listed as a code smell?
I feel like it can make unit testing a little easier, because then you only test the abilities and you don't need to test the class that is using them.
Is that all it has to offer? Instead of a simple if/else, now you need to write a different class and a factory? Hardly seems fair.
Perhaps it's more work now, but in the long run it will be better.
What are the weaknesses of each approach?
Is this something you would do even for a small class? Basically, I don't know if I understand the concept, and then, assuming I do, how practical is it to use?
A wise developer might use it only when they need something specific, but I don't know what those specifics are.

There's a couple of things about your simple example that don't demonstrate the benefits of the polymorphic approach.
In your case there's just one if statement (in run), just 2 variations of person, and the behaviour in each case is very similar (they both just log some text).
Consider a larger case where you've got more functionality, though. If you add an attemptToDance, you'd introduce a new if/else block; if you add another variation of person, every existing function needs a new if or case. As you add more functionality you end up with many cases in many if/else blocks, and there's no way the compiler can verify for you that you haven't missed one of the person types in one of the if cases.
Catching errors with unit tests is great, but choosing a design that makes the error impossible is even better - the compiler never misses things like that, and you know the compiler ran and worked (you hope the unit tests were run successfully, but you're never quite as certain).
If you have an abstract base class defining an interface that all types of person implement, then the compiler will tell you if you fail to implement one of the methods for one of the derived person classes.
In a larger, real case, the implementation of each method on each type of person can and probably will vary more than just outputting different text. If these variations stay inside if cases, all of that different functionality ends up in one place, and you have code that depends on many things at once; this makes testing and maintenance harder. If the person classes have state that the methods interact with, this complicates things even more, and polymorphism allows you to wrap that behaviour up in a class, without that class needing to concern itself with the other types of person.
In the simple case the if/else version works, it just doesn't scale very well in many cases.
You may still need an if/else or switch in a factory method somewhere, but one switch that's just responsible for construction is easier to maintain than many switches or if blocks.
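To make that concrete, here is a minimal sketch of such a factory (the function name and the string union are my own, not from the original example):

function createRunAbility(kind: "normal" | "cripple"): AttemptRun {
    // The only switch that knows every person type: adding a type means
    // one new case here plus one new class; no other code changes.
    switch (kind) {
        case "normal":
            return new NormalRun();
        case "cripple":
            return new CrippleRun();
        default:
            // unreachable given the union type, but satisfies the compiler
            throw new Error(`unknown kind: ${kind}`);
    }
}

new Person(createRunAbility("cripple")).run(); // "hobble"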

Using polymorphism instead of if-else statements has pros and cons.
PROs
OOP is a way to improve and promote the reuse of code. With if/else statements instead of an abstract class/interface AttemptRun with many implementations, you could bump into a situation in which you need to add another class, e.g. Animal, and you would also have to rewrite the cases it shares with Person.
Polymorphism also helps maintainability by improving readability. Very long if/else chains are tiring to read, and you have to remember what each branch is for; a class name directly reminds you of the purpose of the specific case.
CONs
Pro 2 is also a con: as you said, polymorphism forces you to test all the related classes that you use to inject responsibilities and change behaviours.
Polymorphism has a performance cost: the correct implementation of a method is resolved at runtime through dynamic dispatch; a short chain of if/else comparisons can be faster.

Related

Polymorphism versus switch case tradeoffs

I haven't found any clear articles on this, but I was wondering about why polymorphism is the recommended design pattern over exhaustive switch case / pattern matching. I ask this because I've gotten a lot of heat from experienced developers for not using polymorphic classes, and it's been troubling me. I've personally had a terrible time with polymorphism and a wonderful time with switch cases, the reduction in abstractions and indirection makes readability of the code so much easier in my opinion. This is in direct contrast with books like "clean code" which are typically seen as industry standards.
Note: I use TypeScript, so the following examples may not apply in other languages, but I think the principle generally applies as long as you have exhaustive pattern matching / switch cases.
List the options
If you want to know the possible values of an action, this is trivial with an enum and a switch case. For classes this requires some reflection magic.
// definitely two actions here, I could even loop over them programmatically with basic primitives
enum Action {
    A = 'a',
    B = 'b',
}
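For what it's worth, with a string enum that programmatic loop really is trivial; a sketch:

// Object.values on a string enum yields its members' values:
for (const action of Object.values(Action)) {
    console.log(action); // 'a', then 'b'
}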
Following the code
Dependency injection and abstract classes mean that jump to definition will never go where you want
function doLetterThing(myEnum: Action) {
    switch (myEnum) {
        case Action.A:
            return;
        case Action.B:
            return;
        default:
            exhaustiveCheck(myEnum);
    }
}
versus
function doLetterThing(action: BaseAction) {
    action.doAction();
}
If I jump to definition for BaseAction or doAction I will end up on the abstract class, which doesn't help me debug the function or the implementation. If you have a dependency injection pattern with only a single class, this means that you can "guess" by going to the main class / function and looking for how "BaseAction" is instantiated and following that type to the place and scrolling to find the implementation. This seems generally like a bad UX for a developer though.
(A small note about whether dependency injection is good: traits seem to do a good enough job in the cases where they are necessary, though either approach applied prematurely as a rule, rather than out of necessity, seems to lead to harder-to-follow code.)
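(As an aside, exhaustiveCheck is not defined in the post; it is commonly written as a helper taking never, so the compiler rejects the switch the moment a new enum member goes unhandled. A sketch:)

function exhaustiveCheck(value: never): never {
    // If a new Action member is added but not handled above, the value
    // reaching the default branch is no longer of type never, and the
    // call no longer type-checks - the compiler points at the gap.
    throw new Error(`unhandled case: ${JSON.stringify(value)}`);
}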
Write less code
This depends, but if you have to define an extra abstract class for your base type, plus override all the function types, how is that less code than single-line switch cases? With good types, if you add an option to the enum, your type checker will flag all the places you need to handle it, which usually means adding one line for the case and one or more lines for the implementation. Compare this with polymorphic classes, where you need to define a whole new class, which needs the new function syntax with the correct params and the opening and closing parens. In most cases, switch cases have less code and fewer lines.
Colocation
Everything for a type is in one place, which is nice, but generally whenever I implement a function like this, I look for a similarly implemented function. With a switch case it's immediately adjacent; with a derived class I would need to find and open another file or directory.
If I implemented a feature change, such as trimming spaces off the ends of a string for one type, I would need to open all the class files to make sure that, if they implement something similar, it is implemented correctly in all of them. And if I forget, I might get different behaviour for different types without knowing. With a switch, the colocation makes this extremely obvious (though not foolproof).
Conclusion
Am I missing something? It doesn't make sense that we have these supposedly clear design principles, about which I can basically only find affirmative articles, yet I see no clear benefits and serious downsides compared to a basic pattern-matching style of development.
Consider the SOLID principles, in particular OCP (Open/Closed) and DI (Dependency Inversion).
To extend a switch case or enum and add new functionality in the future, you must modify the existing code. Modifying legacy code is risky and expensive. Risky because you may inadvertently introduce regression. Expensive because you have to learn (or re-learn) implementation details, and then re-test the legacy code (which presumably was working before you modified it).
Dependency on concrete implementations creates tight coupling and inhibits modularity. This makes code rigid and fragile, because a change in one place affects many dependents.
In addition, consider scalability. An abstraction supports any number of implementations, many of which are potentially unknown at the time the abstraction is created. A developer needn't understand or care about additional implementations. How many cases can a developer juggle in one switch, 10? 100?
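As a sketch of that extension story, reusing the question's names (ActionC and the exact interface shape are my assumptions):

// Assuming BaseAction is an interface (or abstract class) with one method:
interface BaseAction {
    doAction(): void;
}

// A brand-new action: doLetterThing(action: BaseAction) and every other
// existing caller compile and run unchanged - the Open/Closed Principle at work.
class ActionC implements BaseAction {
    doAction() {
        console.log("c");
    }
}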
Note this does not mean polymorphism (or OOP) is suitable for every class or application. For example, there are counterpoints in "Should every class implement an interface?" When considering extensibility and scalability, there is an assumption that a code base will grow over time. If you're working with a few thousand lines of code, "enterprise-level" standards are going to feel very heavy. Likewise, coupling a few classes together when you only have a few classes won't be very noticeable.
Benefits of good design are realized years down the road when code is able to evolve in new directions.
I think you are missing the point. The main purpose of clean code is not to make your life easier while implementing the current feature; rather, it makes your life easier in the future when you are extending or maintaining the code.
In your example, you may feel fine implementing your two actions using a switch case. But what happens when you need to add more actions in the future? Using the abstract class, you can easily create a new action type and the caller doesn't need to be modified. But if you keep using the switch case, it will get a lot messier, especially in complex cases.
Also, following a better design pattern (DI in this case) will make the code easier to test. When you consider only easy cases, you may not see the usefulness of proper design patterns. But if you consider the broader picture, it really pays off.
"Base class" is against the Clean Code. There should not be a "Base class", not just for bad naming, also for composition over inheritance rule. So from now on, I will assume it is an interface in which other classes implement it, not extend (which is important for my example). First of all, I would like to see your concerns:
Answer for Concerns
This depends, but if have to define an extra abstract class for your
base type, plus override all the function types, how is that less code
than single line switch cases
I think "write less code" should not be character count. Then Ruby or GoLang or even Python beats the Java, obviously does not it? So I would not count the lines, parenthesis etc. instead code that you should test/maintain.
Everything for a type is in one place which is nice, but generally
whenever I implement a function like this is I look for a similarly
implemented function.
If "look for a similarly" means, having implementation together makes copy some parts from the similar function then we also have some clue here for refactoring. Having Implementation class differently has its own reason; their implementation is completely different. They may follow some pattern, lets see from Communication perspective; If we have Letter and Phone implementations, we should not need to look their implementation to implement one of them. So your assumption is wrong here, if you look to their code to implement new feature then your interface does not guide you for the new feature. Let's be more specific;
interface Communication {
    void sendMessage();
}
class Letter implements Communication {
    public void sendMessage() {
        // get receiver
        // get sender
        // set message
        // send message
    }
}
Now we need Phone. If we have to go to the Letter implementation to get an idea of how to implement Phone, then our interface is not enough to guide our implementation. Technically, Phone and Letter send a message differently. So we need a design pattern here, maybe the Template Method pattern? Let's see:
interface Communication {
    default void sendMessage() {
        getMessageFactory().sendMessage(getSender(), getReceiver(), getBody());
    }
    String getSender();
    String getReceiver();
    String getBody();
    MessageFactory getMessageFactory();
}
class Letter implements Communication {
    public String getSender() { return sender; }
    public String getReceiver() { return receiver; }
    public String getBody() { return body; }
    public MessageFactory getMessageFactory() { return new LetterMessageFactory(); }
}
Now when we need to implement Phone, we don't need to look at the details of other implementations. We know exactly what we need to return, and our Communication interface's default method handles how to send the message.
If I implemented a feature change such as trimming spaces off the ends
of a string for one type, I would need to open all the class files to
make sure if they implement something similar that it is implemented
correctly in all of them...
So if there is a "feature change", it should affect only the class that implements that feature, not all classes. You should not have to change all of the implementations. And if the implementation is the same in all of them, why does each implement it separately? It should be kept as a default method in the interface. Then, when a feature change is required, only the default method changes, and you update the implementation and the test in one place.
These are the main points I wanted to make in answer to your concerns. But I think the main point is that you haven't yet experienced the benefit. I was also struggling before I worked on a big project whose features other teams needed to extend. I will divide the benefits into topics, with extreme examples that may make them easier to understand:
Easy to read
Normally, when you see a function, you should not feel the need to go to its implementation to understand what is happening; it should be self-explanatory. Based on that fact: action.doAction(), or let's say communication.sendMessage() if they implement the Communication interface, tells me enough. I don't need to go to the base class and search for implementations in order to debug. Even if the implementing class is Letter or Phone, I know that they send a message; I don't need their implementation details. So I don't want to see all the implemented classes spelled out, as in your example ("switch Letter; Phone..." etc.). In your example, doLetterThing is responsible for one thing (doAction); since all the cases do the same thing, why show the developer all of them? It just makes the code harder to read.
Easy to extend
Imagine that you are extending a big project where you don't have access to the source (an extreme example, to show the benefit more easily). In the Java world, I can say you are implementing an SPI (Service Provider Interface). I can show you two examples of this, https://github.com/apereo/cas and https://github.com/keycloak/keycloak, where you can see that interfaces and implementations are separated and you just implement new behaviour when it is required, with no need to touch the original source. Why is this important? Imagine the following scenario again:
Let's suppose that Keycloak calls communication.sendMessage(). It doesn't know the implementations at build time. If you extend Keycloak in this case, you can have your own class that implements the Communication interface, let's say "Computer". Now, if you have your SPI on the classpath, Keycloak reads it and calls your computer.sendMessage(). We did not touch the source code, but we extended the capabilities of the message-handling class. We couldn't achieve this with switch cases without touching the source.

How to separate your code from specific customer code?

I have the following design problem:
I have many lines of object-oriented source code (C++), and our customers want specific changes to our code to fit their needs. Here is a very simplified example:
void somefunction() {
    // do something
}
The function after I inserted the customer wishes:
void somefunction() {
    // do something
    setFlag(5000);
}
This doesn't look so bad, but we have many customers who want to set their own flag values at many different locations in the code. The code is getting messier and messier. How can I separate this customer code from my source code? Is there a design pattern for this?
One strategy to deal with this is to pull the specifics "up" from this class to the "top", where they can be set up or configured properly.
What I mean is:
Get the concrete settings out of the class. Generalize, make it a parameter in the constructor, or make different subclasses or classes, etc.
Make all the other objects that depend on this depend on the interface only, so they don't know about these settings or options.
On the "top", in the main() method, or some builders or factories where everything is plugged together, there you can plug in the exact parameters or implementations you need for the specific customer.
I'm afraid there is no (correct) way around refactoring these classes to pull all of these specifics into one place.
There are workarounds, like reading configuration values at all of these places, or just creating different branches for the different customer versions, but these do not really scale and will cause maintenance problems, in my experience.
This is a pretty general question, so the answer will be quite general. You want your software to be open for extension, but closed for modification. There are many ways to achieve this, with different degrees of openness, from simple ones like parameters up to architecture-level frameworks and patterns. Many of the design patterns, e.g. Template Method and Strategy, deal with these kinds of issues. Essentially, you provide hooks or placeholders in your code where you can plug in custom behavior.
In modern C++, some of these patterns, or their implementation with explicit classes, are a bit dated and can be replaced with lambda functions instead. There are also numerous examples in standard libraries, e.g. the use of allocators in STL containers. The allocator lets you, as a customer of the STL, change the way memory is allocated and deallocated.
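As a sketch of such a hook (TypeScript here for brevity; the hook name is mine, setFlag is from the question):

// Illustrative stand-in for the question's customer-specific call:
declare function setFlag(value: number): void;

// The shared code exposes a hook instead of hard-coding customer logic:
type PostProcessHook = () => void;

function somefunction(postProcess: PostProcessHook = () => {}) {
    // do something
    postProcess(); // customer-specific behaviour plugs in here
}

// At the "top" (main, or a per-customer factory):
somefunction(() => setFlag(5000));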
To limit uncontrolled changes to your code, you should consider exposing to your customer a strong base class (in the form of an interface or abstract class) with some (or all) methods closed to modification.
Then every customer extends the base class behaviour by implementing or subclassing it. Briefly, in my view, each customer corresponds to a subclass: CustomerA, CustomerB, etc. In this way you divide up the code written for each customer.
In my opinion, the set of base-class methods open to modification should be very limited or, better, empty. The added behaviour should live only in methods added in the derived class, if possible; this way, you avoid uncontrolled modification of methods that must not be modified.
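A minimal sketch of that shape (class and method names are mine; setFlag is again borrowed from the question):

declare function setFlag(value: number): void; // as in the question

abstract class BaseProcess {
    run(): void {
        // shared behaviour, closed to modification
        this.customerHook(); // the single sanctioned extension point
    }
    protected customerHook(): void {
        // default: no customer-specific behaviour
    }
}

class CustomerA extends BaseProcess {
    protected override customerHook(): void {
        setFlag(5000); // CustomerA's change lives only here
    }
}

This keeps each customer's additions in their own subclass while the base pipeline stays untouched.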

Why are helper classes an anti-pattern

A recent question here made me rethink this whole "helper classes are an anti-pattern" thing.
asawyer pointed out a few links in the comments to that question:
Helper classes is an anti-pattern.
While those links go into detail about how helper classes collide with the well-known principles of OOP, some things are still unclear to me.
For example, "Don't repeat yourself": how can you achieve this without creating some sort of helper?
I thought you could derive a certain type and provide some features for it,
but I believe that isn't practical all the time.
Let's take a look at the following example;
please keep in mind I tried not to use any higher language features or language-specific stuff, so this might be ugly, nested, and not optimal...
// Check if the string is full of whitespaces
bool allWhiteSpace = true;
if (input == null || input.Length == 0)
    allWhiteSpace = false;
else
{
    foreach (char c in input)
    {
        if (c != ' ')
        {
            allWhiteSpace = false;
            break;
        }
    }
}
Let's create a bad helper class called StringHelper; the code becomes shorter:
bool isAllWhiteSpace = StringHelper.IsAllWhiteSpace(input);
Since this isn't the only place we need this check, I guess "Don't repeat yourself" is fulfilled here.
But how do we achieve this without a helper, considering that this piece of code isn't bound to a single class?
Do we need to inherit string and call it BetterString?
bool allWhiteSpace = better.IsAllWhiteSpace;
Or do we create a class StringChecker?
StringChecker checker = new StringChecker();
bool allWhiteSpace = checker.IsAllWhiteSpace(input);
So how do we achieve this?
Some languages (e.g. C#) allow the use of extension methods. Do they count as helper classes as well? I tend to prefer those over helper classes.
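For comparison, in a language with free functions and modules (a TypeScript sketch, not from the post), the same check can live as a plain exported function, with no wrapper class at all:

// string-utils.ts
export function isAllWhiteSpace(input: string | null): boolean {
    // Mirrors the original: null or empty input counts as NOT all whitespace.
    if (input == null || input.length === 0) {
        return false;
    }
    return input.split("").every(c => c === " ");
}

Callers import the function directly; there is no stateless StringHelper class to name or instantiate.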
Helper classes may be bad (there are always exceptions) because a well-designed OO system will have clearly understood responsibilities for each class. For example, a List is responsible for managing an ordered list of items. Some people new to OOD who discover that a class has methods to do stuff with its data sometimes ask "why doesn't List have a displayOnGUI method (or some similar such thing)?". The answer is that it is not the responsibility of List to be concerned with the GUI.
If you call a class a "Helper" it really doesn't say anything about what that class is supposed to do.
A typical scenario is that there will be some class and someone decides it is getting too big and carves it up into two smaller classes, one of which is a helper. It often isn't really clear what methods should go in the helper and what methods should stay in the original class: the responsibility of the helper is not defined.
It is hard to explain unless you are experienced with OOD, but let me show by an analogy. By the way, I find this analogy extremely powerful:
Imagine you have a large team in which there are members with different job designations: e.g, front-end developers, back-end developers, testers, analysts, project managers, support engineers, integration specialists, etc. (as you like).
Each role you can think of as a class: it has certain responsibilities and the people fulfilling those responsibilities hopefully have the necessary knowledge to execute them. These roles will interact in a similar way to classes interacting.
Now imagine it is discovered that the back-end developers find their job too complicated. You can hire more if it is simply a throughput problem, but perhaps the problem is that the task requires too much knowledge across too many domains. It is decided to split up the back-end developer role by creating a new role, and maybe hire new people to fill it.
How helpful would it be if that new job description was "Back-end developer helper"? Not very ... the applicants are likely to be given a haphazard set of tasks, they may get confused about what they are supposed to do, their co-workers may not understand what they are supposed to do.
More seriously, the knowledge of the helpers may have to be exactly the same as the original developers as we haven't really narrowed down the actual responsibilities.
So "Helper" isn't really saying anything in terms of defining what the responsibilities of the new role are. Instead, it would be better to split-off, for example, the database part of the role, so "Back-end developer" is split into "Back-end developer" and "Database layer developer".
Calling a class a helper has the same problem and the solution is the same solution. You should think more about what the responsibilities of the new class should be. Ideally, it should not just shave-off some methods, but should also take some data with it that it is responsible for managing and thereby create a solution that is genuinely simpler to understand piece by piece than the original large class, rather than simply placing the same complicated logic in two different places.
I have found in some cases that a helper class is well designed, but all it lacks is a good name. In this case, calling it "Builder" or "Formatter" or "Context" instead of "Helper" immediately makes the solution far easier to understand.
Disclaimer: the following answer is based on my own experience and I'm not making a point of right and wrong.
IMHO, Helper classes are neither good nor bad, it all depends on your business/domain logic and your software architecture.
Here's Why:
Let's say that we need to implement the whitespace idea you proposed. So first I would ask myself:
When would I need to check against whitespaces?
Hence, imagine the following scenario: a blogging system with Users, Posts, Comments. Thus, I would have three classes:
class User {}
class Post {}
class Comment {}
Each class would have some fields of string type. Anyway, I would need to validate these fields, so I would create something like:
class UserValidator {}
class PostValidator {}
class CommentValidator {}
and I would place my validation policies in those three classes. But WAIT! All of the aforementioned classes need a check against null or all whitespaces? Hmmm...
The best solution is to take it higher up the tree and put it in some parent class called Validator:
class Validator {
    // some code
    bool is_all_whitespaces() { ... }
}
Whether you make is_all_whitespaces() abstract (with class Validator being abstract too) or turn this into an interface is another issue, and it mostly depends on your way of thinking.
Back to the point: in this case I would have my classes (for the sake of the example) look like:
class UserValidator extends Validator {}
class PostValidator extends Validator {}
class CommentValidator extends Validator {}
In this case I really don't need the helper at all. But let's say you have a function called multiD_array_group_by_key
and you are using it in different places, but it doesn't fit in any OOP-structured place: you can put it in some ArrayHelper, but by doing that you are a step back from being fully object-oriented.

Should Interface implementations be independent

I have come across some legacy code that has raised all my hackles as an object-oriented programmer.
Here's the pattern used often:
An interface has two implementations and one implementation calls a method of the other.
Now, I think it should be refactored so that the implementations do not know about each other. It is simple enough HOW to do it. What I cannot figure out clearly - and am hoping the good people of SO will help me with - is WHY.
I can see the theoretical reason - it is terrible object-oriented design. But I am playing devil's advocate here and asking: what is the practical disadvantage of two implementations having knowledge of each other? Why should time and money be spent to get rid of this (in my mind) anti-pattern?
Any info or links on this will be appreciated.
I can see the theoretical reason - it is terrible object-oriented design.
Why? It sounds entirely reasonable to me.
For example, suppose I want to decorate every call - e.g. to add statistics for how often a call has been made, or add some authorization check etc. It makes sense to keep that decoration separate from the real implementation, and just delegate:
public class DecoratedFoo : IFoo
{
    private readonly IFoo original;

    public DecoratedFoo(IFoo original)
    {
        this.original = original;
    }

    public string Bar() // Defined in IFoo
    {
        // Update statistics here, or whatever
        return original.Bar();
    }
}
Why do you view that separation of concerns as "terrible object-oriented design"? Even if the decorated class knows about a specific implementation of IFoo and calls members which aren't part of IFoo itself in order to make things more efficient, it doesn't seem particularly awful to me. It's just one class knowing about another, and they happen to implement the same interface. They're more tightly coupled than the example above, which only knows about IFoo, but it's still not "terrible".
There is nothing wrong with an implementation1 of interface1 being aware of or interacting with implementation2 of interface1.
I think you have just spotted an intended or unintended implementation of the Proxy pattern:
http://en.wikipedia.org/wiki/Proxy_pattern
Hope this helps :)
My thoughts on this:
Suppose in due course you retire one implementation. If you have kept it separate, there is no change to the other and you don't need to re-test it. If there is no separation, you need to spend time separating and then testing the other implementation.
It's always cleaner to have a single responsibility.
That method of the "other implementation" that the first implementation calls is what I would call a library function. Put it in a separate module/file/project/whatever (depending on your language/dev environment) and have both implementations include it and use it from there.
There is absolutely nothing wrong with two implementations of some interface containing common code, but of course that common code should probably be separated from each implementation, so that you can load either into your program without having to load the other.

Is Inheritance really needed?

I must confess I'm somewhat of an OOP skeptic. Bad pedagogical and work experiences with object orientation didn't help, so I converted into a fervent believer in Visual Basic (the classic one!).
Then one day I found out C++ had changed and now had the STL and templates. I really liked that! It made the language useful. Then another day MS decided to apply facial surgery to VB, and I really hated the end result, both for the gratuitous changes (will using "End While" instead of "Wend" make me a better developer? Why not drop "Next" for "End For" too? Why force the getter alongside the setter? Etc.) and for all the Java features I found useless (inheritance, for instance, and the concept of a hierarchical framework).
And now, several years afterwards, I find myself asking this philosophical question: Is inheritance really needed?
The Gang of Four say we should favor object composition over inheritance. And after thinking about it, I cannot find anything you can do with inheritance that you cannot do with object aggregation plus interfaces. So I'm wondering: why do we even have it in the first place?
Any ideas? I'd love to see an example where inheritance is definitely needed, or where using inheritance instead of composition+interfaces leads to a simpler and easier-to-modify design. In former jobs I've found that if you need to change the base class, you also have to modify almost all the derived classes, because they depended on the parent's behaviour. And if you make the base class's methods virtual... then not much code sharing takes place :(
Otherwise, when I finally create my own programming language (a long-unfulfilled desire I've found most developers share), I'd see no point in adding inheritance to it...
Really, really short answer: no. Inheritance is not needed, because only byte code is truly needed. But obviously, byte code or assembly is not a practical way to write your program. OOP is not the only paradigm for programming. But I digress.
I went to college for computer science in the early 2000s, when inheritance (is-a), composition (has-a), and interfaces (does-a) were taught on an equal footing. Because of this, I use very little inheritance, because composition often suits the problem better. This was stressed because many of the professors had seen bad code (along the lines of what you have described) caused by abuse of inheritance.
Regardless of whether you create a language with or without inheritance, can you create a programming language which prevents bad habits and bad design decisions?
I think asking for situations where inheritance is really needed misses the point a bit. You can fake inheritance by using an interface and some composition. This doesn't mean inheritance is useless: you could do anything you did in VB6 in assembly code with some extra typing, and that doesn't mean VB6 was useless.
I usually just start using an interface. Sometimes I notice I actually want to inherit behaviour. That usually means I need a base class. It's that simple.
Inheritance defines an "Is-A" relationship.
class Point(object):
    # some set of features: attributes, methods, etc.
    pass

class PointWithMass(Point):
    # an additional feature: mass
    pass
Above, I've used inheritance to formally declare that PointWithMass is a Point.
There are several ways to handle object p1 being a PointWithMass as well as a Point. Here are two.
Have a reference from the PointWithMass object p1 to some Point object, p1-friend. The p1-friend has the Point attributes. When p1 needs to engage in Point-like behavior, it has to delegate the work to its friend.
Rely on language inheritance to assure that all features of Point are also applicable to my PointWithMass object, p1. When p1 needs to engage in Point-like behavior, it already is a Point object and can just do what needs to be done.
I'd rather not manage the extra objects floating around to assure that all superclass features are part of a subclass object. I'd rather have inheritance to be sure that each subclass is an instance of its own class, plus an instance of all its superclasses, too.
Edit.
For statically-typed languages, there's a bonus. When I rely on the language to handle this, a PointWithMass can be used anywhere a Point was expected.
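A sketch of that substitutability bonus (in TypeScript rather than the Python above; types and names are mine):

class Point {
    constructor(public x: number, public y: number) {}
}

class PointWithMass extends Point {
    constructor(x: number, y: number, public mass: number) {
        super(x, y);
    }
}

function distanceFromOrigin(p: Point): number {
    return Math.hypot(p.x, p.y);
}

// A PointWithMass simply is a Point - no delegation, no friend object:
distanceFromOrigin(new PointWithMass(3, 4, 10)); // 5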
For really obscure abuse of inheritance, read about C++'s strange "composition through private inheritance" quagmire. See Any sensible examples of creating inheritance without creating subtyping relations? for some further discussion on this. It conflates inheritance and composition; it doesn't seem to add clarity or precision to the resulting code; it only applies to C++.
The GoF (and many others) recommend only that you favor composition over inheritance. If you have a class with a very large API and you only want to add a very small number of methods to it, leaving the base implementation alone, I would find it inappropriate to use composition: you'd have to re-implement all of the public methods of the encapsulated class just to return their values. This is a waste of time (programmer and CPU) when you can simply inherit all of this behavior and spend your time concentrating on the new methods.
So, to answer your question, no you don't absolutely need inheritance. There are, however, many situations where it's the right design choice.
The problem with inheritance is that it conflates the issue of subtyping (asserting an is-a relationship) and code reuse (e.g., private inheritance is for reuse only).
So no, it's an overloaded word that we don't need. I'd prefer subtyping (using an 'implements' keyword) and import (kind of like Ruby does it in class definitions).
Inheritance lets me push off a whole bunch of bookkeeping onto the compiler because it gives me polymorphic behavior for object hierarchies that I would otherwise have to create and maintain myself. Regardless of how good a silver bullet OOP is, there will always be instances where you want to employ a certain type of behavior because it just makes sense to do. And ultimately, that's the point of OOP: it makes a certain class of problems much easier to solve.
The downside of composition is that it may disguise the relatedness of elements, and it may be harder for others to understand. With, say, a 2D Point class and the desire to extend it to higher dimensions, you would presumably have to add (at least) a Z getter/setter, modify getDistance(), and maybe add a getVolume() method. So you have the Objects 101 elements: related state and behavior.
A developer with a compositional mindset would presumably have defined a getDistance(x, y) -> double method and would now define a getDistance(x, y, z) -> double method. Or, thinking generally, they might define a getDistance(lambdaGeneratingACoordinateForEveryAxis()) -> double method. Then they would probably write createTwoDimensionalPoint() and createThreeDimensionalPoint() factory methods (or perhaps createNDimensionalPoint(n) ) that would stitch together the various state and behavior.
A developer with an OO mindset would use inheritance. Same amount of complexity in the implementation of domain characteristics, less complexity in terms of initializing the object (constructor takes care of it vs. a Factory method), but not as flexible in terms of what can be initialized.
Now think about it from a comprehensibility / readability standpoint. To understand the composition, one has a large number of functions that are composed programmatically inside another function. So there's little in terms of static code 'structure' (files and keywords and so forth) that makes the relatedness of Z and distance() jump out. In the OO world, you have a great big flashing red light telling you the hierarchy. Additionally, you have an essentially universal vocabulary to discuss structure, widely known graphical notations, a natural hierarchy (at least for single inheritance), etc.
Now, on the other hand, a well-named and constructed Factory method will often make explicit more of the sometimes-obscure relationships between state and behavior, since a compositional mindset facilitates functional code (that is, code that passes state via parameters, not via this ).
In a professional environment with experienced developers, the flexibility of composition generally trumps its more abstract nature. However, one should never discount the importance of comprehensibility, especially in teams that have varying degrees of experience and/or high levels of turnover.
Inheritance is an implementation decision. Interfaces almost always represent a better design, and should usually be used in an external API.
Why write a lot of boilerplate code forwarding method calls to a composed member object when the compiler will do it for you with inheritance?
This answer to another question summarises my thinking pretty well.
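To illustrate the boilerplate being compared, a hedged sketch with invented names:

interface Engine {
    start(): void;
    stop(): void;
}

class BaseEngine implements Engine {
    start() { console.log("started"); }
    stop() { console.log("stopped"); }
}

// Composition: every call is forwarded by hand and must be kept in sync
// whenever Engine grows a method:
class CarByComposition implements Engine {
    constructor(private engine: Engine) {}
    start() { this.engine.start(); }
    stop() { this.engine.stop(); }
}

// Inheritance: the forwarding is implicit and maintained by the compiler:
class CarByInheritance extends BaseEngine {}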
Does anyone else remember all of the OO purists going ballistic over the COM implementation of "containment" instead of "inheritance"? It achieved essentially the same thing, but with a different kind of implementation. This reminds me of your question.
I strictly try to avoid religious wars in software development. ("vi" OR "emacs"... when everybody knows it's "vi"!) I think they are a sign of small minds. Comp sci professors can afford to sit around and debate these things; I'm working in the real world and couldn't care less. All of this stuff is simply an attempt at giving useful solutions to real problems. If they work, people will use them. The fact that OO languages and tools have been commercially available on a wide scale for going on 20 years is a pretty good bet that they are useful to a lot of people.
There are a lot of features in a programming language that are not really needed. But they are there for a variety of reasons that all basically boil down to reusability and maintainability.
All a business cares about is producing (quality of course) cheaply and quickly.
As a developer you help do this is by becoming more efficient and productive. So you need to make sure the code you write is easily reusable and maintainable.
And, among other things, this is what inheritance gives you - the ability to reuse without reinventing the wheel, as well as the ability to easily maintain your base object without having to perform maintenance on all similar objects.
There are lots of useful usages of inheritance, and probably just as many that are less useful. One of the useful ones is the stream class.
You have a method that should be able to stream data. By using the stream base class as the input to the method, you ensure that your method can be used to write to many kinds of streams without change: to the file system, over the network, with compression, etc.
No.
For me, OOP is mostly about encapsulation of state and behavior, and polymorphism.
And that's it. But if you want static type checking, you'll need some way to group different types, so the compiler can check while still allowing you to use a new type in place of another, related type. Creating a hierarchy of types lets you use the same concept (classes) both for types and for groups of types, so it's the most widely used form.
But there are other ways. I think the most general would be duck typing, and, closely related, prototype-based OOP (which isn't inheritance in fact, but is usually called prototype-based inheritance).
Depends on your definition of "needed". No, there is nothing that is impossible to do without inheritance, although the alternative may require more verbose code, or a major rewrite of your application.
But there are definitely cases where inheritance is useful. As you say, composition plus interfaces together cover almost all cases, but what if I want to supply default behavior? An interface can't do that; a base class can. Sometimes what you want to do is really just override individual methods: not reimplement the class from scratch (as with an interface), but just change one aspect of it. Or you may not want all members of the class to be overridable: perhaps you have only one or two member methods you want the user to override, and the rest, which call these (and perform validation and other important tasks before and after the user-overridden methods), are specified once and for all in the base class and cannot be overridden.
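A sketch of that "override one step only" shape (the names are mine; this is the classic Template Method form):

abstract class Report {
    // The skeleton is fixed: validation and the footer cannot be overridden.
    print(): void {
        this.validate();
        this.renderBody(); // the one step subclasses must supply
        this.footer();
    }
    protected abstract renderBody(): void;
    private validate(): void { /* checks performed for every subclass */ }
    private footer(): void { console.log("-- end of report --"); }
}

class SalesReport extends Report {
    protected renderBody(): void {
        console.log("sales figures...");
    }
}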
Inheritance is often used as a crutch by people who are too obsessed with Java's narrow definition of (and obsession with) OOP, though, and in most cases I agree it's the wrong solution - as if the deeper your class hierarchy, the better your software.
Inheritance is a good thing when the subclass really is the same kind of object as the superclass. E.g. if you're implementing the Active Record pattern, you're attempting to map a class to a table in the database, and instances of the class to a row in the database. Consequently, it is highly likely that your Active Record classes will share a common interface and implementation of methods like: what is the primary key, whether the current instance is persisted, saving the current instance, validating the current instance, executing callbacks upon validation and/or saving, deleting the current instance, running a SQL query, returning the name of the table that the class maps to, etc.
It also seems from how you phrase your question that you're assuming that inheritance is single but not multiple. If we need multiple inheritance, then we have to use interfaces plus composition to pull off the job. To put a fine point on it, Java assumes that implementation inheritance is singular and interface inheritance can be multiple. One need not go this route: e.g. C++ and Ruby permit multiple inheritance for your implementation and your interface. That said, one should use multiple inheritance with caution (i.e. keep your abstract classes virtual and/or stateless).
That said, as you note, there are too many real-life class hierarchies where the subclasses inherit from the superclass out of convenience rather than bearing a true is-a relationship. So it's unsurprising that a change in the superclass will have side-effects on the subclasses.
Not needed, but useful.
Each language has its own methods for writing less code. OOP sometimes gets convoluted, but I think that is the responsibility of the developers; the OOP platform is useful and sharp when it is well used.
I agree with everyone else about the necessary/useful distinction.
The reason I like OOP is because it lets me write code that's cleaner and more logically organized. One of the biggest benefits comes from the ability to "factor-up" logic that's common to a number of classes. I could give you concrete examples where OOP has seriously reduced the complexity of my code, but that would be boring for you.
Suffice it to say, I heart OOP.
Absolutely needed? No.
But think of lamps. You can create a new lamp from scratch each time you make one, or you can take properties from the original lamp and make all sorts of new styles of lamp that share the same properties as the original, each with its own style.
Or you can make a new lamp from scratch every time, or tell people to look at things a certain way to see the light, and so on.
Not required, but nice :)
Thanks to all for your answers. I maintain my position that, strictly speaking, inheritance isn't needed, though I believe I found a new appreciation for this feature.
Something else: in my job experience, I have found inheritance leads to simpler, clearer designs when it's brought in late in the project, after it's noticed that a lot of the classes have much in common and you create a base class. In projects where a grand schema was created from the very beginning, with a lot of classes in an inheritance hierarchy, refactoring is usually painful and difficult.
Seeing some answers mentioning something similar makes me wonder if this might not be exactly how inheritance is supposed to be used: ex post facto. It reminds me of Stepanov's quote: "you don't start with axioms, you end up with axioms after you have a bunch of related proofs". He's a mathematician, so he ought to know something.
The biggest problem with interfaces is that they cannot be changed. Make an interface public, then change it (add a new method to it), and you break a million applications all around the world, because they implemented your interface but not the new method. The apps may not even start; the VM may refuse to load them.
Use a base class (not abstract) that other programmers can inherit from (and override methods as needed); then add a method to it. Every app using your class will still work; the method just won't be overridden by anyone, but since you provide a base implementation, that one will be used, and it may work just fine for all subclasses of your class... It may also cause strange behavior, because sometimes overriding it would have been necessary; okay, that might be the case, but at least all those million apps in the world will still start up!
I would rather have my Java application still running after updating the JDK from 1.6 to 1.7 with some minor bugs (that can be fixed over time) than not have it run at all (forcing an immediate fix, or it will be useless to people).
I found this Q&A very useful, and many have answered it well, but I wanted to add:
1: Ability to define an abstract interface - e.g., for plugin developers. Of course, you can use function pointers, but this is better and simpler.
2: Inheritance helps model types very close to their actual relationships. Sometimes a lot of errors get caught at compile time, because you have the right type hierarchy. For instance, shape <-- triangle (let's say there is a lot of code to be reused). You might want to compose Triangle with a Shape object, but Shape is an incomplete type; inserting dummy implementations like double getArea() { return -1; } will do, but you are opening up room for error: that return -1 can get executed some day!
3: void func(B* b); ... func(new D()); Implicit type conversion gives great notational convenience, since a Derived is a Base. I remember having read Stroustrup saying that he wanted to make classes first-class citizens just like fundamental data types (hence overloading operators etc.). Implicit conversion from Derived to Base behaves just like an implicit conversion from a data type to a broader compatible one (short to int).
Inheritance and Composition have their own pros and cons.
Refer to this related SE question on pros of inheritance and cons of composition.
Prefer composition over inheritance?
Have a look at the example in this documentation link:
The example shows different use cases of overriding, using inheritance as a means to achieve polymorphism.
In the following, inheritance is used to present a particular property for all of several specific incarnations of the same kind of thing. In this case, GeneralPresentation has properties that are relevant to every "presentation" (the data passed to an MVC view). The master page is the only thing using it and expects a GeneralPresentation, while the specific views expect more info, tailored to their needs.
public abstract class GeneralPresentation
{
    public GeneralPresentation()
    {
        MenuPages = new List<Page>();
    }
    public IEnumerable<Page> MenuPages { get; set; }
    public string Title { get; set; }
}
public class IndexPresentation : GeneralPresentation
{
    public IndexPresentation() { IndexPage = new Page(); }
    public Page IndexPage { get; set; }
}
public class InsertPresentation : GeneralPresentation
{
    public InsertPresentation()
    {
        InsertPage = new Page();
        ValidationInfo = new PageValidationInfo();
    }
    public PageValidationInfo ValidationInfo { get; set; }
    public Page InsertPage { get; set; }
}