When to use inheritance? (OOP)

My friend and I had a little debate.
I had to implement a "browser process watcher" class which raises an event whenever the browser being watched (say, Internet Explorer) is running.
We created a "Process watcher" class, and here the debate starts:
He said the constructor should only accept strings (like "iexplore.exe"), and I said we should inherit from "Process watcher" to create a "browser watcher" which accepts an enum for the currently used browser, which the constructor would "translate" into "iexplore.exe".
He said we should use a utility function to act as the translator.
I know both ways are valid and good, but I wonder what the pros and cons of each are, and which is suitable in our case.
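To make the two options concrete, here is a rough sketch (class, enum and member names are only illustrative, not our real code):

using System;
using System.Collections.Generic;

// Option A: one concrete class that takes a process name, plus a utility translator.
public class ProcessWatcher
{
    public ProcessWatcher(string processName)
    {
        ProcessName = processName;
        // here we would start polling (or subscribe to WMI events) for processName
    }

    public string ProcessName { get; private set; }

    public event EventHandler ProcessStarted = delegate { };

    protected void OnProcessStarted()
    {
        ProcessStarted(this, EventArgs.Empty);
    }
}

public enum Browser { InternetExplorer, Firefox, Chrome }

public static class BrowserProcessNames
{
    private static readonly Dictionary<Browser, string> Names =
        new Dictionary<Browser, string>
        {
            { Browser.InternetExplorer, "iexplore.exe" },
            { Browser.Firefox, "firefox.exe" },
            { Browser.Chrome, "chrome.exe" },
        };

    public static string ToProcessName(Browser browser)
    {
        return Names[browser];
    }
}

// Option B: a subclass whose constructor translates the enum for you.
public class BrowserWatcher : ProcessWatcher
{
    public BrowserWatcher(Browser browser)
        : base(BrowserProcessNames.ToProcessName(browser))
    {
    }
}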

Lately I've been taking the approach of "Keep it simple now, and refactor later if you need to extend it".
What you're doing right now seems pretty simple. You only really have one case that you're trying to handle. So I'd say take the simpler approach for now. In the end, if you never have to make another kind of watcher then you'll avoid the extra complexity. However, code it in a way that will make it easier to refactor later if you need to.
In the future, if you find you need another type of watcher, spend the effort then to refactor it into an inheritance (or composition, or whatever other pattern you want to follow). If your initial code is done right the refactoring should be fairly easy, so you're not really adding much extra work.
I've found this approach works fairly well for me. In the cases where I really didn't need inheritance the code stays simple. But when I really do need it I can add it in without any real problems.

Other things being equal I prefer the simpler solution (a single concrete class which takes a string as a constructor parameter) to the more complicated one (using a base class and a subclass).
Inheritance is appropriate when you want to vary behaviour: if the browser watcher will do something that the ordinary process watcher doesn't. But if you only want to vary the value of the data, then just vary the data.

If you have no other use for ProcessWatcher than to serve as the parent of BrowserWatcher, then you shouldn't create it. If other Watchers are being implemented that have shared functionality that can be placed in ProcessWatcher, then you should (both are "is-a" relationships, so Rob's criterion is met).
It really is as simple as that. Arguing that some day you'll have other watchers is not an argument in favor of creating a separate class. It is a mental tic that you should lose ASAP.

Inheritance should only ever be used to implement an "isa" relationship.
As you can say that a "browser watcher" is a specific instance of a "process watcher" then inheritance is suitable for this architecture.
Hence, for me, having the identity of what you are watching passed through as a part of the construction of the browser watcher implementation of the "process watcher" is definitely the way to go.
Edit: More specifically, inheritance is for when you want to specialise behaviour. For example, most animals make a sound, but you could not hope to specify which sound to make in a class called Animal; you must wait for the specialisation.
So then we have Horse class providing a "neigh" for its sound, a Dog class providing a "bark" for its sound, etc.
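A minimal C# sketch of that idea (the class names come from the example above; the method name is illustrative):

public abstract class Animal
{
    // The base class declares the behaviour but cannot sensibly implement it.
    public abstract string MakeSound();
}

public class Horse : Animal
{
    public override string MakeSound() { return "neigh"; }
}

public class Dog : Animal
{
    public override string MakeSound() { return "bark"; }
}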
HTH
cheers,
Rob

Depends on what use case you have or what god you follow.
I don't say "inheritance is evil" but generally I follow the principle "Favor composition over inheritance" to avoid excessive class hierarchies.

I agree that in most cases, simplicity over complexity is a good strategy, as long as your simplicity is not too short-sighted (ref. Herms, write code in such a way that you can easily re-factor later).
However, I also know how difficult it can be to shut up that bug in your ear that encourages a more thorough design. If you still want to favor inheritance without necessarily thinking in terms of "base class" and "subclass", you can simply define an interface (e.g. IProcessWatcher) which is implemented by ProcessWatcher. When you use the ProcessWatcher object, refer to it in terms of the interface, so that if you later decide to create a BrowserWatcher (or any other kind of ProcessWatcher) you can do so without forcing it to descend from ProcessWatcher, as long as it implements the IProcessWatcher interface.
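A hedged sketch of what that could look like (the interface members are assumptions, not a prescribed design):

public interface IProcessWatcher
{
    string ProcessName { get; }
    void Start();
    void Stop();
}

public class ProcessWatcher : IProcessWatcher
{
    public ProcessWatcher(string processName) { ProcessName = processName; }

    public string ProcessName { get; private set; }

    public void Start() { /* begin watching for ProcessName */ }
    public void Stop()  { /* stop watching */ }
}

// Client code depends only on the interface, so a future BrowserWatcher
// just has to implement IProcessWatcher; it does not have to inherit
// from ProcessWatcher.
public class WatcherConsumer
{
    private readonly IProcessWatcher watcher;

    public WatcherConsumer(IProcessWatcher watcher) { this.watcher = watcher; }

    public void Begin() { watcher.Start(); }
}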
Warning: Proceed with caution. It gets tempting to want to define an interface for every single object, and let's face it, that's just ridiculous. =)
Ultimately, you need to find something that you're both comfortable with, since you will both have to maintain this code, and I think this might be a nice compromise, rather than simply "Inheritance or No inheritance".
Good luck!

In a very simple sentence, one can say:
Use inheritance (subclassing) when the subclass has different behaviour (not just different properties) than the superclass.

Related

Could override be deprecated?

There is a Design Principle that says Favor composition over inheritance and its advertised benefit is that it simplifies design. Let's agree on that as background for this question.
So, could override be deprecated? Could we, in theory, get rid of it for good?
Let's be a bit overzealous about the above-mentioned design principle and take it to the extreme: composition all the way. One reason should be enough for now: override abuse.
One question arises: are we, programmers, going to lose something? Is any power lost in trying to prevent some possible abuse?
So, what applications are there for override and can they be achieved otherwise? Should they?
Not only is this a completely radical and impractical proposal, it's not a particularly compelling one. Just because a feature gets abused doesn't mean that it should be removed entirely. People have been abusing all sorts of things for a very long time, but that hardly implies that they don't serve a useful purpose when used correctly.
Design patterns are one thing; designing an intentionally limited language to conform with your ideal notion of a good design pattern is quite another. To my mind, it's an exercise in futility. Programmers will still find something to abuse.
And I take issue with the central assumption that any use of override is inappropriate or abusive. There are lots of cases where you want to take advantage of inheritance implying an is-a relationship. Sure, this model doesn't fit the real world 100% of the time, but there are plenty of times that it does.
The Animal and Shape class examples that you read about in textbooks might be a bit contrived, but I frequently use inheritance in real-world applications.
That's not to imply that I disagree with the sentiment that one should generally or when in doubt, favor composition over inheritance. But that's not saying that inheritance is bad and should never be used.
If you remove inheritance altogether you remove a significant feature of OOP design.
Using inheritance allows you to use an "is a" design, which has a strong meaning in OOP design, and of course saves code redundancy.
If you used only encapsulation you'd have to either expose the members (which isn't always what you want, and raises design complexity because of the amount of stuff the programmer needs to know about), or write wrapper methods that call the members' methods (which is redundant).
Besides that, assuming you know the difference between overriding and hiding, you can see that most OOP languages will choose strict overriding when given the choice.
This is because overriding is usually more intuitive than hiding.
So, if you remove overriding but still allow inheritance, you are left with hiding. That usually leads to many runtime errors and unexpected results due to type conflicts.
Furthermore, you won't be able to have things like an array or list of base-class pointers that point to a lot of different derived classes. Without overrides you can't call the specific derived-class method; the same base-class method will be called for all of them.
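To illustrate that last point, here is a small hedged sketch (the types and members are invented): a single list of base-class references where each call dispatches to the derived override.

using System;
using System.Collections.Generic;

public abstract class Shape
{
    public abstract double Area();
}

public class Circle : Shape
{
    private readonly double radius;
    public Circle(double radius) { this.radius = radius; }
    public override double Area() { return Math.PI * radius * radius; }
}

public class Square : Shape
{
    private readonly double side;
    public Square(double side) { this.side = side; }
    public override double Area() { return side * side; }
}

public static class Program
{
    public static void Main()
    {
        // One list of base-class references; each call runs the derived override.
        var shapes = new List<Shape> { new Circle(1.0), new Square(2.0) };
        foreach (Shape s in shapes)
        {
            Console.WriteLine(s.Area());
        }
    }
}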
I've added a response on behalf of astander, extracted from his link (hope you don't mind):
For example, one advantage with inheritance is that it is easier to use than composition. However, that ease of use comes at the cost that it is harder to reuse, because the subclass is tied to the parent class.
One advantage of composition is that it is more flexible because behavior can be swapped at runtime. One disadvantage of composition is that the behavior of the system may be harder to understand just by looking at the source. These are all factors one should think about when applying composition over inheritance.
I'm always using polymorphism. I always seem to have a bunch of objects with some common concept behind them and a lot of code that is interested in that concept--that is, they care about Animals, not Lions and Tigers and Bears or even Carnivores. Interfaces often work better for this than superclasses, so I suppose I could get by without subclassing. (Are interfaces okay when subclassing is not?) However, I have often found that a lot of classes using an interface have identical code for the interface methods. Changing the interface to a superclass can let me get rid of a lot of duplicate code. The other situation I find myself in is where a large, complex class does what I need except for one teeny, tiny little thing. With subclassing, I can create a new class that does exactly what I need in just a few lines.
There may be a language component to this debate. When I'm writing in Java I subclass at a furious rate. When I'm writing in C# I think long and hard before overriding anything or even using interfaces. I'm not sure why and it may have more to do with the type of work I do in those languages than the languages themselves. But working in C#, I am quite sympathetic to this idea, while when working in Java...well, I'd have to toss almost all my Java code if I couldn't override.

When do you need to create abstractions in the form of interfaces?

When do you encourage programming against an interface and not directly to a concrete class?
A guideline that I follow is to create abstractions whenever code needs to cross a logical/physical boundary, most especially when infrastructure-related concerns are involved.
Another checkpoint would be whether a dependency is likely to change in the future, due to possible additional concerns (such as caching, transactional awareness, or invoking a web service instead of in-process execution), or whether such dependencies have direct references to infrastructure integration points.
If code depends on something that does not require control to cross a logical/physical boundary, I more or less don't create abstractions to interact with those.
Am I missing anything?
Also, use interfaces when
Multiple objects will need to be acted upon in a particular fashion, but are not fundamentally related. Perhaps many of your business objects access a particular utility object, and when they do they need to give a reference of themselves to that utility object so the utility object can call a particular method. Have that method in an interface and pass that interface to that utility object.
Passing around interfaces as parameters can be very helpful in unit testing. Even if you have just one type of object that sports a particular interface, and hence don't really need a defined interface, you might define/implement an interface solely to "fake" that object in unit tests.
Related to the first two bullets, check out the Observer pattern and Dependency Injection. I'm not saying to implement these patterns, but they illustrate types of places where interfaces are really helpful.
Another twist on this is for implementing a couple of the SOLID principles, the Open/Closed Principle and the Interface Segregation Principle. Like the previous bullet, don't get stressed about strictly implementing these principles everywhere (right away, at least), but use these concepts to help move your thinking away from just what objects go where, toward thinking more about contracts and dependencies.
In the end, let's not make it too complicated: we're in a strongly typed world in .NET. If you need to call a method or set a property but the object you're passing/using could be fundamentally different, use an interface.
I would add that if your code is not going to be referenced by another library (for a while at least), then the decision of whether to use an interface in a particular situation is one that you can responsibly put off. The "extract interface" refactoring is easy to do these days. In my current project, I've got an object being passed around that I'm thinking maybe I should switch to an interface; I'm not stressing about it.
Interface abstractions are convenient when doing unit testing. They help when mocking test objects, and are very useful in TDD for developing without actually using data from your database.
If you don't need any features of the class that aren't found in the Interface...then why not always prefer the Interface implementation?
It will make your code easier to modify in the future and easier to test (mocking).
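For instance (a hedged sketch; the repository and its members are hypothetical), code written against an interface can be exercised in a unit test with a hand-rolled fake instead of a real database:

using System.Collections.Generic;

public interface IOrderRepository
{
    IList<string> GetOrderIds();
}

public class OrderReport
{
    private readonly IOrderRepository repository;

    public OrderReport(IOrderRepository repository) { this.repository = repository; }

    public int CountOrders() { return repository.GetOrderIds().Count; }
}

// In a unit test, a trivial fake replaces the real database-backed repository.
public class FakeOrderRepository : IOrderRepository
{
    public IList<string> GetOrderIds()
    {
        return new List<string> { "A-1", "A-2" };
    }
}

// var report = new OrderReport(new FakeOrderRepository());
// report.CountOrders() returns 2 without touching a database.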
You have the right idea already. I would only add a couple of notes to this...
First, abstraction does not mean "interface". For example, a "connection string" is an abstraction even though it's just a string... it's not about the 'type' of the thing in question, it's about the intention of use for that thing.
And secondly, if you are doing test automation of any kind, look for the pain and friction that are exposed by writing the tests. If you find yourself having to set up too many external conditions for a test, it's a sign that you need a better abstraction between the thing you're testing and the things it interacts with.
I think you've said it pretty well. Much of this will be a stylistic thing. There are open source projects I've looked at where everything has an interface and an implementation, and it's kind of frustrating, but it might make iterative development a little easier, since any object's implementation can break but dummies will still work. But honestly, I can dummy any class that doesn't overuse the final keyword by inheritance.
I would add to your list this: anything which can be thought of as a black box should be abstracted. This includes some of the things you've mentioned, but it also includes hairy algorithms, which are likely to have multiple useful implementations with different advantages for different situations.
Additionally, interfaces come in handy very often with composite objects. That's the only way something like java's swing library gets anything done, but it can also be useful for more mundane objects. (I personally like having an interface like ValidityChecker with ways to and-compose or or-compose subordinate ValidityCheckers.)
Most of the useful things that come with interface passing have already been said. However, I would add:
Implementing an interface on an object, or later on multiple objects, forces all the implementers to follow an identical pattern for fulfilling the contract with the object. This can be useful when not-so-OOP-experienced programmers are actually writing the implementation code.
In some languages you can add attributes on the interface itself, which can differ in sense and intent from the attributes on the actual implementation.

Is Inheritance really needed?

I must confess I'm somewhat of an OOP skeptic. Bad pedagogical and work experiences with object orientation didn't help, so I became a fervent believer in Visual Basic (the classic one!).
Then one day I found out C++ had changed and now had the STL and templates. I really liked that! It made the language useful. Then another day MS decided to apply facial surgery to VB, and I really hated the end result for the gratuitous changes (will using "end while" instead of "wend" make me a better developer? Why not drop "next" for "end for", too? Why force the getter alongside the setter? Etc.) plus so many Java features which I found useless (inheritance, for instance, and the concept of a hierarchical framework).
And now, several years afterwards, I find myself asking this philosophical question: Is inheritance really needed?
The gang-of-four say we should favor object composition over inheritance. And after thinking of it, I cannot find something you can do with inheritance you cannot do with object aggregation plus interfaces. So I'm wondering, why do we even have it in the first place?
Any ideas? I'd love to see an example of where inheritance would be definitely needed, or where using inheritance instead of composition+interfaces can lead to a simpler and easier to modify design. In former jobs I've found that if you need to change the base class, you also need to modify almost all the derived classes, because they depended on the behaviour of the parent. And if you make the base class's methods virtual... then not much code sharing takes place :(
Else, when I finally create my own programming language (a long unfulfilled desire I've found most developers share), I'd see no point in adding inheritance to it...
Really, really short answer: No. Inheritance is not needed, because only byte code is truly needed. But obviously, byte code or assembly is not a practical way to write your program. OOP is not the only paradigm for programming. But, I digress.
I went to college for computer science in the early 2000s when inheritance (is a), compositions (has a), and interfaces (does a) were taught on an equal footing. Because of this, I use very little inheritance because it is often suited better by composition. This was stressed because many of the professors had seen bad code (along with what you have described) because of abuse of inheritance.
Regardless of creating a language with or without inheritances, can you create a programming language which prevents bad habits and bad design decisions?
I think asking for situations where inheritance is really needed is missing the point a bit. You can fake inheritance by using an interface and some composition. This doesn't mean inheritance is useless. You can do anything you did in VB6 in assembly code with some extra typing; that doesn't mean VB6 was useless.
I usually just start using an interface. Sometimes I notice I actually want to inherit behaviour. That usually means I need a base class. It's that simple.
Inheritance defines an "Is-A" relationship.
class Point(object):
    # Some set of features: attributes, methods, etc.
    pass

class PointWithMass(Point):
    # An additional feature: mass.
    mass = 0.0
Above, I've used inheritance to formally declare that PointWithMass is a Point.
There are several ways to handle object P1 being a PointWithMass as well as Point. Here are two.
Have a reference from PointWithMass object p1 to some Point object p1-friend. The p1-friend has the Point attributes. When p1 needs to engage in Point-like behavior, it needs to delegate the work to its friend.
Rely on language inheritance to assure that all features of Point are also applicable to my PointWithMass object, p1. When p1 needs to engage in Point-like behavior, it already is a Point object and can just do what needs to be done.
I'd rather not manage the extra objects floating around to assure that all superclass features are part of a subclass object. I'd rather have inheritance to be sure that each subclass is an instance of its own class, plus an instance of all its superclasses, too.
Edit.
For statically-typed languages, there's a bonus. When I rely on the language to handle this, a PointWithMass can be used anywhere a Point was expected.
For really obscure abuse of inheritance, read about C++'s strange "composition through private inheritance" quagmire. See Any sensible examples of creating inheritance without creating subtyping relations? for some further discussion on this. It conflates inheritance and composition; it doesn't seem to add clarity or precision to the resulting code; it only applies to C++.
The GoF (and many others) recommend only that you favor composition over inheritance. If you have a class with a very large API, and you only want to add a very small number of methods to it, leaving the base implementation alone, I would find it inappropriate to use composition. You'd have to re-implement all of the public methods of the encapsulated class just to return their values. This is a waste of time (programmer and CPU) when you can just inherit all of this behavior, and spend your time concentrating on new methods.
So, to answer your question, no you don't absolutely need inheritance. There are, however, many situations where it's the right design choice.
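A hedged sketch of that trade-off (the wide class and its members are invented): with composition you forward every method you still need, while with inheritance you only write the new one.

using System;

public class WideBaseClass
{
    public void OperationA() { Console.WriteLine("A"); }
    public void OperationB() { Console.WriteLine("B"); }
    public void OperationC() { Console.WriteLine("C"); }
    // ...imagine dozens more public methods
}

// Composition: every member you still want to expose must be forwarded by hand.
public class WrappedExtended
{
    private readonly WideBaseClass inner = new WideBaseClass();

    public void OperationA() { inner.OperationA(); }
    public void OperationB() { inner.OperationB(); }
    public void OperationC() { inner.OperationC(); }
    public void NewOperation() { Console.WriteLine("new"); }
}

// Inheritance: only the new method is written; the rest comes along for free.
public class InheritedExtended : WideBaseClass
{
    public void NewOperation() { Console.WriteLine("new"); }
}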
The problem with inheritance is that it conflates the issue of sub-typing (asserting an is-a relationship) and code reuse (e.g., private inheritance is for reuse only).
So no, it's an overloaded word that we don't need. I'd prefer sub-typing (using the 'implements' keyword) and import (kind of like Ruby does it in class definitions).
Inheritance lets me push off a whole bunch of bookkeeping onto the compiler because it gives me polymorphic behavior for object hierarchies that I would otherwise have to create and maintain myself. Regardless of how good a silver bullet OOP is, there will always be instances where you want to employ a certain type of behavior because it just makes sense to do. And ultimately, that's the point of OOP: it makes a certain class of problems much easier to solve.
The downsides of composition are that it may disguise the relatedness of elements and it may be harder for others to understand. With, say, a 2D Point class and the desire to extend it to higher dimensions, you would presumably have to add (at least) a Z getter/setter, modify getDistance(), and maybe add a getVolume() method. So you have the Objects 101 elements: related state and behavior.
A developer with a compositional mindset would presumably have defined a getDistance(x, y) -> double method and would now define a getDistance(x, y, z) -> double method. Or, thinking generally, they might define a getDistance(lambdaGeneratingACoordinateForEveryAxis()) -> double method. Then they would probably write createTwoDimensionalPoint() and createThreeDimensionalPoint() factory methods (or perhaps createNDimensionalPoint(n) ) that would stitch together the various state and behavior.
A developer with an OO mindset would use inheritance. Same amount of complexity in the implementation of domain characteristics, less complexity in terms of initializing the object (constructor takes care of it vs. a Factory method), but not as flexible in terms of what can be initialized.
Now think about it from a comprehensibility / readability standpoint. To understand the composition, one has a large number of functions that are composed programmatically inside another function. So there's little in terms of static code 'structure' (files and keywords and so forth) that makes the relatedness of Z and distance() jump out. In the OO world, you have a great big flashing red light telling you the hierarchy. Additionally, you have an essentially universal vocabulary to discuss structure, widely known graphical notations, a natural hierarchy (at least for single inheritance), etc.
Now, on the other hand, a well-named and constructed Factory method will often make explicit more of the sometimes-obscure relationships between state and behavior, since a compositional mindset facilitates functional code (that is, code that passes state via parameters, not via this ).
In a professional environment with experienced developers, the flexibility of composition generally trumps its more abstract nature. However, one should never discount the importance of comprehensibility, especially in teams that have varying degrees of experience and/or high levels of turnover.
Inheritance is an implementation decision. Interfaces almost always represent a better design, and should usually be used in an external API.
Why write a lot of boilerplate code forwarding method calls to a composed member object when the compiler will do it for you with inheritance?
This answer to another question summarises my thinking pretty well.
Does anyone else remember all of the OO-purists going ballistic over the COM implementation of "containment" instead of "inheritance?" It achieved essentially the same thing, but with a different kind of implementation. This reminds me of your question.
I strictly try to avoid religious wars in software development. ("vi" OR "emacs"... when everybody knows it's "vi"!) I think they are a sign of small minds. Comp Sci professors can afford to sit around and debate these things. I'm working in the real world and couldn't care less. All of this stuff is simply an attempt at giving useful solutions to real problems. If they work, people will use them. The fact that OO languages and tools have been commercially available on a wide scale for going on 20 years is a pretty good bet that they are useful to a lot of people.
There are a lot of features in a programming language that are not really needed. But they are there for a variety of reasons that all basically boil down to reusability and maintainability.
All a business cares about is producing (quality of course) cheaply and quickly.
As a developer you help do this is by becoming more efficient and productive. So you need to make sure the code you write is easily reusable and maintainable.
And, among other things, this is what inheritance gives you - the ability to reuse without reinventing the wheel, as well as the ability to easily maintain your base object without having to perform maintenance on all similar objects.
There are lots of useful uses of inheritance, and probably just as many that are less useful. One of the useful ones is the stream class.
You have a method that should be able to stream data. By using the stream base class as input to the method you ensure that your method can be used to write to many kinds of streams without change. To the file system, over the network, with compression, etc.
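For example (a hedged sketch using the standard .NET stream types; the method itself is invented), a method written once against the abstract Stream base class works with files, memory buffers and compressed streams alike:

using System;
using System.IO;
using System.IO.Compression;
using System.Text;

public static class Reporting
{
    // The method is written once against the abstract Stream base class...
    public static void WriteReport(Stream output)
    {
        byte[] data = Encoding.UTF8.GetBytes("report contents");
        output.Write(data, 0, data.Length);
    }

    public static void Main()
    {
        // ...and works with any concrete stream: file, memory, compressed, etc.
        using (var file = File.Create("report.txt"))
        {
            WriteReport(file);
        }
        using (var memory = new MemoryStream())
        {
            WriteReport(memory);
        }
        using (var zipped = new GZipStream(File.Create("report.gz"), CompressionMode.Compress))
        {
            WriteReport(zipped);
        }
    }
}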
No.
For me, OOP is mostly about encapsulation of state and behavior, and polymorphism.
And that's it. But if you want static type checking, you'll need some way to group different types, so the compiler can check them while still allowing you to use new types in place of another, related type. Creating a hierarchy of types lets you use the same concept (classes) for types and for groups of types, so it's the most widely used form.
But there are other ways. I think the most general would be duck typing, and, closely related, prototype-based OOP (which isn't inheritance in fact, but it's usually called prototype-based inheritance).
Depends on your definition of "needed". No, there is nothing that is impossible to do without inheritance, although the alternative may require more verbose code, or a major rewrite of your application.
But there are definitely cases where inheritance is useful. As you say, composition plus interfaces together cover almost all cases, but what if I want to supply a default behavior? An interface can't do that. A base class can. Sometimes, what you want to do is really just override individual methods. Not reimplement the class from scratch (as with an interface), but just change one aspect of it. Or you may not want all members of the class to be overridable. Perhaps you have only one or two member methods you want the user to override, and the rest, which call these (and perform validation and other important tasks before and after the user-overridden methods), are specified once and for all in the base class and cannot be overridden.
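A hedged sketch of that last idea (all names are made up): the base class fixes the overall flow and validation, and exposes a single virtual hook with a sensible default.

using System;

public abstract class ReportGenerator
{
    // The public entry point is not virtual; the overall flow cannot be changed.
    public void Generate()
    {
        Validate();
        string body = BuildBody();   // the one step subclasses may customise
        Console.WriteLine(body);
        LogCompletion();
    }

    // Default behaviour that subclasses can keep or override.
    protected virtual string BuildBody()
    {
        return "default report body";
    }

    private void Validate() { /* checks performed before the hook runs */ }
    private void LogCompletion() { /* bookkeeping performed after the hook runs */ }
}

public class SalesReportGenerator : ReportGenerator
{
    protected override string BuildBody()
    {
        return "sales figures";
    }
}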
Inheritance is often used as a crutch by people who are too obsessed with Java's narrow definition of (and obsession with) OOP though, and in most cases I agree, it's the wrong solution, as if the deeper your class hierarchy, the better your software.
Inheritance is a good thing when the subclass really is the same kind of object as the superclass. E.g. if you're implementing the Active Record pattern, you're attempting to map a class to a table in the database, and instances of the class to a row in the database. Consequently, it is highly likely that your Active Record classes will share a common interface and implementation of methods like: what is the primary key, whether the current instance is persisted, saving the current instance, validating the current instance, executing callbacks upon validation and/or saving, deleting the current instance, running a SQL query, returning the name of the table that the class maps to, etc.
It also seems from how you phrase your question that you're assuming that inheritance is single but not multiple. If we need multiple inheritance, then we have to use interfaces plus composition to pull off the job. To put a fine point on it, Java assumes that implementation inheritance is singular and interface inheritance can be multiple. One need not go this route. E.g. C++ and Ruby permit multiple inheritance for your implementation and your interface. That said, one should use multiple inheritance with caution (i.e. keep your abstract classes virtual and/or stateless).
That said, as you note, there are too many real-life class hierarchies where the subclasses inherit from the superclass out of convenience rather than bearing a true is-a relationship. So it's unsurprising that a change in the superclass will have side-effects on the subclasses.
Not needed, but useful.
Each language has its own ways to write less code. OOP sometimes gets convoluted, but I think that is the responsibility of the developers; the OOP platform is useful and sharp when it is well used.
I agree with everyone else about the necessary/useful distinction.
The reason I like OOP is because it lets me write code that's cleaner and more logically organized. One of the biggest benefits comes from the ability to "factor-up" logic that's common to a number of classes. I could give you concrete examples where OOP has seriously reduced the complexity of my code, but that would be boring for you.
Suffice it to say, I heart OOP.
Absolutely needed? No.
But think of lamps. You can create a new lamp from scratch each time you make one, or you can take properties from the original lamp and make all sorts of new styles of lamp that have the same properties as the original, each with their own style.
Or you can make a new lamp from scratch, or tell people to look at it a certain way to see the light, or, or, or...
Not required, but nice :)
Thanks to all for your answers. I maintain my position that, strictly speaking, inheritance isn't needed, though I believe I found a new appreciation for this feature.
Something else: In my job experience, I have found inheritance leads to simpler, clearer designs when it's brought in late in the project, after it's noticed that a lot of the classes have much in common and you create a base class. In projects where a grand schema was created from the very beginning, with a lot of classes in an inheritance hierarchy, refactoring is usually painful and difficult.
Seeing some answers mentioning something similar makes me wonder if this might not be exactly how inheritance's supposed to be used: ex post facto. Reminds me of Stepanov's quote: "you don't start with axioms, you end up with axioms after you have a bunch of related proofs". He's a mathematician, so he ought to know something.
The biggest problem with interfaces is that they cannot be changed. Make an interface public, then change it (add a new method to it), and you break a million applications all around the world, because they have implemented your interface but not the new method. The app may not even start; the VM may refuse to load it.
Use a base class (not abstract) other programmers can inherit from (and override methods as needed); then add a method to it. Every app using your class will still work: the method just won't be overridden by anyone, but since you provide a base implementation, that one will be used, and it may work just fine for all subclasses of your class. It may also cause strange behavior, because sometimes overriding it would have been necessary; that might be the case, but at least all those million apps in the world will still start up!
I'd rather have my Java application still running after updating the JDK from 1.6 to 1.7 with some minor bugs (that can be fixed over time) than not have it running at all (forcing an immediate fix or it will be useless to people).
I found this Q&A very useful. Many have answered it well, but I wanted to add...
1: Ability to define an abstract interface - e.g., for plugin developers. Of course, you can use function pointers, but this is better and simpler.
2: Inheritance helps model types very close to their actual relationships. Sometimes a lot of errors get caught at compile time because you have the right type hierarchy. For instance, Shape <-- Triangle (let's say there is a lot of code to be reused). You might want to compose Triangle with a Shape object, but Shape is an incomplete type. Inserting dummy implementations like double getArea() { return -1; } will do, but you are opening up room for error. That return -1 can get executed some day!
3: void func(B* b); ... func(new D()); Implicit type conversion gives a great notational convenience, since Derived is a Base. I remember having read Stroustrup saying that he wanted to make classes first-class citizens just like fundamental data types (hence overloading operators etc.). Implicit conversion from Derived to Base behaves just like an implicit conversion from a data type to a broader compatible one (short to int).
Inheritance and Composition have their own pros and cons.
Refer to this related SE question on pros of inheritance and cons of composition.
Prefer composition over inheritance?
Have a look at the example in this documentation link:
The example shows different use cases of overriding, using inheritance as a means to achieve polymorphism.
In the following, inheritance is used to provide a particular set of properties for all of several specific incarnations of the same type of thing. In this case, GeneralPresentation has properties that are relevant to all "presentations" (the data passed to an MVC view). The master page is the only thing using it and expects a GeneralPresentation, though the specific views expect more info, tailored to their needs.
public abstract class GeneralPresentation
{
    public GeneralPresentation()
    {
        MenuPages = new List<Page>();
    }

    public IEnumerable<Page> MenuPages { get; set; }
    public string Title { get; set; }
}

public class IndexPresentation : GeneralPresentation
{
    public IndexPresentation() { IndexPage = new Page(); }

    public Page IndexPage { get; set; }
}

public class InsertPresentation : GeneralPresentation
{
    public InsertPresentation()
    {
        InsertPage = new Page();
        ValidationInfo = new PageValidationInfo();
    }

    public PageValidationInfo ValidationInfo { get; set; }
    public Page InsertPage { get; set; }
}

How do you define a Single Responsibility?

I know about "class having a single reason to change". Now, what is that exactly? Are there some smells/signs that could tell that class does not have a single responsibility? Or could the real answer hide in YAGNI and only refactor to a single responsibility the first time your class changes?
The Single Responsibility Principle
There are many obvious cases, e.g. CoffeeAndSoupFactory. Coffee and soup in the same appliance can lead to quite distasteful results. In this example, the appliance might be broken into a HotWaterGenerator and some kind of Stirrer. Then a new CoffeeFactory and SoupFactory can be built from those components and any accidental mixing can be avoided.
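A hedged sketch of that split (the class names come from the example above; the members are invented):

public class HotWaterGenerator
{
    public void Heat(double litres) { /* heats the given volume of water */ }
}

public class Stirrer
{
    public void Stir(string contents) { /* stirs whatever it is given */ }
}

// Each factory composes the shared components; coffee and soup never mix.
public class CoffeeFactory
{
    private readonly HotWaterGenerator water = new HotWaterGenerator();
    private readonly Stirrer stirrer = new Stirrer();

    public string BrewCoffee()
    {
        water.Heat(0.2);
        stirrer.Stir("coffee grounds");
        return "coffee";
    }
}

public class SoupFactory
{
    private readonly HotWaterGenerator water = new HotWaterGenerator();
    private readonly Stirrer stirrer = new Stirrer();

    public string MakeSoup()
    {
        water.Heat(0.5);
        stirrer.Stir("vegetables");
        return "soup";
    }
}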
Among the more subtle cases, the tension between data access objects (DAOs) and data transfer objects (DTOs) is very common. DAOs talk to the database, DTOs are serializable for transfer between processes and machines. Usually DAOs need a reference to your database framework, therefore they are unusable on your rich clients which neither have the database drivers installed nor have the necessary privileges to access the DB.
Code Smells
The methods in a class start to be grouped by areas of functionality ("these are the Coffee methods and these are the Soup methods").
Implementing many interfaces.
Write a brief, but accurate description of what the class does.
If the description contains the word "and" then it needs to be split.
Well, this principle is to be taken with a grain of salt... to avoid class explosion.
A single responsibility does not translate to single method classes. It means a single reason for existence... a service that the object provides for its clients.
A nice way to stay on the road... use the object-as-person metaphor: if the object were a person, whom would I ask to do this? Assign that responsibility to the corresponding class. However, you wouldn't ask the same person to manage your files, compute salaries, issue paychecks, and verify financial records... Why would you want a single object to do all of these? (It's okay if a class takes on multiple responsibilities as long as they are all related and coherent.)
If you employ a CRC card, it's a nice subtle guideline. If you're having trouble getting all the responsibilities of that object on a CRC card, it's probably doing too much... a max of 7 would do as a good marker.
Another code smell from the refactoring book would be HUGE classes. Shotgun surgery would be another... making a change to one area in a class causes bugs in unrelated areas of the same class...
Finding that you are making changes to the same class for unrelated bug-fixes again and again is another indication that the class is doing too much.
A simple and practical method to check single responsibility (not only of classes but also of their methods) is the choice of name. When you design a class, if you easily find a name for the class that specifies exactly what it does, you're on the right track.
Difficulty in choosing a name is nearly always a symptom of bad design.
The methods in your class should be cohesive... they should work together and make use of the same data structures internally. If you find you have too many methods that don't seem entirely well related, or that seem to operate on different things, then quite likely you don't have a good single responsibility.
Often it's hard to initially find responsibilities, and sometimes you need to use the class in several different contexts and then refactor the class into two classes as you start to see the distinctions. Sometimes you find that it's because you are mixing an abstract and concrete concept together. They tend to be harder to see, and, again, use in different contexts will help clarify.
The obvious sign is when your class ends up looking like a Big Ball of Mud, which is really the opposite of SRP (single responsibility principle).
Basically, all the object's services should be focused on carrying out a single responsibility, meaning every time your class changes and adds a service which does not respect that, you know you're "deviating" from the "right" path ;)
The cause is usually due to some quick fixes hastily added to the class to repair some defects. So the reason why you are changing the class is usually the best criteria to detect if you are about to break the SRP.
Martin's Agile Principles, Patterns, and Practices in C# helped me a lot to grasp SRP. He defines SRP as:
A class should have only one reason to change.
So what is driving change?
Martin's answer is:
[...] each responsibility is an axis of change. (p. 116)
and further:
In the context of the SRP, we define a responsibility to be a reason for change. If you can think of more than one motive for changing a class, that class has more than one responsibility (p. 117)
In fact, SRP is about encapsulating change. If change happens, it should have only a local impact.
Where is YAGNI?
YAGNI can be nicely combined with SRP: When you apply YAGNI, you wait until some change is actually happening. If this happens you should be able to clearly see the responsibilities which are inferred from the reason(s) for change.
This also means that responsibilities can evolve with each new requirement and change. Thinking further SRP and YAGNI will provide you the means to think in flexible designs and architectures.
Perhaps a little more technical than other smells:
If you find you need several "friend" classes or functions, that's usually a good smell of bad SRP - because the required functionality is not actually exposed publicly by your class.
If you end up with an excessively "deep" hierarchy (a long list of derived classes until you get to leaf classes) or "broad" hierarchy (many, many classes derived shallowly from a single parent class). It's usually a sign that the parent class does either too much or too little. Doing nothing is the limit of that, and yes, I have seen that in practice, with an "empty" parent class definition just to group together a bunch of unrelated classes in a single hierarchy.
I also find that refactoring to single responsibility is hard. By the time you finally get around to it, the different responsibilities of the class will have become entwined in the client code making it hard to factor one thing out without breaking the other thing. I'd rather err on the side of "too little" than "too much" myself.
Here are some things that help me figure out if my class is violating SRP:
Fill out the XML doc comments on a class. If you use words like if, and, but, except, when, etc., your class is probably doing too much.
If your class is a domain service, it should have a verb in the name. Many times you have classes like "OrderService", which should probably be broken up into "GetOrderService", "SaveOrderService", "SubmitOrderService", etc.
If you end up with MethodA that uses MemberA and MethodB that uses MemberB, and it is not part of some concurrency or versioning scheme, you might be violating SRP (see the sketch after this list).
If you notice that you have a class that just delegates calls to a lot of other classes, you might be stuck in proxy class hell. This is especially true if you end up instantiating the proxy class everywhere when you could just use the specific classes directly. I have seen a lot of this. Think ProgramNameBL and ProgramNameDAL classes as a substitute for using a Repository pattern.
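A hedged sketch of the third point in the list above (all names are placeholders): when two field/method pairs never interact, two smaller classes are usually hiding inside one.

using System.Collections.Generic;

// Before: one class holding two unrelated field/method pairs.
public class OrderAndInvoiceHandler
{
    private readonly List<string> orders = new List<string>();
    private readonly List<string> invoices = new List<string>();

    public void AddOrder(string order) { orders.Add(order); }         // uses only 'orders'
    public void AddInvoice(string invoice) { invoices.Add(invoice); } // uses only 'invoices'
}

// After: each field/method pair becomes its own class.
public class OrderHandler
{
    private readonly List<string> orders = new List<string>();
    public void AddOrder(string order) { orders.Add(order); }
}

public class InvoiceHandler
{
    private readonly List<string> invoices = new List<string>();
    public void AddInvoice(string invoice) { invoices.Add(invoice); }
}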
I've also been trying to get my head around the SOLID principles of OOD, specifically the single responsibility principle, aka SRP (as a side note the podcast with Jeff Atwood, Joel Spolsky and "Uncle Bob" is worth a listen). The big question for me is: What problems is SOLID trying to address?
OOP is all about modeling. The main purpose of modeling is to present a problem in a way that allows us to understand it and solve it. Modeling forces us to focus on the important details. At the same time we can use encapsulation to hide the "unimportant" details so that we only have to deal with them when absolutely necessary.
I guess you should ask yourself: What problem is your class trying to solve? Has the important information you need to solve this problem risen to the surface? Are the unimportant details tucked away so that you only have to think about them when absolutely necessary?
Thinking about these things results in programs that are easier to understand, maintain and extend. I think this is at the heart of OOD and the SOLID principles, including SRP.
Another rule of thumb I'd like to throw in is the following:
If you feel the need to either write some sort of cartesian product of cases in your test cases, or if you want to mock certain private methods of the class, Single Responsibility is violated.
I recently had this in the following way:
I had a certain abstract syntax tree of a coroutine which would later be generated into C. For now, think of the nodes as Sequence, Iteration and Action. Sequence chains two coroutines, Iteration repeats a coroutine until a user-defined condition is true, and Action performs a certain user-defined action. Furthermore, it is possible to annotate Actions and Iterations with code blocks, which define the actions and conditions to evaluate as the coroutine walks ahead.
It was necessary to apply a certain transformation to all of these code blocks (for those interested: I needed to replace the conceptual user variables with actual implementation variables in order to prevent variable clashes. Those who know Lisp macros can think of gensym in action :) ). Thus, the simplest thing that would work was a visitor which knows the operation internally and just calls it on the annotated code block of the Action and Iteration on visit, and traverses all the syntax tree nodes. However, in this case, I'd have had to duplicate the assertion "transformation is applied" in my test code for both the visitAction method and the visitIteration method. In other words, I had to check the product of the test cases for the responsibilities Traversal (== {traverse iteration, traverse action, traverse sequence}) x Transformation (well, code block transformed, which blew up into iteration transformed and action transformed). Thus, I was tempted to use PowerMock to remove the transformation method and replace it with some 'return "I was transformed!";' stub.
However, according to the rule of thumb, I split the class into a class TreeModifier which contains a NodeModifier-instance, which provides methods modifyIteration, modifySequence, modifyCodeblock and so on. Thus, I could easily test the responsibility of traversing, calling the NodeModifier and reconstructing the tree and test the actual modification of the code blocks separately, thus removing the need for the product tests, because the responsibilities were separated now (into traversing and reconstructing and the concrete modification).
It also is interesting to notice that later on, I could heavily reuse the TreeModifier in various other transformations. :)
If you're finding it troublesome to extend the functionality of the class without being afraid that you might end up breaking something else, or you cannot use the class without modifying tons of its options which modify its behavior, it smells like your class is doing too much.
Once I was working with a legacy class which had a method "ZipAndClean", which was obviously zipping and cleaning the specified folder...

Why should you prevent a class from being subclassed?

What can be reasons to prevent a class from being inherited? (e.g. using sealed on a C# class)
Right now I can't think of any.
Because writing classes to be substitutably extended is damn hard and requires you to make accurate predictions of how future users will want to extend what you've written.
Sealing your class forces them to use composition, which is much more robust.
How about if you are not sure about the interface yet and don't want any other code depending on the present interface? [That's off the top of my head, but I'd be interested in other reasons as well!]
Edit:
A bit of googling gave the following:
http://codebetter.com/blogs/patricksmacchia/archive/2008/01/05/rambling-on-the-sealed-keyword.aspx
Quoting:
There are three reasons why a sealed class is better than an unsealed class:
Versioning: When a class is originally sealed, it can change to unsealed in the future without breaking compatibility. (…)
Performance: (…) if the JIT compiler sees a call to a virtual method using a sealed types, the JIT compiler can produce more efficient code by calling the method non-virtually.(…)
Security and Predictability: A class must protect its own state and not allow itself to ever become corrupted. When a class is unsealed, a derived class can access and manipulate the base class’s state if any data fields or methods that internally manipulate fields are accessible and not private.(…)
I want to give you this message from "Code Complete":
Inheritance - subclasses - tends to work against the primary technical imperative you have as a programmer, which is to manage complexity. For the sake of controlling complexity, you should maintain a heavy bias against inheritance.
The only legitimate use of inheritance is to define a particular case of a base class, for example, when you inherit from Shape to derive Circle. To check this, look at the relation in the opposite direction: is Shape a generalization of Circle? If the answer is yes, then it is OK to use inheritance.
So if you have a class for which there cannot be any particular cases that specialize its behavior, it should be sealed.
Also, due to the LSP (Liskov Substitution Principle), one can use a derived class where the base class is expected, and this actually imposes the greatest impact from the use of inheritance: code using the base class may be given an inherited class and it still has to work as expected. In order to protect external code when there is no obvious need for subclasses, you seal the class, and its clients can rely on its behavior not being changed. Otherwise, external code needs to be explicitly designed to expect possible changes in behavior in subclasses.
A more concrete example would be the Singleton pattern. You need to seal a singleton to ensure one cannot break the "singletonness".
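A minimal sketch of that case (the class name is illustrative): the private constructor prevents outside construction, and sealed makes the "no subclasses" intent explicit, so the single-instance guarantee cannot be diluted by a derived type.

public sealed class Configuration
{
    private static readonly Configuration instance = new Configuration();

    // Private constructor: nobody outside this class can construct an instance,
    // and 'sealed' documents that the class is not meant to be specialised.
    private Configuration() { }

    public static Configuration Instance
    {
        get { return instance; }
    }
}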
This may not apply to your code, but a lot of classes within the .NET framework are sealed purposely so that no one tries to create a sub-class.
There are certain situations where the internals are complex and require certain things to be controlled very specifically so the designer decided no one should inherit the class so that no one accidentally breaks functionality by using something in the wrong way.
#jjnguy
Another user may want to re-use your code by sub-classing your class. I don't see a reason to stop this.
If they want to use the functionality of my class they can achieve that with containment, and they will have much less brittle code as a result.
Composition seems to be often overlooked; all too often people want to jump on the inheritance bandwagon. They should not! Substitutability is difficult. Default to composition; you'll thank me in the long run.
I am in agreement with jjnguy... I think the reasons to seal a class are few and far between. Quite the contrary, I have been in the situation more than once where I want to extend a class, but couldn't because it was sealed.
As a perfect example, I was recently creating a small package (Java, not C#, but same principles) to wrap functionality around the memcached tool. I wanted an interface so in tests I could mock away the memcached client API I was using, and also so we could switch clients if the need arose (there are 2 clients listed on the memcached homepage). Additionally, I wanted to have the opportunity to replace the functionality altogether if the need or desire arose (such as if the memcached servers are down for some reason, we could potentially hot swap with a local cache implementation instead).
I exposed a minimal interface to interact with the client API, and it would have been awesome to extend the client API class and then just add an implements clause with my new interface. The methods that I had in the interface that matched the actual interface would then need no further details and so I wouldn't have to explicitly implement them. However, the class was sealed, so I had to instead proxy calls to an internal reference to this class. The result: more work and a lot more code for no real good reason.
That said, I think there are potential times when you might want to make a class sealed... and the best thing I can think of is an API that you will invoke directly, but allow clients to implement. For example, a game where you can program against the game... if your classes were not sealed, then the players who are adding features could potentially exploit the API to their advantage. This is a very narrow case though, and I think any time you have full control over the codebase, there really is little if any reason to make a class sealed.
This is one reason I really like the Ruby programming language... even the core classes are open, not just to extend but to ADD AND CHANGE functionality dynamically, TO THE CLASS ITSELF! It's called monkeypatching and can be a nightmare if abused, but it's damn fun to play with!
From an object-oriented perspective, sealing a class clearly documents the author's intent without the need for comments. When I seal a class I am trying to say that this class was designed to encapsulate some specific piece of knowledge or some specific service. It was not meant to be enhanced or subclassed further.
This goes well with the Template Method design pattern. I have an interface that says "I perform this service." I then have a class that implements that interface. But, what if performing that service relies on context that the base class doesn't know about (and shouldn't know about)? What happens is that the base class provides virtual methods, which are either protected or private, and these virtual methods are the hooks for subclasses to provide the piece of information or action that the base class does not know and cannot know. Meanwhile, the base class can contain code that is common for all the child classes. These subclasses would be sealed because they are meant to accomplish that one and only one concrete implementation of the service.
Can you make the argument that these subclasses should be further subclassed to enhance them? I would say no because if that subclass couldn't get the job done in the first place then it should never have derived from the base class. If you don't like it then you have the original interface, go write your own implementation class.
Sealing these subclasses also discourages deep levels of inheritance, which works well for GUI frameworks but works poorly for business logic layers.
Because you always want to be handed a reference to the class and not to a derived one for various reasons:
i. invariants that you have in some other part of your code
ii. security
etc
Also, because it's a safe bet with regards to backward compatibility - you'll never be able to close that class for inheritance if it's released unsealed.
Or maybe you didn't have enough time to test the interface that the class exposes to be sure that you can allow others to inherit from it.
Or maybe there's no point (that you see now) in having a subclass.
Or you don't want bug reports when people try to subclass and don't manage to get all the nitty-gritty details - cut support costs.
Sometimes your class interface just isn't meant to be inherited. The public interface just isn't virtual, and while someone could override the functionality that's in place, it would just be wrong. Yes, in general they shouldn't override the public interface, but you can ensure that they don't by making the class non-inheritable.
The examples I can think of right now are customized container classes with deep clones in .NET. If you inherit from them you lose the deep-clone ability. [I'm kind of fuzzy on this example; it's been a while since I worked with ICloneable.] If you have a true singleton class, you probably don't want inherited forms of it around, and a data persistence layer is not normally a place where you want a lot of inheritance.
Not everything that's important in a class is asserted easily in code. There can be semantics and relationships present that are easily broken by inheriting and overriding methods. Overriding one method at a time is an easy way to do this. You design a class/object as a single meaningful entity and then someone comes along and thinks if a method or two were 'better' it would do no harm. That may or may not be true. Maybe you can correctly separate all methods between private and not private or virtual and not virtual but that still may not be enough. Demanding inheritance of all classes also puts a huge additional burden on the original developer to foresee all the ways an inheriting class could screw things up.
I don't know of a perfect solution. I'm sympathetic to preventing inheritance but that's also a problem because it hinders unit testing.
I exposed a minimal interface to interact with the client API, and it would have been awesome to extend the client API class and then just add an implements clause with my new interface. The methods that I had in the interface that matched the actual interface would then need no further details and so I wouldn't have to explicitly implement them. However, the class was sealed, so I had to instead proxy calls to an internal reference to this class. The result: more work and a lot more code for no real good reason.
Well, there is a reason: your code is now somewhat insulated from changes to the memcached interface.
Performance: (…) if the JIT compiler sees a call to a virtual method using a sealed types, the JIT compiler can produce more efficient code by calling the method non-virtually.(…)
That's a great reason indeed. Thus, for performance-critical classes, sealed and friends make sense.
All the other reasons I've seen mentioned so far boil down to "nobody touches my class!". If you're worried someone might misunderstand its internals, you did a poor job documenting it. You can't possibly know that there's nothing useful to add to your class, or that you already know every imaginable use case for it. Even if you're right and the other developer shouldn't have used your class to solve their problem, using a keyword isn't a great way of preventing such a mistake. Documentation is. If they ignore the documentation, their loss.
Most of the answers (when abstracted) state that sealed/final classes are a tool to protect other programmers against potential mistakes. There is a blurry line between meaningful protection and pointless restriction. But as long as the programmer is the one who is expected to understand the program, I see hardly any reasons to restrict him from reusing parts of a class. Most of you talk about classes. But it's all about objects!
In his first post, DrPizza claims that designing an inheritable class means anticipating possible extensions. Do I get it right that you think a class should be inheritable only if it's likely to be extended well? It looks as if you are used to designing software from the most abstract classes down. Allow me a brief explanation of how I think when designing:
Starting from the very concrete objects, I find characteristics and [thus] functionality that they have in common, and I abstract it into a superclass of those particular objects. This is a way to reduce code duplication.
Unless I am developing some specific product such as a framework, I should care about my code, not others' (virtual) code. The fact that others might find it useful to reuse my code is a nice bonus, not my primary goal. If they decide to do so, it's their responsibility to ensure the validity of extensions. This applies team-wide. Up-front design is crucial to productivity.
Getting back to my idea: your objects should primarily serve your purposes, not some possible shoulda/woulda/coulda functionality of their subtypes. Your goal is to solve the given problem. Object-oriented languages use the fact that many problems (or, more likely, their subproblems) are similar, and therefore existing code can be used to accelerate further development.
Sealing a class forces people who could possibly take advantage of existing code WITHOUT ACTUALLY MODIFYING YOUR PRODUCT to reinvent the wheel. (This is a crucial idea of my thesis: Inheriting a class doesn't modify it! Which seems quite pedestrian and obvious, but it's being commonly ignored).
People are often scared that their "open" classes will be twisted into something that cannot substitute for its ancestors. So what? Why should you care? No tool can prevent a bad programmer from creating bad software!
I'm not trying to present inheritable classes as the ultimately correct way of designing; consider this more an explanation of my inclination toward inheritable classes. That's the beauty of programming - a virtually infinite set of correct solutions, each with its own pros and cons. Your comments and arguments are welcome.
And finally, my answer to the original question: I'd finalize a class to let others know that I consider the class a leaf of the hierarchical class tree and I see absolutely no possibility that it could become a parent node. (And if anyone thinks that it actually could, then either I was wrong or they don't get me).