How do interfaces (being a substitute for multiple inheritance) achieve code reuse? - oop

This is a hard one. I've read this question in forums, but nobody could come up with a satisfactory answer.
Coming from a C++ background, I've been told that Java achieves multiple inheritance through interfaces. One of the main purposes of inheritance happens to be "code reuse".
I've been trying to understand the use of interfaces for years, and I've not understood whether interfaces achieve code reuse. If yes, then how?
Please give a good code example to substantiate that.
I already understand that interfaces are:
used to specify a contract,
used to specify additional roles or behaviors that the class plays,
used to achieve "polymorphism" (e.g. a method like addKeyListener(KeyListener e) can accept any class that implements KeyListener as an argument, so that it becomes of type KeyListener, even if it's not in the inheritance hierarchy of KeyListener).
But how is that useful for code reuse, when I need to write the code for the concrete methods myself? I could just as well omit implementing the interface.
So how do interfaces achieve code reusability (if they do at all)?

Coming from a C++ background, I've been told that Java achieves multiple inheritance through interfaces. One of the main purposes of inheritance happens to be "code reuse".
Well, no: Java just doesn't achieve multiple inheritance. Interfaces are the closest Java can get to multiple inheritance, but they're actually not inheritance, and they don't yield code reuse the way inheritance can.
Where they can save you some code is that you can use all the implementations in the same way, rather than having to duplicate the calling code.
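Here is a minimal Java sketch of that calling-side reuse (the Shape, Circle, and Square names are hypothetical examples, not from the question). The interface contributes no method bodies, but the code written against it only has to be written once:

interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

class Totals {
    // Written once against the interface; works for every implementation,
    // so this calling code never has to be duplicated per concrete type.
    static double totalArea(java.util.List<Shape> shapes) {
        double total = 0;
        for (Shape s : shapes) total += s.area();
        return total;
    }
}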

Related

Deriving from a concrete class?

In the book 'Head First Design Patterns', one of the ways mentioned to avoid violating the 'Dependency Inversion' principle is:
No class should derive from a concrete class.
Is it possible to follow this rule thoroughly? In many commonly used frameworks and libraries it's common to find classes not following this rule.
Inheritance is an important part of C#; ruling it out would be a waste.
Nevertheless, the book emphasizes the "open for extension, closed for modification" SOLID principle, and this is actually a good thing.
Not deriving from concrete classes (note: abstract classes and interfaces are not concrete) helps you adopt this paradigm. Inheritance is not typically suited for extension, and it makes inversion harder (because the latter relies on interfaces, and concretes are not interfaces).
So in practice you'll see that base classes are often abstract. Not all, and not every framework adopts this. Sometimes there are good reasons to inherit from a concrete class. But the book is, in its way, an easy read, and going into detail about the exceptions would have made it much harder to read.
So, bottom line: no, one should not follow the rule at all costs, but only do concrete inheritance if one of the following holds:
you know what you are doing (so you have a really, really, really good reason)
you know it doesn't matter (because it's a simple project/object)
you know the concretes will be contained within the project itself (internals)
As problems in programming vary a lot, it's hard to give a general answer. Sometimes it's useful to do it, sometimes it's not.
It's also possible to redesign the situations where you think you can't follow the rule so that you actually can. But in the new design you may end up with more classes that you don't really need and that exist only to satisfy the rule.
The question in this case is: is having more stuff just to satisfy some principle, when your code has no actual problems, good design?
In my experience it is better to try to avoid inheriting from concrete classes. Designing your code so that you don't inherit from concrete classes makes it easier to read and understand, because it guides you towards designing your abstractions better. But sometimes it's useful to do just that.
As you mentioned, frameworks do it. Especially GUI frameworks: you see a lot of inheritance from concrete classes there, because it's useful to add behavior to already existing controls.
For example, a Button is fine on its own, but sometimes you need to add some behavior for your own needs. Inheriting from Button and adding just the new things you need is fine. Can you do it another way? Sure, but is it worth adding additional classes and/or interfaces, or copying code out of Button, just to avoid inheriting from a concrete class? Is it so bad? Where can it hurt?
You do achieve extensibility this way, as the framework will still work just fine.
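As a minimal Swing sketch of that idea (ConfirmButton is a hypothetical name; JButton and its protected fireActionPerformed hook are real Swing members), the subclass adds one behavior and reuses everything else, painting and listener management included:

import java.awt.event.ActionEvent;
import javax.swing.JButton;
import javax.swing.JOptionPane;

// A button that asks for confirmation before notifying its listeners.
public class ConfirmButton extends JButton {
    public ConfirmButton(String text) {
        super(text);
    }

    @Override
    protected void fireActionPerformed(ActionEvent event) {
        // Only the new behavior lives here; the rest is inherited from JButton.
        int choice = JOptionPane.showConfirmDialog(this, "Are you sure?");
        if (choice == JOptionPane.YES_OPTION) {
            super.fireActionPerformed(event);
        }
    }
}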
GUI frameworks also use composition a lot, so what you get is a combination of composition and inheritance from both concrete and abstract classes. Just use the right one where you need it.
Not every problem has that kind of hierarchical structure with a lot of related objects. Sometimes inheritance hurts extensibility and composition is the better choice.

OOP Reuse without Inheritance: How "real-world" practical is this?

This article describes an approach to OOP I find interesting:
What if objects exist as encapsulations, and they communicate via messages? What if code re-use has nothing to do with inheritance, but uses composition, delegation, even old-fashioned helper objects or any technique the programmer deems fit? The ontology does not go away, but it is decoupled from the implementation.
The idea of reuse without inheritance or dependence on a class hierarchy is what I found most astounding, but how feasible is this?
Examples were given, but I can't quite see how I could change my current code to adopt this approach.
So how feasible is this approach? Or is there really not a need for changing code but rather a scenario-based approach where "use only when needed or optimal"?
EDIT: oops, I forgot the link: here it is link
I'm sure you've heard of "always prefer composition over inheritance".
The basic idea of this premise is multiple objects with different functionalities are put together to create one fully-featured object. This should be preferred over inheriting functionality from disparate objects that have nothing to do with each other.
The main argument regarding this is contained in the definition of the Liskov Substitution Principle and playfully illustrated by this poster:
If you had a ToyDuck object, which class should you inherit from, from a pure inheritance standpoint? Should you inherit from Duck? No -- most likely you should inherit from Toy.
The bottom line is that you should be using the correct method of abstraction -- whether inheritance or composition -- for your code.
For your current objects, consider whether there are objects that ought to be removed from the inheritance tree and included merely as a property that you can call and invoke.
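To make the ToyDuck point concrete, a minimal Java sketch (all names hypothetical): inherit from what the object is, and express what it merely imitates through an interface.

class Toy {
    void squeeze() { /* behavior common to all toys */ }
}

interface Quacker {
    void quack();
}

// A ToyDuck is-a Toy; quacking is just a behavior it exposes.
class ToyDuck extends Toy implements Quacker {
    public void quack() {
        System.out.println("squeak!");  // a toy's quack, not a real duck's
    }
}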
Inheritance is not well suited for code reuse. Inheriting for code reuse usually leads to:
Classes with inherited methods that must not be called on them (violating the Liskov substitution principle), which confuses programmers and leads to bugs.
Deep hierarchies where it takes an inordinate amount of time to find the method you need, since it can be declared anywhere in a dozen or more classes.
Generally the inheritance tree should not get more than two or three levels deep and usually you should only inherit interfaces and abstract base classes.
There is, however, no point in rewriting existing code just for the sake of it. But when you need to modify something, try to switch to composition where possible. That will usually allow you to modify the code in smaller pieces, since there will be less coupling between the classes.
I just skimmed the text, but it seems to say what OO design has always been about: inheritance is not meant as a code-reuse tool, and loose coupling is good. This has been written dozens of times before; see the linked references at the bottom of the article. That does not mean you should skip inheritance entirely, you just have to use it consciously and only when it makes sense. The article states this too.
As for the duck typing, I find the examples and thoughts questionable. Like this one:
function good(foo) {
    if (!foo.baz || !foo.quux) {
        throw new TypeError("We need foo to have baz and quux methods.");
    }
    return foo.baz(foo.quux(10));
}
What’s the point in adding three new lines just to report an error that would be reported by the runtime automatically?
Inheritance is fundamental: no inheritance, no OOP.
Prototyping and delegation can be used to effect inheritance (as in JavaScript), which is fine, and is functionally equivalent to inheritance.
Objects, messages, and composition but no inheritance is object-based, not object-oriented. VB5, not Java. Yes, it can be done; plan on writing a lot of boilerplate code to expose interfaces and forward operations (see the sketch below).
Those that insist inheritance is unnecessary, or that it is 'bad', are creating strawmen: it is easy to imagine scenarios where inheritance is used badly, but this is not a reflection on the tool, it's a reflection on the tool-user.
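To see what that boilerplate looks like, here is a minimal Java sketch (AuditedList is a hypothetical name) of reusing List behavior through composition rather than inheritance; every reused operation has to be exposed and forwarded by hand:

import java.util.ArrayList;
import java.util.List;

class AuditedList<E> {
    private final List<E> inner = new ArrayList<>();

    public boolean add(E e) {
        System.out.println("adding " + e);  // the one genuinely new behavior
        return inner.add(e);
    }

    // Pure forwarding, repeated for each operation you want to expose.
    public E get(int index) { return inner.get(index); }
    public int size() { return inner.size(); }
    // ...and so on for every other List operation callers need.
}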

Should I be using inheritance?

This is more of a subjective question, so I'm going to preemptively mark it as community wiki.
Basically, I've found that in most of my code, there are many classes, many of which use each other, but few of which are directly related to each other. I look back at my college days, and think of the traditional class Cat : Animal type examples, where you have huge inheritance trees, but I see none of this in my code. My class diagrams look like giant spiderwebs, not like nice pretty trees.
I feel I've done a good job of separating information logically, and recently I've done a good job of isolating dependencies between classes via DI/IoC techniques, but I'm worried I might be missing something. I do tend to clump behavior in interfaces, but I simply don't subclass.
I can easily understand subclassing in terms of the traditional examples such as class Dog : Animal or class Employee : Person, but I simply don't have anything that obvious I'm dealing with. And things are rarely as clear-cut as class Label : Control. But when it comes to actually modeling real entities in my code as a hierarchy, I have no clue where to begin.
So, I guess my questions boil down to this:
Is it ok to simply not subclass or inherit? Should I be concerned at all?
What are some strategies you have to determine objects that could benefit from inheritance?
Is it acceptable to always inherit based on behavior (interfaces) rather than the actual type?
Inheritance should always represent an "is-a" relationship. You should be able to say "A is a B" if A derives from B. If not, prefer composition. It's perfectly fine to not subclass when it is not necessary.
For example, saying that FileOpenDialog "is-a" Window makes sense, but saying that an Engine "is-a" Car is nonsense. In that case, an instance of Engine inside a Car instance is more appropriate (It can be said that Car "is-implemented-in-terms-of" Engine).
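A minimal Java sketch of that "is-implemented-in-terms-of" relationship (names hypothetical):

class Engine {
    void start() { /* ... */ }
}

class Car {
    private final Engine engine = new Engine();  // composition, not inheritance

    void drive() {
        engine.start();  // the Car delegates to its Engine
        // ...
    }
}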
For a good discussion of inheritance, see Part 1 and Part 2 of "Uses and Abuses of Inheritance" on gotw.ca.
As long as you do not miss the clear-cut 'is a' relationships, it's OK, and in fact it's best not to inherit but to use composition.
is-a is the litmus test. if (Is X a Y?) then class X : Y { } else class X { Y myY; } or class Y { X myX; }
Using interfaces, that is, inheriting behavior, is a very neat way to structure the code via adding only the needed behavior and no other. The tricky part is defining those interfaces well.
No technology or pattern should be used for its own sake. You obviously work in a domain where classes tend to not benefit from inheritance, so you shouldn't use inheritance.
You've used DI to keep things neat and clean. You separated the concerns of your classes. Those are all good things. Don't try and force inheritance if you don't really need it.
An interesting follow-up to this question would be: Which programming domains do tend to make good use of inheritance? (UI and db frameworks have already been mentioned and are great examples. Any others?)
I also hate the Dog -> Mammal -> Animal examples, precisely because they do not occur in real life.
I use very little subclassing, because it tightly couples the subclass to the superclass and makes your code really hard to read. Sometimes implementation inheritance is useful (e.g. a PostgreSQLDatabaseImpl and a MySQLDatabaseImpl extending an AbstractSQLDatabase), but most of the time it just makes a mess of things. Most of the time I see subclasses, the concept has been misused, and either interfaces or a property should have been used instead.
Interfaces, however, are great and you should use those.
Generally, favour composition over inheritance. Inheritance tends to break encapsulation: if a class depends on a method of a superclass, and the superclass changes the implementation of that method in some release, the subclass may break.
At times, when you are designing a framework, you will have to design classes to be inherited. If you want to use inheritance, you will have to document and design for it carefully, e.g. by not calling any instance methods (that could be overridden by your subclasses) in the constructor. Also, if it's a genuine 'is-a' relationship, inheritance is useful, but it's more robust if used within a package.
See Effective Java (Items 14 and 15). It gives a great argument for why you should favour composition over inheritance, and it talks about inheritance and encapsulation in general (with Java examples), so it's a good resource even if you are not using Java.
So to answer your 3 questions:
Is it ok to simply not subclass or inherit? Should I be concerned at all?
Ans: Ask yourself: is it truly an "is-a" relationship? Is decoration possible? If so, go for decoration:
// A collection decorator that is-a Collection and has-a Collection
public class MyCustomCollection<E> implements java.util.Collection<E> {
    private final java.util.Collection<E> delegate;
    public MyCustomCollection(java.util.Collection<E> delegate) {
        this.delegate = delegate;
    }
    // decorate/forward the Collection methods with custom code
}
What are some strategies you have to determine objects that could benefit from inheritance?
Ans: Usually when you are writing a framework, you may want to provide certain interfaces and "base" classes specifically designed for inheritance.
Is it acceptable to always inherit based on behavior (interfaces) rather than the actual type?
Ans: Mostly yes, but you'd be better off if the superclass is designed for inheritance and/or under your control. Otherwise, go for composition.
IMHO, you should never do #3, unless you're building an abstract base class specifically for that purpose, and its name makes it clear what its purpose is:
class DataProviderBase {...}
class SqlDataProvider : DataProviderBase {...}
class DB2DataProvider : DataProviderBase {...}
class AccountDataProvider : SqlDataProvider {...}
class OrderDataProvider : SqlDataProvider {...}
class ShippingDataProvider : DB2DataProvider {...}
etc.
Also following this type of model, sometimes if you provide an interface (IDataProvider) it's good to also provide a base class (DataProviderBase) that future consumers can use to conveniently access logic that's common to all/most DataProviders in your application model.
As a general rule, though, I only use inheritance if I have a true "is-a" relationship, or if it will improve the overall design for me to create an "is-a" relationship (provider model, for instance.)
Where you have shared functionality, programming to the interface is more important than inheritance.
Essentially, inheritance is more about relating objects together.
Most of the time we are concerned with what an object can DO, as opposed to what it is.
class Product
class Article
class NewsItem
Are the NewsItem and Article both Content items? Perhaps, and you may find it useful to be able to have a list of content which contains both Article items and NewsItem items.
However, it's probably more likely you'll have them implement similar interfaces. For example, IRssFeedable could be an interface that they both implement. In fact, Product could also implement this interface.
Then they can all be fed into an RSS feed easily to provide lists of things on your web page. This is a great example of a case where the interface is important and the inheritance model is perhaps less useful, as the sketch below shows.
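A minimal Java sketch of that idea (the IRssFeedable name comes from the answer above; the method names are assumptions):

import java.util.List;

interface IRssFeedable {
    String toRssItem();
}

class Article implements IRssFeedable {
    public String toRssItem() { return "<item>article...</item>"; }
}

class NewsItem implements IRssFeedable {
    public String toRssItem() { return "<item>news...</item>"; }
}

class Product implements IRssFeedable {
    public String toRssItem() { return "<item>product...</item>"; }
}

class RssFeed {
    // No inheritance relationship is needed between the three classes above.
    static String render(List<IRssFeedable> items) {
        StringBuilder feed = new StringBuilder();
        for (IRssFeedable item : items) feed.append(item.toRssItem());
        return feed.toString();
    }
}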
Inheritance is all about identifying the nature of Objects
Interfaces are all about identifying what Objects can DO.
My class hierarchies tend to be fairly flat as well, with interfaces and composition providing the necessary coupling. Inheritance seems to pop up mostly when I'm storing collections of things, where the different kinds of things will have data/properties in common. Inheritance often feels more natural to me when there is common data, whereas interfaces are a very natural way to express common behavior.
The answer to each of your 3 questions is "it depends". Ultimately it will all depend on your domain and what your program does with it. A lot of times, I find the design patterns I choose to use actually help with finding points where inheritance works well.
For example, consider a 'transformer' used to massage data into a desired form. If you get 3 data sources as CSV files, and want to put them into three different object models (and maybe persist them into a database), you could create a 'csv transformer' base and then override some methods when you inherit from it in order to handle the different specific objects.
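A minimal Java sketch of such a transformer base (all names hypothetical): the base class owns the shared parsing loop, and each source-specific subclass overrides only the row handling.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;

abstract class CsvTransformer {
    // The shared skeleton, written once for all data sources.
    public final void transform(Reader source) throws IOException {
        try (BufferedReader reader = new BufferedReader(source)) {
            String line;
            while ((line = reader.readLine()) != null) {
                handleRow(line.split(","));
            }
        }
    }

    // Each subclass overrides just this part.
    protected abstract void handleRow(String[] fields);
}

class CustomerCsvTransformer extends CsvTransformer {
    @Override
    protected void handleRow(String[] fields) {
        // map fields onto a hypothetical Customer object, persist it, etc.
    }
}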
'Casting' the development process into the pattern language will help you find objects/methods that behave similarly and help in reducing redundant code (maybe through inheritance, maybe through the use of shared libraries - whichever suits the situation best).
Also, if you keep your layers separate (business, data, presentation, etc.), your class diagram will be simpler, and you could then 'visualize' those objects that ought to be inherited.
I wouldn't get too worried about how your class diagram looks, things are rarely like the classroom...
Rather ask yourself two questions:
Does your code work?
Is it extremely time-consuming to maintain? Does a change sometimes require changing the "same" code in many places?
If the answer to (2) is yes, you might want to look at how you have structured your code to see if there is a more sensible fashion, but always bearing in mind that at the end of the day, you need to be able to answer yes to question (1)... Pretty code that doesn't work is of no use to anybody, and hard to explain to the management.
IMHO, the primary reason to use inheritance is to allow code which was written to operate upon a base-class object to operate upon a derived-class object instead.

When do you need to create abstractions in the form of interfaces?

When do you encourage programming against an interface and not directly to a concrete class?
A guideline that I follow is to create abstractions whenever code needs to cross a logical/physical boundary, most especially when infrastructure-related concerns are involved.
Another checkpoint is whether a dependency is likely to change in the future, due to possible additional concerns (such as caching, transactional awareness, or invoking a web service instead of in-process execution), or whether such dependencies have direct references to infrastructure integration points.
If code depends on something that does not require control to cross a logical/physical boundary, I more or less don't create abstractions to interact with it.
Am I missing anything?
Also, use interfaces when:
Multiple objects will need to be acted upon in a particular fashion, but are not fundamentally related. Perhaps many of your business objects access a particular utility object, and when they do, they need to give a reference to themselves so that the utility object can call a particular method on them. Put that method in an interface and pass the interface to the utility object (see the sketch after this list).
Passing interfaces around as parameters can be very helpful in unit testing. Even if you have just one type of object that sports a particular interface, and hence don't really need a defined interface, you might define/implement an interface solely to "fake" that object in unit tests.
Related to the first two bullets, check out the Observer pattern and Dependency Injection. I'm not saying to implement these patterns, but they illustrate the kinds of places where interfaces are really helpful.
Another angle on this is implementing a couple of the SOLID principles, the Open/Closed principle and the Interface Segregation principle. Like the previous bullet, don't get stressed about strictly applying these principles everywhere (right away, at least), but use these concepts to help move your thinking away from just which objects go where, towards thinking more about contracts and dependencies.
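A minimal Java sketch of the first bullet (all names hypothetical): the utility object knows only the interface, never the concrete business classes.

interface Auditable {
    String auditDescription();
}

class AuditLog {
    // Acts on anything Auditable, with no knowledge of the concrete types.
    void record(Auditable subject) {
        System.out.println("AUDIT: " + subject.auditDescription());
    }
}

class Invoice implements Auditable {
    public String auditDescription() { return "Invoice #42"; }
}

class Shipment implements Auditable {
    public String auditDescription() { return "Shipment to warehouse 7"; }
}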
In the end, let's not make it too complicated: we're in a strongly typed world in .NET. If you need to call a method or set a property but the object you're passing/using could be fundamentally different, use an interface.
I would add that if your code is not going to be referenced by another library (for a while at least), then the decision of whether to use an interface in a particular situation is one that you can responsibly put off. The "extract interface" refactoring is easy to do these days. In my current project, I've got an object being passed around that I'm thinking maybe I should switch to an interface; I'm not stressing about it.
Interface abstractions are convenient when doing unit tests. They help with mocking test objects, and they're very useful in TDD for developing without actually using data from your database.
If you don't need any features of the class that aren't found in the interface, then why not always program against the interface?
It will make your code easier to modify in the future and easier to test (mocking).
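For instance, a minimal Java sketch of interface-based faking (all names hypothetical): production code depends only on the interface, and a test substitutes a hand-rolled fake so no database is touched.

interface CustomerRepository {
    String findName(int id);
}

class GreetingService {
    private final CustomerRepository repository;
    GreetingService(CustomerRepository repository) { this.repository = repository; }

    String greet(int customerId) {
        return "Hello, " + repository.findName(customerId);
    }
}

// In a test, the real repository is never needed:
class FakeCustomerRepository implements CustomerRepository {
    public String findName(int id) { return "Test User"; }
}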
You have the right idea already. I would only add a couple of notes to this:
First, abstraction does not mean 'interface'. For example, a "connection string" is an abstraction, even though it's just a string. It's not about the 'type' of the thing in question, it's about the intention of use for that thing.
Second, if you are doing test automation of any kind, look for the pain and friction exposed by writing the tests. If you find yourself having to set up too many external conditions for a test, it's a sign that you need a better abstraction between the thing you're testing and the things it interacts with.
I think you've said it pretty well. Much of this will be a stylistic thing. There are open-source projects I've looked at where everything has an interface and an implementation, and it's kind of frustrating, but it might make iterative development a little easier, since any object's implementation can break while dummies still work. But honestly, I can dummy any class that doesn't overuse the final keyword by inheriting from it.
I would add this to your list: anything that can be thought of as a black box should be abstracted. This includes some of the things you've mentioned, but it also includes hairy algorithms, which are likely to have multiple useful implementations with different advantages for different situations.
Additionally, interfaces come in handy very often with composite objects. That's the only way something like Java's Swing library gets anything done, but it can also be useful for more mundane objects. (I personally like having an interface like ValidityChecker with ways to and-compose or or-compose subordinate ValidityCheckers, as sketched below.)
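A minimal sketch of that composition idea in Java (the ValidityChecker interface and its and/or methods are my assumptions, not an existing library):

interface ValidityChecker<T> {
    boolean isValid(T value);

    // and-compose: both subordinate checkers must pass.
    default ValidityChecker<T> and(ValidityChecker<T> other) {
        return value -> this.isValid(value) && other.isValid(value);
    }

    // or-compose: either subordinate checker may pass.
    default ValidityChecker<T> or(ValidityChecker<T> other) {
        return value -> this.isValid(value) || other.isValid(value);
    }
}

class Example {
    public static void main(String[] args) {
        ValidityChecker<String> notEmpty = s -> !s.isEmpty();
        ValidityChecker<String> shortEnough = s -> s.length() <= 80;
        // A composite checker built from subordinate checkers:
        ValidityChecker<String> usable = notEmpty.and(shortEnough);
        System.out.println(usable.isValid("hello"));  // true
    }
}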
Most of the useful things about passing interfaces around have already been said. However, I would add:
implementing an interface on an object, or later on multiple objects, FORCES all the implementers to follow an IDENTICAL pattern for fulfilling the contract. This can be useful in case you have not-so-OOP-experienced programmers actually writing the implementation code.
in some languages you can add attributes on the interface itself, which can differ from the attributes on the actual object implementation in sense and intent.

Why should you prevent a class from being subclassed?

What can be reasons to prevent a class from being inherited? (e.g. using sealed on a C# class)
Right now I can't think of any.
Because writing classes to be substitutably extended is damn hard and requires you to make accurate predictions of how future users will want to extend what you've written.
Sealing your class forces them to use composition, which is much more robust.
How about if you are not sure about the interface yet and don't want any other code depending on the present interface? [That's off the top of my head, but I'd be interested in other reasons as well!]
Edit:
A bit of googling gave the following:
http://codebetter.com/blogs/patricksmacchia/archive/2008/01/05/rambling-on-the-sealed-keyword.aspx
Quoting:
There are three reasons why a sealed class is better than an unsealed class:
Versioning: When a class is originally sealed, it can change to unsealed in the future without breaking compatibility. (…)
Performance: (…) if the JIT compiler sees a call to a virtual method using a sealed type, the JIT compiler can produce more efficient code by calling the method non-virtually. (…)
Security and Predictability: A class must protect its own state and not allow itself to ever become corrupted. When a class is unsealed, a derived class can access and manipulate the base class’s state if any data fields or methods that internally manipulate fields are accessible and not private.(…)
I want to give you this message from "Code Complete":
Inheritance - subclasses - tends to work against the primary technical imperative you have as a programmer, which is to manage complexity. For the sake of controlling complexity, you should maintain a heavy bias against inheritance.
The only legitimate use of inheritance is to define a particular case of a base class, for example when you inherit from Shape to derive Circle. To check this, look at the relation in the opposite direction: is Shape a generalization of Circle? If the answer is yes, then it is OK to use inheritance.
So if you have a class for which there cannot be any particular cases that specialize its behavior, it should be sealed.
Also, due to the LSP (Liskov Substitution Principle), one can use a derived class where the base class is expected, and this actually imposes the greatest impact from the use of inheritance: code using the base class may be given an inherited class, and it still has to work as expected. To protect external code, when there is no obvious need for subclasses you seal the class, and its clients can rely on its behavior not being changed. Otherwise, external code would need to be explicitly designed to expect possible changes of behavior in subclasses.
A more concrete example would be the Singleton pattern. You need to seal a singleton to ensure one cannot break its "singletonness".
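In Java terms (final being the analogue of C#'s sealed), a minimal sketch of that Singleton point:

public final class Configuration {
    private static final Configuration INSTANCE = new Configuration();

    private Configuration() { }  // no outside instantiation

    public static Configuration getInstance() {
        return INSTANCE;
    }

    // Marking the class final documents and enforces that no subclass can
    // exist to alter behavior and undermine the "singletonness".
}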
This may not apply to your code, but a lot of classes within the .NET framework are sealed purposely so that no one tries to create a sub-class.
There are certain situations where the internals are complex and require certain things to be controlled very specifically so the designer decided no one should inherit the class so that no one accidentally breaks functionality by using something in the wrong way.
#jjnguy
Another user may want to re-use your code by sub-classing your class. I don't see a reason to stop this.
If they want to use the functionality of my class they can achieve that with containment, and they will have much less brittle code as a result.
Composition seems to be often overlooked; all too often people want to jump on the inheritance bandwagon. They should not! Substitutability is difficult. Default to composition; you'll thank me in the long run.
I am in agreement with jjnguy... I think the reasons to seal a class are few and far between. Quite the contrary: I have been in the situation more than once where I wanted to extend a class but couldn't, because it was sealed.
As a perfect example, I was recently creating a small package (Java, not C#, but same principles) to wrap functionality around the memcached tool. I wanted an interface so in tests I could mock away the memcached client API I was using, and also so we could switch clients if the need arose (there are 2 clients listed on the memcached homepage). Additionally, I wanted to have the opportunity to replace the functionality altogether if the need or desire arose (such as if the memcached servers are down for some reason, we could potentially hot swap with a local cache implementation instead).
I exposed a minimal interface to interact with the client API, and it would have been awesome to extend the client API class and then just add an implements clause with my new interface. The methods that I had in the interface that matched the actual interface would then need no further details and so I wouldn't have to explicitly implement them. However, the class was sealed, so I had to instead proxy calls to an internal reference to this class. The result: more work and a lot more code for no real good reason.
That said, I think there are potential times when you might want to make a class sealed... and the best thing I can think of is an API that you will invoke directly, but allow clients to implement. For example, a game where you can program against the game... if your classes were not sealed, then the players who are adding features could potentially exploit the API to their advantage. This is a very narrow case though, and I think any time you have full control over the codebase, there really is little if any reason to make a class sealed.
This is one reason I really like the Ruby programming language... even the core classes are open, not just to extend but to ADD AND CHANGE functionality dynamically, TO THE CLASS ITSELF! It's called monkeypatching and can be a nightmare if abused, but it's damn fun to play with!
From an object-oriented perspective, sealing a class clearly documents the author's intent without the need for comments. When I seal a class I am trying to say that this class was designed to encapsulate some specific piece of knowledge or some specific service. It was not meant to be enhanced or subclassed further.
This goes well with the Template Method design pattern. I have an interface that says "I perform this service." I then have a class that implements that interface. But, what if performing that service relies on context that the base class doesn't know about (and shouldn't know about)? What happens is that the base class provides virtual methods, which are either protected or private, and these virtual methods are the hooks for subclasses to provide the piece of information or action that the base class does not know and cannot know. Meanwhile, the base class can contain code that is common for all the child classes. These subclasses would be sealed because they are meant to accomplish that one and only one concrete implementation of the service.
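A minimal Java sketch of that arrangement (final standing in for sealed; all names hypothetical):

interface ReportService {
    String produceReport();
}

abstract class AbstractReportService implements ReportService {
    // Code common to all child classes lives in the base class.
    public final String produceReport() {
        return "HEADER\n" + gatherBody() + "\nFOOTER";
    }

    // The hook: the context the base class cannot and should not know about.
    protected abstract String gatherBody();
}

// Sealed/final: this is the one and only concrete implementation of the
// service it provides, not a further extension point.
final class SalesReportService extends AbstractReportService {
    @Override
    protected String gatherBody() {
        return "sales figures...";
    }
}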
Can you make the argument that these subclasses should be further subclassed to enhance them? I would say no because if that subclass couldn't get the job done in the first place then it should never have derived from the base class. If you don't like it then you have the original interface, go write your own implementation class.
Sealing these subclasses also discourages deep levels of inheritance, which may work well for GUI frameworks but works poorly for business-logic layers.
Because you always want to be handed a reference to the class and not to a derived one, for various reasons:
i. invariants that you rely on in some other part of your code
ii. security
etc.
Also, because it's a safe bet with regards to backward compatibility: you'll never be able to close that class to inheritance if it's released unsealed.
Or maybe you didn't have enough time to test the interface that the class exposes to be sure that you can allow others to inherit from it.
Or maybe there's no point (that you see now) in having a subclass.
Or you don't want bug reports when people try to subclass and don't manage to get all the nitty-gritty details - cut support costs.
Sometimes your class interface just isn't meant to be inherited. The public interface just isn't virtual, and while someone could override the functionality that's in place, it would just be wrong. Yes, in general they shouldn't override the public interface, but you can ensure that they don't by making the class non-inheritable.
The example I can think of right now is customized container classes with deep clones in .NET. If you inherit from them, you lose the deep-clone ability. [I'm kind of fuzzy on this example; it's been a while since I worked with ICloneable.] If you have a true singleton class, you probably don't want inherited forms of it around, and a data persistence layer is not normally a place where you want a lot of inheritance.
Not everything that's important in a class is asserted easily in code. There can be semantics and relationships present that are easily broken by inheriting and overriding methods. Overriding one method at a time is an easy way to do this. You design a class/object as a single meaningful entity and then someone comes along and thinks if a method or two were 'better' it would do no harm. That may or may not be true. Maybe you can correctly separate all methods between private and not private or virtual and not virtual but that still may not be enough. Demanding inheritance of all classes also puts a huge additional burden on the original developer to foresee all the ways an inheriting class could screw things up.
I don't know of a perfect solution. I'm sympathetic to preventing inheritance but that's also a problem because it hinders unit testing.
I exposed a minimal interface to interact with the client API, and it would have been awesome to extend the client API class and then just add an implements clause with my new interface. The methods that I had in the interface that matched the actual interface would then need no further details and so I wouldn't have to explicitly implement them. However, the class was sealed, so I had to instead proxy calls to an internal reference to this class. The result: more work and a lot more code for no real good reason.
Well, there is a reason: your code is now somewhat insulated from changes to the memcached interface.
Performance: (…) if the JIT compiler sees a call to a virtual method using a sealed type, the JIT compiler can produce more efficient code by calling the method non-virtually. (…)
That's a great reason indeed. Thus, for performance-critical classes, sealed and friends make sense.
All the other reasons I've seen mentioned so far boil down to "nobody touches my class!". If you're worried someone might misunderstand its internals, you did a poor job documenting it. You can't possibly know that there's nothing useful to add to your class, or that you already know every imaginable use case for it. Even if you're right and the other developer shouldn't have used your class to solve their problem, using a keyword isn't a great way of preventing such a mistake. Documentation is. If they ignore the documentation, their loss.
Most answers (when abstracted) state that sealed/finalized classes are a tool to protect other programmers against potential mistakes. There is a blurry line between meaningful protection and pointless restriction. But as long as the programmer is the one who is expected to understand the program, I see hardly any reason to restrict him from reusing parts of a class. Most of you talk about classes. But it's all about objects!
In his first post, DrPizza claims that designing an inheritable class means anticipating possible extensions. Do I get it right that you think a class should be inheritable only if it's likely to be extended well? It looks as if you are used to designing software starting from the most abstract classes. Allow me a brief explanation of how I think when designing:
Starting from the very concrete objects, I find characteristics and [thus] functionality that they have in common, and I abstract it into a superclass of those particular objects. This is a way to reduce code duplication.
Unless I'm developing some specific product such as a framework, I should care about my code, not others' (hypothetical) code. The fact that others might find it useful to reuse my code is a nice bonus, not my primary goal. If they decide to do so, it's their responsibility to ensure the validity of their extensions. This applies team-wide. Up-front design is crucial to productivity.
Getting back to my idea: your objects should primarily serve your purposes, not some possible shoulda/woulda/coulda functionality of their subtypes. Your goal is to solve the given problem. Object-oriented languages use the fact that many problems (or, more likely, their subproblems) are similar, and therefore existing code can be used to accelerate further development.
Sealing a class forces people who could otherwise take advantage of existing code WITHOUT ACTUALLY MODIFYING YOUR PRODUCT to reinvent the wheel. (This is a crucial idea of my thesis: inheriting a class doesn't modify it! Which seems quite pedestrian and obvious, but it's commonly ignored.)
People are often scared that their "open" classes will be twisted into something that cannot substitute for its ascendants. So what? Why should you care? No tool can prevent a bad programmer from creating bad software!
I'm not trying to present inheritable classes as the ultimate correct way of designing; consider this more of an explanation of my inclination towards inheritable classes. That's the beauty of programming - a virtually infinite set of correct solutions, each with its own pros and cons. Your comments and arguments are welcome.
And finally, my answer to the original question: I'd finalize a class to let others know that I consider the class a leaf of the hierarchical class tree and I see absolutely no possibility that it could become a parent node. (And if anyone thinks that it actually could, then either I was wrong or they don't get me).