From what I have read, best practice is to base classes on interfaces and loosely couple objects, in order to help code reuse and unit testing.
Is this correct and is it a rule that should always be followed?
The reason I ask is that I recently worked on a system with hundreds of very different objects. A few shared common interfaces, but most did not, and I wonder whether the system should have had an interface mirroring every property and function in those classes?
I am using C# and .NET 2.0; however, I believe this question applies to many languages.
It's useful for objects which really provide a service - authentication, storage etc. For simple types which don't have any further dependencies, and where there are never going to be any alternative implementations, I think it's okay to use the concrete types.
If you go overboard with this kind of thing, you end up spending a lot of time mocking/stubbing everything in the world - which can often end up creating brittle tests.
Not really. Service components (classes that do things for your application) are a good fit for interfaces, but as a rule I wouldn't bother having interfaces for, say, basic entity classes.
For example:
If you're working on a domain model, then that model shouldn't be made of interfaces. However, if that domain model wants to call service classes (like data access, operating system functions, etc.) then you should be looking at interfaces for those components. This reduces coupling between the classes and means it's the interface, or "contract", that is coupled.
In this situation you then start to find it much easier to write unit tests (because you can have stubs/mocks/fakes for database access etc) and can use IoC to swap components without recompiling your applications.
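To make that concrete, here's a minimal C# sketch (all names are hypothetical, and it uses modern C# syntax for brevity): the entity stays a plain concrete class, while the service dependency sits behind an interface so a unit test can substitute a fake.

using System;
using System.Collections.Generic;

// Plain concrete entity: no interface needed.
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

// Service contract: only the component that "does things" gets an interface.
public interface IOrderRepository
{
    void Save(Order order);
}

// Domain logic depends on the contract, not on a concrete database class.
public class OrderService
{
    private readonly IOrderRepository repository;

    public OrderService(IOrderRepository repository)
    {
        this.repository = repository;
    }

    public void PlaceOrder(Order order)
    {
        if (order.Total <= 0)
            throw new ArgumentException("Order total must be positive.");
        repository.Save(order);
    }
}

// In a unit test, a hand-rolled fake stands in for the database.
public class FakeOrderRepository : IOrderRepository
{
    public readonly List<Order> Saved = new List<Order>();
    public void Save(Order order) { Saved.Add(order); }
}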
I'd only use interfaces where that level of abstraction was required - i.e. you need to use polymorphic behaviour. Common examples would be dependency injection or where you have a factory-type scenario going on somewhere, or you need to establish a "multiple inheritance" type behaviour.
In my case, with my development style, this is quite often (I favour aggregation over deep inheritance hierarchies for most things other than UI controls), but I have seen perfectly fine apps that use very little. It all depends...
Oh yes, and if you do go heavily into interfaces - beware web services. If you need to expose your object methods via a web service they can't really return or take interface types, only concrete types (unless you are going to hand-write all your own serialization/deserialization). Yes, that has bitten me big time...
A downside to interfaces is that they can't be versioned. Once you've shipped an interface, you won't be making changes to it. If you use abstract classes instead, you can easily extend the contract over time by adding new methods and flagging them as virtual.
As an example, all stream objects in .NET derive from System.IO.Stream, which is an abstract class. This makes it easy for Microsoft to add new features. In version 2 of the framework they added the ReadTimeout and WriteTimeout properties without breaking any code. Had they used an interface (say, IStream), they wouldn't have been able to do this. Instead they'd have had to create a new interface to define the timeout methods, and we'd have to write code that conditionally casts to this interface in order to use the functionality.
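Here is a sketch of that argument (this is not the real System.IO.Stream, just the shape of the point): because the base type is an abstract class rather than an interface, a new virtual member can appear in version 2 without breaking version 1 subclasses.

using System;

// Version 1 shipped with only the abstract core.
public abstract class StreamBase
{
    public abstract int Read(byte[] buffer, int offset, int count);
    public abstract void Write(byte[] buffer, int offset, int count);

    // Added in "version 2": existing subclasses keep compiling because
    // the new member is virtual and has a default implementation.
    public virtual int ReadTimeout
    {
        get { throw new InvalidOperationException("Timeouts are not supported."); }
        set { throw new InvalidOperationException("Timeouts are not supported."); }
    }
}

// A version-1 subclass: untouched, still compiles, still works.
public class MemoryBuffer : StreamBase
{
    private readonly byte[] data = new byte[1024];
    public override int Read(byte[] buffer, int offset, int count) { return 0; }
    public override void Write(byte[] buffer, int offset, int count) { }
}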
Interfaces should be used when you want to clearly define the interaction between two different sections of your software. Especially when it is possible that you want to rip out either end of the connection and replace it with something else.
For example, in my CAM application I have a CuttingPath connected to a collection of Points. It makes no sense to have an IPointList interface, as CuttingPaths are always going to be comprised of Points in my application.
However, I use the interface IMotionController to communicate with the machine, because we support many different types of cutting machine, each with its own command set and method of communication. So in that case it makes sense to put the machine behind an interface, as one installation may be using a different machine than another.
Our application has been maintained since the mid-80s and moved to an object-oriented design in the late 90s. I have found that what could change greatly exceeded what I originally thought, and the use of interfaces has grown. For example, our DrawingPath used to be comprised of points. Now it is comprised of entities (splines, arcs, etc.), so it points to an EntityList, which is a collection of objects implementing the IEntity interface.
But that change was propelled by the realization that a DrawingPath could be drawn using many different methods. Once it was realized that a variety of drawing methods was needed, an interface was indicated, as opposed to a fixed relationship to an Entity object.
Note that in our system, DrawingPaths are rendered down to a low-level cutting path, which is always a series of point segments.
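In C# terms, the shape is something like this (the member names below are invented for illustration; they are not from the poster's actual system):

// One contract for every machine family; each implementation hides
// its own command set and communication method.
public interface IMotionController
{
    void Connect(string port);
    void MoveTo(double x, double y);
    void SetCutter(bool on);
}

public class VendorALaserController : IMotionController
{
    public void Connect(string port) { /* vendor A handshake */ }
    public void MoveTo(double x, double y) { /* emit vendor A motion command */ }
    public void SetCutter(bool on) { /* vendor A cutter command */ }
}

// Path-execution code depends only on IMotionController, so one
// installation can run a different machine than another.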
I tried to take the advice of 'code to an interface' literally on a recent project. The end result was essentially duplication of the public interface (small i) of each class precisely once in an Interface (big I) implementation. This is pretty pointless in practice.
A better strategy I feel is to confine your interface implementations to verbs:
Print()
Draw()
Save()
Serialize()
Update()
...etc. etc. This means that classes whose primary role is to store data - and if your code is well-designed, they would usually do only that - don't want or need interface implementations. Interfaces belong anywhere you might want runtime-configurable behaviour, for example a variety of different graph styles representing the same data.
It's better still when the thing asking for the work really doesn't want to know how the work is done. This means you can give it a macguffin that it can simply trust will do whatever its public interface says it does, and let the component in question simply choose when to do the work.
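As a sketch of that strategy (all names hypothetical): the data class stays concrete, and only the verb, a runtime-configurable Draw(), hides behind an interface.

using System.Collections.Generic;

// Concrete data holder: stores data, needs no interface.
public class DataSeries
{
    public IList<double> Values { get; private set; }
    public DataSeries(IList<double> values) { Values = values; }
}

// The verb: only the swappable behaviour gets an interface.
public interface IGraphRenderer
{
    void Draw(DataSeries series);
}

public class BarGraphRenderer : IGraphRenderer
{
    public void Draw(DataSeries series) { /* render bars */ }
}

public class LineGraphRenderer : IGraphRenderer
{
    public void Draw(DataSeries series) { /* render a line */ }
}

// The caller hands the macguffin the data and trusts it to do the work:
//     IGraphRenderer renderer = preferBars
//         ? (IGraphRenderer)new BarGraphRenderer()
//         : new LineGraphRenderer();
//     renderer.Draw(series);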
I agree with kpollock. Interfaces are used to give objects a common ground. The fact that they can be used in IoC containers and for other purposes is an added feature.
Let's say you have several types of customer classes that vary slightly but have common properties. In this case it is great to have an ICustomer interface to bind them together logically. By doing that, you can create a CustomerHandler class/method that handles ICustomer objects the same way, instead of creating a handler method for each variation of customer.
This is the strength of interfaces.
If you only have a single class that implements an interface, then the interface isn't much help; it just sits there and does nothing.
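To illustrate the ICustomer idea, here is a minimal sketch (the members are hypothetical):

// One contract binds slightly different customer types together.
public interface ICustomer
{
    string Name { get; }
    decimal Discount { get; }
}

public class RetailCustomer : ICustomer
{
    public string Name { get; set; }
    public decimal Discount { get { return 0.00m; } }
}

public class WholesaleCustomer : ICustomer
{
    public string Name { get; set; }
    public decimal Discount { get { return 0.15m; } }
}

// One handler for every variation instead of one per concrete class.
public class CustomerHandler
{
    public decimal PriceFor(ICustomer customer, decimal listPrice)
    {
        return listPrice * (1 - customer.Discount);
    }
}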
I've been working on a small project using C++ (although this question might be considered language-agnostic) and I'm trying to write my program so that it is as efficient and encapsulated as possible. I'm a self-taught and inexperienced programmer but I'm trying to teach myself good habits when it comes to using interfaces and OOP practices. I'm mainly interested in the typical 'best' practices when it comes to accessing the methods of an object that is acting as a subsystem for another class.
First, let me explain what I mean:
An instance of ClassGame wants to render out a 2d sprite image using the private ClassRenderer subsystem of ClassEngine. ClassGame only has access to the interface of ClassEngine, and ClassRenderer is supposed to be a subsystem of ClassEngine (behind a layer of abstraction).
My question is based on the way that the ClassGame object can indirectly make use of ClassRenderer's functionality while still remaining fast and behind a layer of abstraction. From what I've seen in lessons and other people's code examples, there seem to be two basic ways of doing this:
The first method that I learned, via a series of online lectures on OOP design, was to have one class delegate tasks to its private member objects internally. [In this example, ClassGame would call a method that belongs to ClassEngine, and ClassEngine would 'secretly' pass that request on to its ClassRenderer subsystem by calling one of its methods.] Kind of a 'daisy chain' of function calls. This makes sense to me, but it seems like it may be slower than some alternative options.
Another way that I've seen in other people's code is to have an accessor method that returns a reference or pointer to a particular subsystem. [So, ClassGame would call a simple method in ClassEngine that would return a reference/pointer to the object that makes up its ClassRenderer subsystem.] This route seems convenient to me, but it also seems to eliminate the point of having a private member act as a sub-component of a bigger class. Of course, this also means writing far fewer 'mindless' functions that simply pass a particular task on, since you can simply write one getter function for each independent subsystem.
Considering the various important aspects of OO design (abstraction, encapsulation, modularity, usability, extensibility, etc.) while also considering speed and performance, is it better to use the first or the second type of method for delegating tasks to a sub-component?
The book Design Patterns Explained discusses a very similar problem in its chapter about the Bridge Pattern. Apparently this chapter is freely available online.
I would recommend reading it :)
I think your type-1 approach resembles the OOP Bridge pattern the most.
Type-2, returning handles to internal data, is something that should generally be avoided.
There are many ways to do what you want, and it really depends on the context (for example, how big the project is, how much you expect to reuse from it in other projects etc.)
You have three classes: Game, Engine and Renderer. Both of your solutions make the Game commit to the Engine's interface. The second solution also makes the Game commit to the Renderer's interface. Clearly, the more interfaces you use, the more you have to change if the interfaces change.
How about a third option: The Game knows what it needs in terms of rendering, so you can create an abstract class that describes those requirements. That would be the only interface that the Game commits to. Let's call this interface AbstractGameRenderer.
To tie this into the rest of the system, again there are many ways. One option would be:
1. Implement this abstract interface using your existing Renderer class. So we have a class GameRenderer that uses Renderer and implements the AbstractGameRenderer interface.
2. The Engine creates both the Game object and the GameRenderer object.
3. The Engine passes the GameRenderer object to the Game object (using a pointer to AbstractGameRenderer).
The result: The Game can use a renderer that does what it wants. It doesn't know where it comes from, how it renders, who owns it - nothing. The GameRenderer is a specific implementation, but other implementations (using other renderers) could be written later. The Engine "knows everything" (but that may be acceptable).
Later, you want to take your Game's logic and use it as a mini-game in another game. All you need to do is create the appropriate GameRenderer (implementing AbstractGameRenderer) and pass it to the Game object. The Game object does not care that it's a different game, a different Engine and a different Renderer - it doesn't even know.
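Here is a compact sketch of that wiring (in C# for brevity, though the question is C++; the shape is identical, and Render2D is an invented method name):

// The only interface the Game commits to: rendering in the Game's own terms.
public abstract class AbstractGameRenderer
{
    public abstract void DrawSprite(int spriteId, float x, float y);
}

// The engine's existing concrete subsystem.
public class Renderer
{
    public void Render2D(int spriteId, float x, float y) { /* low-level drawing */ }
}

// Adapter: fulfils the Game's contract using the existing Renderer.
public class GameRenderer : AbstractGameRenderer
{
    private readonly Renderer renderer;
    public GameRenderer(Renderer renderer) { this.renderer = renderer; }

    public override void DrawSprite(int spriteId, float x, float y)
    {
        renderer.Render2D(spriteId, x, y);
    }
}

// The Engine creates both and hands the Game an AbstractGameRenderer;
// the Game never learns where the rendering actually happens.
public class Game
{
    private readonly AbstractGameRenderer renderer;
    public Game(AbstractGameRenderer renderer) { this.renderer = renderer; }
}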
The point is that there are many solutions to design problems. This suggestion may not be appropriate or acceptable, or maybe it's exactly what you need. The principles I try to follow are:
1. Try not to commit to interfaces you can't control (you'll have to change if they change)
2. Try to prevent now the pain that will come later
In my example, the assumption is that it's less painful to change GameRenderer if Renderer changes, than it is to change a large component such as Game. But it's better to stick to principles (and minimise pain) rather than follow patterns blindly.
VB.NET, VS 2010, .NET 4
Hello,
I've written an application. I've discovered that it is full of circular references. I would like to rewrite portions of my code to improve its design. I have read about tiered programming and want to implement something like it for my application.
Background: My application is a control program for an industrial machine. It parses a recipe (from an Excel file) which contains timing information and setpoints for the various attached devices. There are three main types of devices: Most are connected through Beckhoff terminals and communicate via TwinCAT (Beckhoff's pseudo-PLC software) and two are RS-232 devices, each with a different communication protocol. I can communicate with the Beckhoff devices via a .NET API provided by Beckhoff. I have written parser classes for the two RS-232 devices. Some of the Beckhoff devices are inputs, some are outputs; some are digital, some are (pseudo-)analog.
I think my problem is that I was trying to wrap my head around OOP while writing the application so I created classes willy-nilly without a clear idea of their hierarchy. At some points, I tried to do what I thought was right by, say, making a "Device" class which was inherited by, say, a "TwinCatDevice" class and a "Rs232Device" class. But then I just stuffed all of the communication code in those classes.
What I'm trying now is creating some communication modules (e.g., UtilTwinCat, UtilRs232) that contain abstract methods like "Connect", "Disconnect", "Read", "Write", etc. Then I'm trying to rewrite my "Device" class and subclasses to use these modules so they don't have to contain any of the (sometimes redundant) communication code.
Here's my question: Would it be good design to create separate classes for each type of communication type? i.e., should I have, say, "TwinCatReadOnlyDigital", "TwinCatReadOnlyAnalog", "TwinCatWriteOnlyDigital", "TwinCatWriteOnlyAnalog", "TwinCatReadWriteDigital", "TwinCatReadWriteAnalog", "Rs232ReadOnlyDigital", etc? Or perhaps some interfaces like IReadOnly, IWriteOnly, IDuplex?
This seems like it can't be the right approach in that I imagine someone who is good at programming wouldn't end up with a billion different classes for every eventuality. Is there some way I could selectively implement an interface on a class at run time? I think that's a stupid question... I'm still trying to wrap my head around why one would use an interface. I'm looking for some basic insight on how to grapple with this type of design problem. Specifically, if you have a lot of "things" that differ slightly, is the best approach to create lots of classes that differ slightly?
Thanks a lot in advance,
Brian
Edit: I just wanted to add, to be clear, there is a finite number of devices whose parameters are known, so it would be straightforward to write classes for all the types I'd need. I just wonder if there's a better approach.
That's not really enough information for any concrete answers, but let me make a few suggestions.
"I'm still trying to wrap my head around why one would use an interface."
Mainly because then you can "program to the interface", that is you needn't care if you're working with a Beckhoff or an RS-232 device - you only care that you can send data to it. Just to make it clear: interfaces don't contain implementations. Classes implementing an interface promise to provide concrete implementations for the functions of the interface.
Instead of IReadOnly, IWriteOnly and IDuplex use two interfaces: IWriteable and IReadable (or whatever names make sense). Duplex classes will implement both interfaces.
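A sketch of that split (the interface names follow the suggestion above; the device classes and members are invented):

// Capability interfaces: one per direction.
public interface IReadable
{
    double Read();
}

public interface IWriteable
{
    void Write(double value);
}

// A duplex device simply implements both.
public class TwinCatAnalogChannel : IReadable, IWriteable
{
    public double Read() { /* call the Beckhoff API */ return 0.0; }
    public void Write(double value) { /* call the Beckhoff API */ }
}

// A read-only sensor implements only IReadable.
public class Rs232Sensor : IReadable
{
    public double Read() { /* parse the serial reply */ return 0.0; }
}

// Recipe code can then demand exactly the capability it needs:
//     void ApplySetpoint(IWriteable channel, double value) { channel.Write(value); }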
The Strategy pattern, or more likely the Template Method pattern, may help you with the slightly different classes. Maybe even simple subclassing, but keep in mind that there's often a simpler, nicer solution.
Don't repeat yourself (DRY): try to find one and only one place for every piece of logic.
Functionality shared by several classes should reside in a superclass or utility classes.
Also, more concrete questions will result in more concrete answers :)
When do you encourage programming against an interface and not directly to a concrete class?
A guideline that I follow is to create abstractions whenever code needs to cross a logical/physical boundary, most especially when infrastructure-related concerns are involved.
Another checkpoint would be if a dependency is likely to change in the future, due to possible additional concerns (such as caching, transactional awareness, or invoking a web service instead of in-process execution), or if such dependencies have direct references to infrastructure integration points.
If code depends on something that does not require control to cross a logical/physical boundary, I more or less don't create abstractions to interact with those.
Am I missing anything?
Also, use interfaces when
Multiple objects will need to be acted upon in a particular fashion, but are not fundamentally related. Perhaps many of your business objects access a particular utility object, and when they do they need to give a reference of themselves to that utility object so the utility object can call a particular method. Have that method in an interface and pass that interface to that utility object.
Passing around interfaces as parameters can be very helpful in unit testing. Even if you have just one type of object that sports a particular interface, and hence don't really need a defined interface, you might define/implement an interface solely to "fake" that object in unit tests.
Related to the first two bullets, check out the Observer pattern and Dependency Injection. I'm not saying to implement these patterns, but they illustrate the kinds of places where interfaces are really helpful.
Another twist on this is implementing a couple of the SOLID principles: the Open/Closed Principle and the Interface Segregation Principle. Like the previous bullet, don't get stressed about strictly applying these principles everywhere (right away, at least), but use these concepts to move your thinking away from just what objects go where, toward thinking more about contracts and dependencies.
In the end, let's not make it too complicated: we're in a strongly typed world in .NET. If you need to call a method or set a property but the object you're passing/using could be fundamentally different, use an interface.
I would add that if your code is not going to be referenced by another library (for a while at least), then the decision of whether to use an interface in a particular situation is one that you can responsibly put off. The "extract interface" refactoring is easy to do these days. In my current project, I've got an object being passed around that I'm thinking maybe I should switch to an interface; I'm not stressing about it.
Interface abstractions are convenient when doing unit tests. They help with mocking test objects, and they are very useful in TDD for developing without actually using data from your database.
If you don't need any features of the class that aren't found in the Interface...then why not always prefer the Interface implementation?
It will make your code easier to modify in the future and easier to test (mocking).
You have the right idea already. I would only add a couple of notes to this...
First, abstraction does not mean 'interface'. For example, a "connection string" is an abstraction, even though it's just a string... it's not about the 'type' of the thing in question, it's about the intention of use for that thing.
And secondly, if you are doing test automation of any kind, look for the pain and friction that are exposed by writing the tests. If you find yourself having to set up too many external conditions for a test, it's a sign that you need a better abstraction between the thing you're testing and the things it interacts with.
I think you've said it pretty well. Much of this will be a stylistic thing. There are open source projects I've looked at where everything has an interface and an implementation, and it's kind of frustrating, but it might make iterative development a little easier, since any object's implementation can break but dummies will still work. But honestly, I can dummy any class that doesn't overuse the final keyword by inheritance.
I would add this to your list: anything which can be thought of as a black box should be abstracted. This includes some of the things you've mentioned, but it also includes hairy algorithms, which are likely to have multiple useful implementations with different advantages for different situations.
Additionally, interfaces come in handy very often with composite objects. That's the only way something like Java's Swing library gets anything done, but it can also be useful for more mundane objects. (I personally like having an interface like ValidityChecker with ways to and-compose or or-compose subordinate ValidityCheckers.)
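A sketch of such a composable checker (the interface name follows the comment above; everything else is invented):

using System.Linq;

public interface IValidityChecker
{
    bool IsValid(string input);
}

public class NotEmptyChecker : IValidityChecker
{
    public bool IsValid(string input) { return !string.IsNullOrEmpty(input); }
}

public class MaxLengthChecker : IValidityChecker
{
    private readonly int max;
    public MaxLengthChecker(int max) { this.max = max; }
    public bool IsValid(string input) { return input.Length <= max; }
}

// And-composition: valid only if every subordinate checker agrees.
public class AndChecker : IValidityChecker
{
    private readonly IValidityChecker[] checkers;
    public AndChecker(params IValidityChecker[] checkers) { this.checkers = checkers; }
    public bool IsValid(string input) { return checkers.All(c => c.IsValid(input)); }
}

// Usage:
//     var nameChecker = new AndChecker(new NotEmptyChecker(), new MaxLengthChecker(20));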
Most of the useful things that come with passing interfaces around have already been said. However, I would add:
Implementing an interface on an object, or later on multiple objects, FORCES all the implementers to follow an IDENTICAL pattern when implementing the contract. This can be useful in case you have not-so-OOP-experienced programmers actually writing the implementation code.
In some languages you can add attributes on the interface itself, which can differ in sense and intent from the attributes on the actual object implementation.
I understand that they force you to implement methods and such, but what I can't understand is why you would want to use them. Can anybody give me a good example or explanation of why I would want to implement this?
One specific example: interfaces are a good way of specifying a contract that other people's code must meet.
If I'm writing a library of code, I may write code that is valid for objects that have a certain set of behaviours. The best solution is to specify those behaviours in an interface (no implementation, just a description) and then use references to objects implementing that interface in my library code.
Then any random person can come along, create a class that implements that interface, instantiate an object of that class and pass it to my library code and expect it to work. Note: it is of course possible to strictly implement an interface while ignoring the intention of the interface, so merely implementing an interface is no guarantee that things will work. Stupid always finds a way! :-)
Another specific example: two teams working on different components that must co-operate. If the two teams sit down on day 1 and agree on a set of interfaces, then they can go their separate ways and implement their components around those interfaces. Team A can build test harnesses that simulate the component from Team B for testing, and vice versa. Parallel development, and fewer bugs.
The key point is that interfaces provide a layer of abstraction so that you can write code that is ignorant of unnecessary details.
The canonical example used in most textbooks is that of sorting routines. You can sort any class of objects so long as you have a way of comparing any two of the objects. You can make any class sortable therefore by implementing the IComparable interface, which forces you to implement a method for comparing two instances. All of the sort routines are written to handle references to IComparable objects, so as soon as you implement IComparable you can use any of those sort routines on collections of objects of your class.
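For instance, a minimal sketch using the generic IComparable<T> (the Temperature type is hypothetical):

using System;

public class Temperature : IComparable<Temperature>
{
    public double Degrees { get; private set; }
    public Temperature(double degrees) { Degrees = degrees; }

    // The one method the contract forces you to implement.
    public int CompareTo(Temperature other)
    {
        return Degrees.CompareTo(other.Degrees);
    }
}

// Usage: List<T>.Sort needs nothing but the IComparable<T> contract.
//     var temps = new List<Temperature> { new Temperature(42), new Temperature(7) };
//     temps.Sort(); // ordered: 7, 42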
The easiest way of understanding interfaces is that they allow different objects to expose COMMON functionality. This allows the programmer to write much simpler, shorter code that programs to an interface; then, as long as the objects implement that interface, it will work.
Example 1:
There are many different database providers: MySQL, MSSQL, Oracle, etc. However, all database objects can DO the same things, so you will find many interfaces for database objects. If an object implements IDbConnection then it exposes the methods Open() and Close(). So if I want my program to be database-provider agnostic, I program to the interface and not to a specific provider.
IDbConnection connection = GetDatabaseConnectionFromConfig();
connection.Open();
// do stuff
connection.Close();
See, by programming to an interface (IDbConnection), I can now SWAP out any data provider in my config, but my code stays exactly the same. This flexibility can be extremely useful and easy to maintain. The downside is that I can only perform 'generic' database operations and may not fully utilize the strengths that each particular provider offers, so as with everything in programming you have a trade-off, and you must determine which scenario benefits you the most.
Example 2:
If you notice, almost all collections implement this interface called IEnumerable. IEnumerable returns an IEnumerator, which has MoveNext(), Current, and Reset(). This allows C# to easily move through your collection. The reason it can do this is that by exposing the IEnumerable interface, the object is KNOWN to expose the methods needed to go through it. This does two things: 1) foreach loops will now know how to enumerate the collection, and 2) you can now apply powerful LINQ expressions to your collection. Again, the reason interfaces are so useful here is that all collections have something in COMMON: they can be moved through. Each collection may be moved through a different way (linked list vs. array), but that is the beauty of interfaces: the implementation is hidden and irrelevant to the consumer of the interface. MoveNext() gives you the next item in the collection; it doesn't matter HOW it does it. Pretty nice, huh?
Example 3:
When you are designing your own interfaces you just have to ask yourself one question. What do these things have in common? Once you find all the things that the objects share, you abstract those properties/methods into an interface so that each object can inherit from it. Then you can program against several objects using one interface.
And of course I have to give my favorite C++ polymorphic example, the animals example. All animals share certain characteristics. Let's say they can Move, Speak, and they all have a Name. Since I just identified what all my animals have in common, I can abstract those qualities into the IAnimal interface. Then I create a Bear object, an Owl object, and a Snake object, all implementing this interface. The reason you can store different objects together that implement the same interface is that interfaces represent an IS-A relationship. A bear IS-A animal, an owl IS-A animal, so it makes sense that I can collect them all as animals.
var animals = new IAnimal[] { new Bear(), new Owl(), new Snake() }; // different objects collected together because they implement the same interface
foreach (IAnimal animal in animals)
{
    Console.WriteLine(animal.Name);
    animal.Speak(); // a bear growls, an owl hoots, and a snake hisses
    animal.Move();  // the bear runs, the owl flies, the snake slithers
}
You can see that even though these animals perform each action in a different way, I can program against them all in one unified model and this is just one of the many benefits of Interfaces.
So again the most important thing with interfaces is what do objects have in common so that you can program against DIFFERENT objects in the SAME way. Saves time, creates more flexible applications, hides complexity/implementation, models real-world objects / situations, among many other benefits.
Hope this helps.
One typical example is a plugin architecture. Developer A writes the main app, and wants to make certain that all plugins written by developer B, C and D conform to what his app expects of them.
Interfaces define contracts, and that's the key word.
You use an interface when you need to define a contract in your program but you don't really care about the rest of the properties of the class that fulfills that contract as long as it does.
So, let's see an example. Suppose you have a method which provides the functionality to sort a list. First thing... what's a list? Do you really care what elements it holds in order to sort it? Your answer should be no. In .NET (for example) you have an interface called IList which defines the operations that a list MUST support, so you don't care about the actual details underneath the surface.
Back to the example: you don't really know the class of the objects in the list... nor do you care. If you can just compare the objects, you might as well sort them. So you declare a contract:
interface IComparable
{
// Return -1 if this is less than CompareWith
// Return 0 if object are equal
// Return 1 if CompareWith is less than this
int Compare(object CompareWith);
}
That contract specifies that a method which accepts an object and returns an int must be implemented in order for an object to be comparable. Now you have defined a contract, and from now on you don't care about the object itself but about the contract, so you can just do:
IComparable comp1 = list.GetItem(i) as IComparable;
if (comp1.Compare(list.GetItem(i + 1)) < 0)
    swapItem(list, i, i + 1);
PS: I know the examples are a bit naive but they are examples ...
When you need different classes to share the same methods, you use interfaces.
Interfaces are absolutely necessary in an object-oriented system that expects to make good use of polymorphism.
A classic example might be IVehicle, which has a Move() method. You could have classes Car, Bike and Tank, which implement IVehicle. They can all Move(), and you could write code that didn't care what kind of vehicle it was dealing with, just so it can Move().
void MoveAVehicle(IVehicle vehicle)
{
vehicle.Move();
}
The pedals on a car implement an interface. I'm from the US, where we drive on the right side of the road. Our steering wheels are on the left side of the car. The pedals for a manual transmission, from left to right, are clutch -> brake -> accelerator. When I went to Ireland, the driving was reversed. Cars' steering wheels are on the right and they drive on the left side of the road... but the pedals, ah the pedals... they implemented the same interface... all three pedals were in the same order... so even if the class was different and the network that class operated on was different, I was still comfortable with the pedal interface. My brain was able to call my muscles on this car just like on every other car.
Think of the numerous non-programming interfaces we can't live without. Then answer your own question.
Imagine the following basic interface which defines a basic CRUD mechanism:
interface Storable {
function create($data);
function read($id);
function update($data, $id);
function delete($id);
}
From this interface, you can tell that any object implementing it must have functionality to create, read, update and delete data. This could be a database connection, a CSV file reader, an XML file reader, or any other kind of mechanism that might want to use CRUD operations.
Thus, you could now have something like the following:
class Logger {
    private $storage;

    function __construct(Storable $storage) {
        $this->storage = $storage;
    }

    function writeLogEntry() {
        $this->storage->create("I am a log entry");
    }
}
This logger doesn't care if you pass in a database connection, or something that manipulates files on disk. All it needs to know is that it can call create() on it, and it'll work as expected.
The next question to arise from this, then, is: if databases and CSV files, etc., can all store data, shouldn't they inherit from a generic Storable object and thus do away with the need for interfaces? The answer is no... not every database connection might implement CRUD operations, and the same applies to every file reader.
Interfaces define what the object is capable of doing and how you need to use it... not what it is!
Interfaces are a form of polymorphism. An example:
Suppose you want to write some logging code. The logging is going to go somewhere (maybe to a file, or a serial port on the device the main code runs on, or to a socket, or thrown away like /dev/null). You don't know where: the user of your logging code needs to be free to determine that. In fact, your logging code doesn't care. It just wants something it can write bytes to.
So, you invent an interface called "something you can write bytes to". The logging code is given an instance of this interface (perhaps at runtime, perhaps it's configured at compile time. It's still polymorphism, just different kinds). You write one or more classes implementing the interface, and you can easily change where logging goes just by changing which one the logging code will use. Someone else can change where logging goes by writing their own implementations of the interface, without changing your code. That's basically what polymorphism amounts to - knowing just enough about an object to use it in a particular way, while allowing it to vary in all the respects you don't need to know about. An interface describes things you need to know.
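A sketch of that "something you can write bytes to" interface (all names invented):

using System.IO;
using System.Text;

public interface IByteSink
{
    void Write(byte[] bytes);
}

// One implementation appends to a file...
public class FileSink : IByteSink
{
    private readonly string path;
    public FileSink(string path) { this.path = path; }

    public void Write(byte[] bytes)
    {
        using (var stream = new FileStream(path, FileMode.Append))
            stream.Write(bytes, 0, bytes.Length);
    }
}

// ...and another throws everything away, like /dev/null.
public class NullSink : IByteSink
{
    public void Write(byte[] bytes) { }
}

// The logging code knows only that bytes can be written somewhere.
public class Logger
{
    private readonly IByteSink sink;
    public Logger(IByteSink sink) { this.sink = sink; }

    public void Log(string message)
    {
        sink.Write(Encoding.UTF8.GetBytes(message + "\n"));
    }
}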
C's file descriptors are basically an interface "something I can read and/or write bytes from and/or to", and almost every typed language has such interfaces lurking in its standard libraries: streams or whatever. Untyped languages usually have informal types (perhaps called contracts) that represent streams. So in practice you almost never have to actually invent this particular interface yourself: you use what the language gives you.
Logging and streams are just one example - interfaces happen whenever you can describe in abstract terms what an object is supposed to do, but don't want to tie it down to a particular implementation/class/whatever.
There are a number of reasons to do so. When you use an interface, you're ready in the future when you need to refactor/rewrite the code. You can also provide a sort of standardized API for simple operations.
For example, if you want to write a sort algorithm like quicksort, all you need to sort any list of objects is the ability to successfully compare two of the objects. If you create an interface, say ISortable, then anyone who creates objects can implement the ISortable interface and use your sort code.
If you're writing code that uses database storage, and you write to a storage interface, you can replace that code down the line.
Interfaces encourage looser coupling of your code so that you can have greater flexibility.
In an article on my blog, I briefly describe three purposes that interfaces have.
Interfaces may have different purposes:
1. Provide different implementations for the same goal. The typical example is a list, which may have different implementations for different performance use cases (LinkedList, ArrayList, etc.).
2. Allow criteria modification. For example, a sort function may accept a Comparable interface in order to provide any kind of sort criteria, based on the same algorithm.
3. Hide implementation details. This also makes it easier for a user to read the comments, since in the body of the interface there are only methods, fields and comments, no long chunks of code to skip.
Here's the article's full text: http://weblogs.manas.com.ar/ary/2007/11/
The best Java code I have ever seen defined almost all object references as instances of interfaces instead of instances of classes. It is a strong sign of quality code designed for flexibility and change.
As you noted, interfaces are good for when you want to force someone to make it in a certain format.
Interfaces are good when data not being in a certain format can mean making dangerous assumptions in your code.
For example, at the moment I'm writing an application that will transform data from one format into another. I want to force those fields to be present so I know they will exist and have a greater chance of being properly implemented. I don't care if another version comes out and it doesn't compile for them, because it's likely that data is required anyway.
Interfaces are rarely used because of this, since usually you can make assumptions or don't really require the data to do what you need to do.
An interface defines merely the interface. Later, you can define methods (on other classes) which accept interfaces as parameters (or, more accurately, objects which implement that interface). This way your method can operate on a large variety of objects whose only commonality is that they implement that interface.
First, they give you an additional layer of abstraction. You can say "for this function, this parameter must be an object that has these methods with these parameters". And you probably want to also set the meaning of these methods, in somewhat abstracted terms, yet allowing you to reason about the code. In duck-typed languages you get that for free: no need for explicit, syntactic "interfaces". Yet you probably still create a set of conceptual interfaces, something like contracts (as in Design by Contract).
Furthermore, interfaces are sometimes used for less "pure" purposes. In Java, they can be used to emulate multiple inheritance. In C++, you can use them to reduce compile times.
In general, they reduce coupling in your code. That's a good thing.
Your code may also be easier to test this way.
Let's say you want to keep track of a collection of stuff. Said collections must support a bunch of things, like adding and removing items, and checking if an item is in the collection.
You could then specify an interface ICollection with the methods add(), remove() and contains().
Code that doesn't need to know what kind of collection (List, Array, Hash-table, Red-black tree, etc) could accept objects that implemented the interface and work with them without knowing their actual type.
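In sketch form (the interface here is deliberately tiny and hypothetical, not .NET's own ICollection):

// A minimal contract for "a collection of stuff".
public interface IItemCollection<T>
{
    void Add(T item);
    void Remove(T item);
    bool Contains(T item);
}

// This helper works with a list, a hash set, a tree - anything honouring
// the contract - without ever knowing the actual type.
public static class CollectionHelper
{
    public static void AddIfMissing<T>(IItemCollection<T> items, T item)
    {
        if (!items.Contains(item))
            items.Add(item);
    }
}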
In .Net, I create base classes and inherit from them when the classes are somehow related. For example, base class Person could be inherited by Employee and Customer. Person might have common properties like address fields, name, telephone, and so forth. Employee might have its own department property. Customer has other exclusive properties.
Since a class can only inherit from one other class in .NET, I use interfaces for additional shared functionality. Sometimes interfaces are shared by classes that are otherwise unrelated. Using an interface creates a contract that developers will know is shared by all of the other classes implementing it. It also forces those classes to implement all of its members.
In C# interfaces are also extremely useful for allowing polymorphism for classes that do not share the same base classes. Meaning, since we cannot have multiple inheritance you can use interfaces to allow different types to be used. It's also a way to allow you to expose private members for use without reflection (explicit implementation), so it can be a good way to implement functionality while keeping your object model clean.
For example:
public interface IExample
{
void Foo();
}
public class Example : IExample
{
// explicit implementation syntax
void IExample.Foo() { ... }
}
/* Usage */
Example e = new Example();
e.Foo(); // error, Foo does not exist
((IExample)e).Foo(); // success
I think you need to get a good understanding of design patterns to see their power.
Check out
Head First Design Patterns
What can be reasons to prevent a class from being inherited? (e.g. using sealed on a C# class)
Right now I can't think of any.
Because writing classes to be substitutably extended is damn hard and requires you to make accurate predictions of how future users will want to extend what you've written.
Sealing your class forces them to use composition, which is much more robust.
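A sketch of what that composition looks like (names hypothetical): rather than subclassing the sealed type to add behaviour, wrap it.

using System.Collections.Generic;

public sealed class HttpFetcher
{
    public string Fetch(string url) { /* real network code */ return ""; }
}

// Composition: caching is layered on without touching, or subclassing,
// the sealed class.
public class CachingFetcher
{
    private readonly HttpFetcher inner = new HttpFetcher();
    private readonly Dictionary<string, string> cache = new Dictionary<string, string>();

    public string Fetch(string url)
    {
        string body;
        if (!cache.TryGetValue(url, out body))
        {
            body = inner.Fetch(url);
            cache[url] = body;
        }
        return body;
    }
}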
How about if you are not sure about the interface yet and don't want any other code depending on the present interface? [That's off the top of my head, but I'd be interested in other reasons as well!]
Edit:
A bit of googling gave the following:
http://codebetter.com/blogs/patricksmacchia/archive/2008/01/05/rambling-on-the-sealed-keyword.aspx
Quoting:
There are three reasons why a sealed class is better than an unsealed class:
Versioning: When a class is originally sealed, it can change to unsealed in the future without breaking compatibility. (…)
Performance: (…) if the JIT compiler sees a call to a virtual method using a sealed type, the JIT compiler can produce more efficient code by calling the method non-virtually. (…)
Security and Predictability: A class must protect its own state and not allow itself to ever become corrupted. When a class is unsealed, a derived class can access and manipulate the base class’s state if any data fields or methods that internally manipulate fields are accessible and not private.(…)
I want to give you this message from "Code Complete":
Inheritance - subclasses - tends to work against the primary technical imperative you have as a programmer, which is to manage complexity. For the sake of controlling complexity, you should maintain a heavy bias against inheritance.
The only legitimate use of inheritance is to define a particular case of a base class, for example, when you inherit from Shape to derive Circle. To check this, look at the relation in the opposite direction: is a Shape a generalization of a Circle? If the answer is yes, then it is OK to use inheritance.
So if you have a class for which there cannot be any particular cases that specialize its behavior, it should be sealed.
Also, due to the LSP (Liskov Substitution Principle), one can use a derived class where the base class is expected, and this is actually the greatest impact of using inheritance: code using the base class may be given an inherited class, and it still has to work as expected. To protect external code when there is no obvious need for subclasses, you seal the class, and its clients can rely on its behavior not changing. Otherwise, external code needs to be explicitly designed to expect possible changes in behavior in subclasses.
A more concrete example would be the Singleton pattern. You need to seal a singleton to ensure that no one can break its "singletonness".
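For example, a minimal sketch of a sealed singleton:

// Sealed plus a private constructor: no outside construction and no
// subclass that could break the "one instance" guarantee.
public sealed class Configuration
{
    private static readonly Configuration instance = new Configuration();
    public static Configuration Instance { get { return instance; } }

    private Configuration() { }
}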
This may not apply to your code, but a lot of classes within the .NET framework are sealed purposely so that no one tries to create a sub-class.
There are certain situations where the internals are complex and require certain things to be controlled very specifically so the designer decided no one should inherit the class so that no one accidentally breaks functionality by using something in the wrong way.
#jjnguy
Another user may want to re-use your code by sub-classing your class. I don't see a reason to stop this.
If they want to use the functionality of my class they can achieve that with containment, and they will have much less brittle code as a result.
Composition seems to be often overlooked; all too often people want to jump on the inheritance bandwagon. They should not! Substitutability is difficult. Default to composition; you'll thank me in the long run.
I am in agreement with jjnguy... I think the reasons to seal a class are few and far between. Quite the contrary: I have been in the situation more than once where I wanted to extend a class but couldn't, because it was sealed.
As a perfect example, I was recently creating a small package (Java, not C#, but same principles) to wrap functionality around the memcached tool. I wanted an interface so in tests I could mock away the memcached client API I was using, and also so we could switch clients if the need arose (there are 2 clients listed on the memcached homepage). Additionally, I wanted to have the opportunity to replace the functionality altogether if the need or desire arose (such as if the memcached servers are down for some reason, we could potentially hot swap with a local cache implementation instead).
I exposed a minimal interface to interact with the client API, and it would have been awesome to extend the client API class and then just add an implements clause with my new interface. The methods that I had in the interface that matched the actual interface would then need no further details and so I wouldn't have to explicitly implement them. However, the class was sealed, so I had to instead proxy calls to an internal reference to this class. The result: more work and a lot more code for no real good reason.
That said, I think there are potential times when you might want to make a class sealed... and the best thing I can think of is an API that you will invoke directly, but allow clients to implement. For example, a game where you can program against the game... if your classes were not sealed, then the players who are adding features could potentially exploit the API to their advantage. This is a very narrow case though, and I think any time you have full control over the codebase, there really is little if any reason to make a class sealed.
This is one reason I really like the Ruby programming language... even the core classes are open, not just to extend but to ADD AND CHANGE functionality dynamically, TO THE CLASS ITSELF! It's called monkeypatching and can be a nightmare if abused, but it's damn fun to play with!
From an object-oriented perspective, sealing a class clearly documents the author's intent without the need for comments. When I seal a class I am trying to say that this class was designed to encapsulate some specific piece of knowledge or some specific service. It was not meant to be enhanced or subclassed further.
This goes well with the Template Method design pattern. I have an interface that says "I perform this service." I then have a class that implements that interface. But, what if performing that service relies on context that the base class doesn't know about (and shouldn't know about)? What happens is that the base class provides virtual methods, which are either protected or private, and these virtual methods are the hooks for subclasses to provide the piece of information or action that the base class does not know and cannot know. Meanwhile, the base class can contain code that is common for all the child classes. These subclasses would be sealed because they are meant to accomplish that one and only one concrete implementation of the service.
Can you make the argument that these subclasses should be further subclassed to enhance them? I would say no because if that subclass couldn't get the job done in the first place then it should never have derived from the base class. If you don't like it then you have the original interface, go write your own implementation class.
Sealing these subclasses also discourages deep levels of inheritance, which works well for GUI frameworks but poorly for business logic layers.
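A sketch of that arrangement (names hypothetical): the interface promises the service, the abstract base owns the algorithm skeleton, and the concrete hook lives in a sealed leaf class.

public interface IReportService
{
    string BuildReport();
}

public abstract class ReportBase : IReportService
{
    // Template method: the skeleton is fixed for every subclass.
    public string BuildReport()
    {
        return "HEADER\n" + BuildBody() + "\nFOOTER";
    }

    // The hook for the one thing the base class cannot know.
    protected abstract string BuildBody();
}

// Sealed: this is the one and only concrete implementation of this variant.
public sealed class SalesReport : ReportBase
{
    protected override string BuildBody() { return "sales figures..."; }
}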
Because you always want to be handed a reference to the class and not to a derived one for various reasons:
i. invariants that you have in some other part of your code
ii. security
etc
Also, because it's a safe bet with regards to backward compatibility: you'll never be able to close that class for inheritance if it's released unsealed.
Or maybe you didn't have enough time to test the interface that the class exposes to be sure that you can allow others to inherit from it.
Or maybe there's no point (that you see now) in having a subclass.
Or you don't want bug reports when people try to subclass and don't manage to get all the nitty-gritty details - cut support costs.
Sometimes your class interface just isn't meant to be inherited. The public interface just isn't virtual, and while someone could override the functionality that's in place, it would just be wrong. Yes, in general they shouldn't override the public interface, but you can ensure that they don't by making the class non-inheritable.
The examples I can think of right now are customized container classes with deep clones in .NET. If you inherit from them, you lose the deep-clone ability. [I'm kind of fuzzy on this example; it's been a while since I worked with ICloneable.] If you have a true singleton class, you probably don't want inherited forms of it around, and a data persistence layer is not normally a place where you want a lot of inheritance.
Not everything that's important in a class is asserted easily in code. There can be semantics and relationships present that are easily broken by inheriting and overriding methods. Overriding one method at a time is an easy way to do this. You design a class/object as a single meaningful entity and then someone comes along and thinks if a method or two were 'better' it would do no harm. That may or may not be true. Maybe you can correctly separate all methods between private and not private or virtual and not virtual but that still may not be enough. Demanding inheritance of all classes also puts a huge additional burden on the original developer to foresee all the ways an inheriting class could screw things up.
I don't know of a perfect solution. I'm sympathetic to preventing inheritance but that's also a problem because it hinders unit testing.
I exposed a minimal interface to interact with the client API, and it would have been awesome to extend the client API class and then just add an implements clause with my new interface. The methods that I had in the interface that matched the actual interface would then need no further details and so I wouldn't have to explicitly implement them. However, the class was sealed, so I had to instead proxy calls to an internal reference to this class. The result: more work and a lot more code for no real good reason.
Well, there is a reason: your code is now somewhat insulated from changes to the memcached interface.
Performance: (…) if the JIT compiler sees a call to a virtual method using a sealed type, the JIT compiler can produce more efficient code by calling the method non-virtually. (…)
That's a great reason indeed. Thus, for performance-critical classes, sealed and friends make sense.
All the other reasons I've seen mentioned so far boil down to "nobody touches my class!". If you're worried someone might misunderstand its internals, you did a poor job documenting it. You can't possibly know that there's nothing useful to add to your class, or that you already know every imaginable use case for it. Even if you're right and the other developer shouldn't have used your class to solve their problem, using a keyword isn't a great way of preventing such a mistake. Documentation is. If they ignore the documentation, their loss.
Most answers (when abstracted) state that sealed/finalized classes are a tool to protect other programmers against potential mistakes. There is a blurry line between meaningful protection and pointless restriction. But as long as the programmer is the one who is expected to understand the program, I see hardly any reason to restrict him from reusing parts of a class. Most of you talk about classes. But it's all about objects!
In his first post, DrPizza claims that designing an inheritable class means anticipating possible extensions. Do I get it right that you think a class should be inheritable only if it's likely to be extended well? It looks as if you are used to designing software starting from the most abstract classes. Allow me a brief explanation of how I think when designing:
Starting from the very concrete objects, I find characteristics and [thus] functionality that they have in common, and I abstract them into a superclass of those particular objects. This is a way to reduce code duplication.
Unless I am developing some specific product such as a framework, I should care about my own code, not others' (hypothetical) code. The fact that others might find it useful to reuse my code is a nice bonus, not my primary goal. If they decide to do so, it's their responsibility to ensure the validity of extensions. This applies team-wide. Up-front design is crucial to productivity.
Getting back to my idea: your objects should primarily serve your purposes, not some possible shoulda/woulda/coulda functionality of their subtypes. Your goal is to solve the given problem. Object-oriented languages use the fact that many problems (or, more likely, their subproblems) are similar, and therefore existing code can be used to accelerate further development.
Sealing a class forces people who could possibly take advantage of existing code WITHOUT ACTUALLY MODIFYING YOUR PRODUCT to reinvent the wheel. (This is a crucial idea of my thesis: inheriting from a class doesn't modify it! Which seems quite pedestrian and obvious, but it's commonly ignored.)
People are often scared that their "open" classes will be twisted into something that cannot substitute for its ascendants. So what? Why should you care? No tool can prevent a bad programmer from creating bad software!
I'm not trying to present inheritable classes as the ultimately correct way of designing; consider this more an explanation of my inclination toward inheritable classes. That's the beauty of programming: a virtually infinite set of correct solutions, each with its own pros and cons. Your comments and arguments are welcome.
And finally, my answer to the original question: I'd finalize a class to let others know that I consider the class a leaf of the hierarchical class tree and I see absolutely no possibility that it could become a parent node. (And if anyone thinks that it actually could, then either I was wrong or they don't get me).