After reading lots of blogs, forum entries and several Apple docs, I still don't know whether extensive subclassing in Objective-C is a wise thing to do or not.
Take for example the following case:
Say I'm developing a puzzle game which has a lot of elements. All of those elements share a certain amount of the same behaviour. Then, within my collection of elements, different groups of elements share equal behaviour, distinguishing groups from groups, etc.
So, after determining what inherits from what, I decided to subclass out of oblivion. And why shouldn't I? Considering how easy it becomes to tweak general behaviour with this model, I think I accomplished something OOP is meant for.
But - and this is the source of my question - Apple mentions using delegates, data source methods, and informal protocols in favour of subclassing. It really boggles my mind: why?
There seem to be two camps: those in favor of subclassing, and those in favor of not. It apparently depends on personal taste. I'm wondering what the pros and cons are of subclassing massively versus not subclassing massively.
To wrap it up, my question is simple: Am I right? And why or why not?
Delegation is a means of using the composition technique to replace some aspects of coding you would otherwise subclass for. As such, it boils down to the age-old question of whether the task at hand needs one large thing that knows how to do a lot, or a loose network of specialized objects (a very UNIX-like model of responsibility).
Using a combination of delegates and protocols (to define what the delegates are supposed to be able to do) provides a great deal of flexibility of behavior and ease of coding. Going back to that Liskov substitution principle: when you subclass, you have to be careful you don't do anything a user of the whole class would find unexpected. But if you are simply making a delegate object, then you have much less to be responsible for - only that the delegate methods you implement do what that one protocol calls for; beyond that, you don't care.
There are still many good reasons to use subclasses: if you truly have shared behavior and variables between a number of classes, it may make a lot of sense to subclass. But if you can take advantage of the delegate concept, you'll often make your classes easier to extend or use in ways you, the designer, may not have expected.
I tend to be more of a fan of formal protocols than informal ones, not only because formal protocols make sure you have the methods a class treating you as a delegate expects, but also because the protocol definition is a natural place to document what you expect from a delegate that implements those methods.
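To make that concrete, here is a minimal sketch of the formal-protocol-plus-delegate shape described above (the PuzzlePiece and PuzzlePieceDelegate names are invented for the example and are not part of Cocoa; ARC assumed):

```objc
#import <Foundation/Foundation.h>

// The formal protocol spells out exactly what a delegate must (and may) implement.
@protocol PuzzlePieceDelegate <NSObject>
- (void)pieceDidMove:(id)piece;          // required
@optional
- (BOOL)pieceShouldRotate:(id)piece;     // optional veto point
@end

@interface PuzzlePiece : NSObject
// Delegates are conventionally not retained by the object they observe.
@property (nonatomic, weak) id<PuzzlePieceDelegate> delegate;
- (void)rotate;
@end

@implementation PuzzlePiece
- (void)rotate {
    // Only consult the delegate if it implements the optional method.
    if ([self.delegate respondsToSelector:@selector(pieceShouldRotate:)] &&
        ![self.delegate pieceShouldRotate:self]) {
        return;
    }
    // ... perform the rotation, then notify the delegate ...
    [self.delegate pieceDidMove:self];
}
@end
```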
Personally, I follow this rule: I can create a subclass if it respects the Liskov substitution principle.
Subclassing has its benefits, but it also has some drawbacks. As a general rule, I try to avoid implementation inheritance and instead use interface inheritance and delegation.
One of the reasons I do this is that when you inherit implementation, you can wind up with problems if you override methods but don't adhere to their (sometimes undocumented) contract. Additionally, I find walking class hierarchies with implementation inheritance difficult, because methods can be overridden or implemented at any level. Finally, when subclassing you can only widen an interface, you can't narrow it. This leads to leaky abstractions. A good example of this is java.util.Stack, which extends java.util.Vector. I shouldn't be able to treat a stack as a Vector; doing so only allows the consumer to go around the intended interface.
Others have mentioned the Liskov Substitution Principle. I think that using that would certainly have cleared up the java.util.Stack problem, but it can also lead to very deep class hierarchies in order to ensure that classes get only the methods they are supposed to have.
Instead, with interface inheritance there is essentially no class hierarchy because interfaces rarely need to extend one another. The classes simply implement the interfaces that they need to and can therefore be treated in the correct way by the consumer. Additionally, because there is no implementation inheritance, consumers of these classes won't infer their behavior due to previous experience with a parent class.
In the end though, it doesn't really matter which way you go. Both are perfectly acceptable. It's really more a matter of what you're more comfortable with and what the frameworks that you're working with encourage. As the old saying goes: "When in Rome, do as the Romans do."
There's nothing wrong with using inheritance in Objective-C. Apple uses it quite a bit. For instance, in Cocoa Touch, the inheritance tree of UIButton is UIControl : UIView : UIResponder : NSObject.
I think Martin hit on an important point in mentioning the Liskov substitution principle. Also, proper use of inheritance requires that the implementer of the subclass have a deep knowledge of the superclass. If you've ever struggled to extend a non-trivial class in a complex framework, you know that there's always a learning curve. In addition, implementation details of the superclass often "leak through" to the subclass, which is a big pain in the #$& for framework builders.
Apple chose to use delegation in many instances to address these problems; non-trivial classes like UIApplication expose common extension points through a delegate object, so most developers have both an easier learning curve and a more loosely coupled way to add application-specific behavior -- extending UIApplication directly is rarely necessary.
In your case, for your application-specific code, use whichever techniques you're comfortable with and that work best for your design. Inheritance is a great tool when used appropriately.
I frequently see application programmers draw lessons from framework designs and try to apply them to their application code (this is common in the Java, C++ and Python worlds as well as Objective-C). While it's good to think about and understand the choices framework designers made, those lessons don't always apply to application code.
In general you should avoid subclassing API classes if there are delegates, etc., that accomplish what you want to do. In your own code subclassing is often nicer, but it really does depend on your goals; e.g. if you're providing an API, you should provide a delegate-based API rather than assuming subclassing.
When dealing with APIs, subclassing has more potential bugs -- e.g. if any class in the class hierarchy gets a new method that has the same name as your addition, you may break stuff. Also, if you're providing a useful/helper-type function, there's a chance that in the future something similar will be added to the actual class you were subclassing, and that might be more efficient, etc., but your override will hide it.
Please read the Apple documentation "Adding behavior to a Cocoa program". Under the "Inheriting from a Cocoa class" section, see the second paragraph. Apple clearly mentions that subclassing is the primary way of adding application-specific behavior to the framework (please note, FRAMEWORK).
The MVC pattern does not completely disallow the use of subclasses or subtypes. At least I have not seen this recommendation from either Apple or others (if I have missed it, please feel free to point me to the right source of information about this). If you are subclassing API classes only within your application, please go ahead; no one's stopping you, but do take care that it does not break the behavior of the class/API as a whole. Subclassing is a great way of extending the framework APIs' functionality. We see a lot of subclassing within the Apple iOS framework APIs too.
As a developer one has to take care that the implementation is well documented and not duplicated accidentally by another developer. It's another ball game altogether if your application is a set of API classes that you plan to distribute as a reusable component.
IMHO, rather than asking around what the best practice is, first read the related documentation thoroughly, implement and test it. Make your own judgement. You know best about what you're up to.
It's easy for others (like me) to just read stuff from different sources on the Net and throw around terms. Be your own judge; it has worked for me so far.
I really think it depends on what you're trying to do. If the puzzle game you describe in the example really does have a set of unique elements that share common attributes, and there are no provided classes - say, for example, "NSPuzzlePiece" - that fit your needs, then I don't see a problem with subclassing extensively.
In my experience, delegates, data source methods, and informal protocols are much more useful when Apple has provided a class that already does something close to what you want it to do.
For example, say you're building an app that uses a table. There is (and I speak here of the iPhone SDK, since that's where I have experience) a class UITableView that does all the little niceties of creating a table for interaction with the user, and it's much more efficient to define a data source for an instance of UITableView than it is to completely subclass UITableView and redefine or extend its methods to customize its behavior.
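For instance, a rough sketch of that data-source approach might look like the following (the controller class and its items array are made up for the example, but the UITableViewDataSource methods are the standard ones; ARC assumed):

```objc
#import <UIKit/UIKit.h>

// A plain object acting as the table's data source; no UITableView subclass required.
@interface PuzzleListController : NSObject <UITableViewDataSource>
@property (nonatomic, strong) NSArray *items;
@end

@implementation PuzzleListController

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    return [self.items count];
}

- (UITableViewCell *)tableView:(UITableView *)tableView
         cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"Cell"];
    if (cell == nil) {
        cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                      reuseIdentifier:@"Cell"];
    }
    cell.textLabel.text = [self.items objectAtIndex:indexPath.row];
    return cell;
}

@end
```

The table view itself stays stock; all of the customization lives in the object it asks for data.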
Similar concepts go for delegates and protocols. If you can fit your ideas into Apple's classes, then it's usually easier (and will work more smoothly) to do so and use data sources, delegates, and protocols than it is to create your own subclasses. It helps you avoid extra work and wasted time, and is usually less error-prone. Apple's classes have already taken care of the business of making functions efficient and of debugging; the more you can work with them, the fewer mistakes your program will have in the long run.
My impression of ADC's emphasis 'against' subclassing has more to do with the legacy of how the operating system has evolved. Back in the day (Mac Classic, aka OS 9), when C++ was the primary interface to most of the Mac Toolbox, subclassing was the de facto standard for a programmer to modify the behaviour of commonplace OS features (and this was indeed sometimes a pain in the neck, and meant that one had to be very careful that any and all modifications behaved predictably and didn't break any standard behaviour).
This being said, MY IMPRESSION of ADC's emphasis against subclassing is not putting forth a case for designing an application's class hierarchy without inheritance, BUT INSTEAD pointing out that in the new way of doing things (i.e. OS X) there are in most cases more appropriate means of customizing standard behavior without needing to subclass.
So, by all means, design your puzzle program's architecture as robustly as you can, leveraging inheritance as you see fit!
Looking forward to seeing your cool new puzzle application!
|K<
Apple indeed appears to passively discourage subclassing with Objective-C.
It is an axiom of OOP design to favor composition over inheritance.
There is a Design Principle that says Favor composition over inheritance and its advertised benefit is that it simplifies design. Let's agree on that as background for this question.
So, could override be deprecated? Could we, in theory, get rid of it for good?
Let's be a bit overzealous about the above-mentioned Design Principle and take it to the extreme: composition all the way. One reason should be enough for now: override abuse.
One question arises: are we, programmers, going to lose something? Is any power lost in trying to prevent some possible abuse?
So, what applications are there for override and can they be achieved otherwise? Should they?
Not only is this a completely radical and impractical proposal, it's not a particularly compelling one. Just because a feature gets abused doesn't mean that it should be removed entirely. People have been abusing all sorts of things for a very long time, but that hardly implies that they don't serve a useful purpose when used correctly.
Design patterns are one thing; designing an intentionally limited language to conform with your ideal notion of a good design pattern is quite another. To my mind, it's an exercise in futility. Programmers will still find something to abuse.
And I take issue with the central assumption that any use of override is inappropriate or abusive. There are lots of cases where you want to take advantage of inheritance implying an is-a relationship. Sure, this model doesn't fit the real world 100% of the time, but there are plenty of times that it does.
The Animal and Shape class examples that you read about in textbooks might be a bit contrived, but I frequently use inheritance in real-world applications.
That's not to imply that I disagree with the sentiment that one should generally or when in doubt, favor composition over inheritance. But that's not saying that inheritance is bad and should never be used.
If you remove inheritance altogether you remove a significant feature of OOP design.
Using inheritance allows you to use an "is a" design, which has a strong meaning in OOP design, and of course saves code redundancy.
If you used only encapsulation, you'd have to either expose the members (which isn't always what you want, and raises design complexity because of the amount of stuff the programmer needs to know about), or make wrapper methods that call the members' methods (which is redundant).
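As a tiny illustration of that wrapper-method redundancy (hypothetical classes): with pure composition, the outer class has to forward calls it would otherwise have inherited for free.

```objc
#import <Foundation/Foundation.h>

@interface Engine : NSObject
- (void)start;
@end

@implementation Engine
- (void)start { /* spin up */ }
@end

@interface Car : NSObject
@property (nonatomic, strong) Engine *engine;   // composed rather than inherited
- (void)start;                                  // wrapper method
@end

@implementation Car
- (void)start {
    [self.engine start];   // pure forwarding boilerplate
}
@end
```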
Besides that, let's assume you know the difference between overriding and hiding; you can see that most OOP languages will choose to use strictly overriding when given the choice. This is because overriding is usually more intuitive than hiding.
So, if you remove overriding and still allow inheritance, you are left with hiding. That usually leads to many runtime errors and unexpected results from type conflicts.
Furthermore, you won't be able to have things like an array or list of base-class pointers that point to a lot of different derived classes, because without overrides you can't call the specific derived-class method; the same base-class method will be called for all of them.
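A short sketch of that last point, using hypothetical puzzle classes: the same message sent through a base-class pointer resolves to each element's override.

```objc
#import <Foundation/Foundation.h>

@interface PuzzleElement : NSObject
- (void)draw;
@end

@implementation PuzzleElement
- (void)draw { NSLog(@"generic element"); }
@end

@interface CornerPiece : PuzzleElement
@end

@implementation CornerPiece
- (void)draw { NSLog(@"corner piece"); }   // the override dynamic dispatch will pick
@end

int main(void) {
    @autoreleasepool {
        NSArray *board = @[[CornerPiece new], [PuzzleElement new]];
        for (PuzzleElement *element in board) {
            [element draw];   // without overriding, every element would draw the same way
        }
    }
    return 0;
}
```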
I've added a response on behalf of astander, extracted from his link (hope you don't mind):
For example, one advantage with inheritance is that it is easier to use than composition. However, that ease of use comes at the cost that it is harder to reuse, because the subclass is tied to the parent class.
One advantage of composition is that it is more flexible because behavior can be swapped at runtime. One disadvantage of composition is that the behavior of the system may be harder to understand just by looking at the source. These are all factors one should think about when applying composition over inheritance.
I'm always using polymorphism. I always seem to have a bunch of objects with some common concept behind them and a lot of code that is interested in that concept--that is, they care about Animals, not Lions and Tigers and Bears or even Carnivores. Interfaces often work better for this than superclasses, so I suppose I could get by without subclassing. (Are interfaces okay when subclassing is not?) However, I have often found that a lot of classes using an interface have identical code for the interface methods. Changing the interface to a superclass can let me get rid of a lot of duplicate code. The other situation I find myself in is where a large, complex class does what I need except for one teeny, tiny little thing. With subclassing, I can create a new class that does exactly what I need in just a few lines.
There may be a language component to this debate. When I'm writing in Java I subclass at a furious rate. When I'm writing in C# I think long and hard before overriding anything or even using interfaces. I'm not sure why and it may have more to do with the type of work I do in those languages than the languages themselves. But working in C#, I am quite sympathetic to this idea, while when working in Java...well, I'd have to toss almost all my Java code if I couldn't override.
When do you encourage programming against an interface and not directly to a concrete class?
A guideline that I follow is to create abstractions whenever code needs to cross a logical/physical boundary, most especially when infrastructure-related concerns are involved.
Another checkpoint would be if a dependency is likely to change in the future, due to possible additional concerns (such as caching, transactional awareness, or invoking a web service instead of in-process execution), or if such dependencies have direct references to infrastructure integration points.
If code depends on something that does not require control to cross a logical/physical boundary, I more or less don't create abstractions to interact with it.
Am I missing anything?
Also, use interfaces when
Multiple objects will need to be acted upon in a particular fashion, but are not fundamentally related. Perhaps many of your business objects access a particular utility object, and when they do they need to give a reference of themselves to that utility object so the utility object can call a particular method. Have that method in an interface and pass that interface to that utility object.
Passing around interfaces as parameters can be very helpful in unit testing. Even if you have just one type of object that sports a particular interface, and hence don't really need a defined interface, you might define/implement an interface solely to "fake" that object in unit tests (see the sketch after this answer).
Related to the first two bullets, check out the Observer pattern and Dependency Injection. I'm not saying to implement these patterns, but they illustrate the kinds of places where interfaces are really helpful.
Another angle on this is implementing a couple of the SOLID principles: the Open/Closed principle and the Interface Segregation principle. Like the previous bullet, don't get stressed about strictly implementing these principles everywhere (right away, at least), but use these concepts to help move your thinking away from just what objects go where, toward thinking more about contracts and dependencies.
In the end, let's not make it too complicated: we're in a strongly typed world in .NET. If you need to call a method or set a property but the object you're passing/using could be fundamentally different, use an interface.
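On the unit-testing bullet above, here is a rough sketch of the idea, expressed with an Objective-C protocol standing in for the interface (all names are invented for the example): the production object and the test fake conform to the same protocol, so the code under test doesn't care which one it is handed.

```objc
#import <Foundation/Foundation.h>

// The "interface" the code under test depends on.
@protocol PriceSource <NSObject>
- (double)priceForItem:(NSString *)item;
@end

// Production implementation: might hit a database or a web service.
@interface RemotePriceSource : NSObject <PriceSource>
@end

@implementation RemotePriceSource
- (double)priceForItem:(NSString *)item {
    // ... real lookup elided ...
    return 0.0;
}
@end

// Test fake: returns canned data, no external dependencies.
@interface FakePriceSource : NSObject <PriceSource>
@end

@implementation FakePriceSource
- (double)priceForItem:(NSString *)item {
    return 9.99;
}
@end

// Consumers accept id<PriceSource>, so either object can be injected in a test.
```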
I would add that if your code is not going to be referenced by another library (for a while at least), then the decision of whether to use an interface in a particular situation is one that you can responsibly put off. The "extract interface" refactoring is easy to do these days. In my current project, I've got an object being passed around that I'm thinking maybe I should switch to an interface; I'm not stressing about it.
Interface abstractions are convenient when doing unit tests. They help with mocking test objects, and are very useful in TDD for developing without actually using data from your database.
If you don't need any features of the class that aren't found in the interface... then why not always prefer the interface?
It will make your code easier to modify in the future and easier to test (mocking).
You have the right idea already. I would only add a couple of notes to this...
First, abstraction does not mean 'interface'. For example, a "connection string" is an abstraction, even though it's just a string... it's not about the 'type' of the thing in question, it's about the intention of use for that thing.
And secondly, if you are doing test automation of any kind, look for the pain and friction that are exposed by writing the tests. If you find yourself having to set up too many external conditions for a test, it's a sign that you need a better abstraction between the thing you're testing and the things it interacts with.
I think you've said it pretty well. Much of this will be a stylistic thing. There are open source projects I've looked at where everything has an interface and an implementation, and it's kind of frustrating, but it might make iterative development a little easier, since any object's implementation can break but dummies will still work. But honestly, I can dummy any class that doesn't overuse the final keyword by inheritance.
I would add this to your list: anything which can be thought of as a black box should be abstracted. This includes some of the things you've mentioned, but it also includes hairy algorithms, which are likely to have multiple useful implementations with different advantages for different situations.
Additionally, interfaces come in handy very often with composite objects. That's the only way something like Java's Swing library gets anything done, but it can also be useful for more mundane objects. (I personally like having an interface like ValidityChecker with ways to and-compose or or-compose subordinate ValidityCheckers.)
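That ValidityChecker idea might look roughly like this, with an Objective-C protocol standing in for the Java interface (names invented for the example):

```objc
#import <Foundation/Foundation.h>

@protocol ValidityChecker <NSObject>
- (BOOL)isValid:(id)value;
@end

// And-composition: the value is valid only if every subordinate checker agrees.
@interface AndValidityChecker : NSObject <ValidityChecker>
@property (nonatomic, strong) NSArray *checkers;   // objects conforming to ValidityChecker
@end

@implementation AndValidityChecker
- (BOOL)isValid:(id)value {
    for (id<ValidityChecker> checker in self.checkers) {
        if (![checker isValid:value]) {
            return NO;
        }
    }
    return YES;
}
@end
```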
Most of the useful things that come with passing interfaces have already been said. However, I would add:
Implementing an interface on an object, or later on multiple objects, forces all the implementers to follow an identical pattern to fulfil the contract with the object. This can be useful in case you have not-so-OOP-experienced programmers actually writing the implementation code.
In some languages you can add attributes on the interface itself, which can differ in sense and intent from the attributes on the actual object implementation.
My friend and I had a little debate.
I had to implement a "browser process watcher" class which invokes an event whenever the browser that is being watched (let's say Internet Explorer) is running.
We created a "Process watcher" class, and here starts the debate:
He said that the constructor should only accept strings (like "iexplore.exe"), and I said we should inherit from "Process watcher" to create a "browser watcher" which accepts an enum for the currently used browser, which the constructor will "translate" to "iexplore".
He said we should instead use a util function which will act as the translator.
I know both ways are valid and good, but I wonder what the pros and cons of each are, and which is suitable in our case.
Lately I've been taking the approach of "Keep it simple now, and refactor later if you need to extend it".
What you're doing right now seems pretty simple. You only really have one case that you're trying to handle. So I'd say take the simpler approach for now. In the end, if you never have to make another kind of watcher then you'll avoid the extra complexity. However, code it in a way that will make it easier to refactor later if you need to.
In the future, if you find you need another type of watcher, spend the effort then to refactor it into an inheritance (or composition, or whatever other pattern you want to follow). If your initial code is done right the refactoring should be fairly easy, so you're not really adding much extra work.
I've found this approach works fairly well for me. In the cases where I really didn't need inheritance the code stays simple. But when I really do need it I can add it in without any real problems.
Other things being equal I prefer the simpler solution (a single concrete class which takes a string as a constructor parameter) to the more complicated one (using a base class and a subclass).
Inheritance is appropriate when you want to vary behaviour: if the browser watcher will do something that the ordinary process watcher doesn't. But if you only want to vary the value of the data, then just vary the data.
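A hedged sketch of that distinction, written in the Objective-C used earlier in this thread (the class names, the enum, and the process names other than "iexplore.exe" are illustrative): the "browser watcher" adds no new behaviour, it only translates data before handing it to the existing initializer, which is a hint that a plain parameter or helper function would do.

```objc
#import <Foundation/Foundation.h>

// Option 1: one concrete class; the process name is just data.
@interface ProcessWatcher : NSObject
- (instancetype)initWithProcessName:(NSString *)name;
@end

@implementation ProcessWatcher {
    NSString *_processName;
}
- (instancetype)initWithProcessName:(NSString *)name {
    if ((self = [super init])) {
        _processName = [name copy];
    }
    return self;
}
@end

typedef NS_ENUM(NSInteger, Browser) {
    BrowserInternetExplorer,
    BrowserFirefox
};

// Option 2: a subclass whose only job is translating the enum into a name.
@interface BrowserWatcher : ProcessWatcher
- (instancetype)initWithBrowser:(Browser)browser;
@end

@implementation BrowserWatcher
- (instancetype)initWithBrowser:(Browser)browser {
    NSString *name = (browser == BrowserInternetExplorer) ? @"iexplore.exe"
                                                          : @"firefox.exe";
    // No behaviour is specialised here; only the data varies.
    return [self initWithProcessName:name];
}
@end
```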
If you have no other use for ProcessWatcher than to serve as the parent of BrowserWatcher, then you shouldn't create it. If other Watchers are being implemented that have shared functionality that can be placed in ProcessWatcher, then you should (both are "isa" relationships, so Rob's criterion is met).
It really is as simple as that. Arguing that some day you'll have other watchers is not an argument in favor of creating a separate class. It is a mental tic that you should lose ASAP.
Inheritance should only ever be used to implement an "isa" relationship.
As you can say that a "browser watcher" is a specific kind of "process watcher", inheritance is suitable for this architecture.
Hence, for me, having the identity of what you are watching passed through as a part of the construction of the browser watcher implementation of the "process watcher" is definitely the way to go.
Edit: More specifically, inheritance is for when you want to specialise behaviour. For example, most animals make a sound, but you couldn't hope to specify which sound to make in a class called Animal; you must wait for the specialisation.
So then we have Horse class providing a "neigh" for its sound, a Dog class providing a "bark" for its sound, etc.
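Sketched in the Objective-C used elsewhere in this thread (names purely illustrative), that might look like:

```objc
#import <Foundation/Foundation.h>

@interface Animal : NSObject
- (NSString *)sound;   // the behaviour each subclass specialises
@end

@implementation Animal
- (NSString *)sound { return @""; }   // no meaningful default at this level
@end

@interface Horse : Animal
@end

@implementation Horse
- (NSString *)sound { return @"neigh"; }
@end

@interface Dog : Animal
@end

@implementation Dog
- (NSString *)sound { return @"bark"; }
@end
```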
HTH
cheers,
Rob
Depends on what use case you have or what god you follow.
I don't say "inheritance is evil" but generally I follow the principle "Favor composition over inheritance" to avoid excessive class hierarchies.
I agree that in most cases, simplicity over complexity is a good strategy, as long as your simplicity is not too short-sighted (ref. Herms: write code in such a way that you can easily refactor later).
However, I also know how difficult it can be to shut up that bug in your ear that encourages a more thorough design. If you still want the benefits of inheritance without necessarily thinking in terms of "base class" and "subclass", you can simply define an interface (e.g. IProcessWatcher) which is implemented by ProcessWatcher. When you use the ProcessWatcher object, refer to it in terms of the interface, so that if you later decide to create a BrowserWatcher (or any other kind of ProcessWatcher), you can do so without forcing it to descend from ProcessWatcher, as long as it implements the IProcessWatcher interface.
Warning: proceed with caution. It gets tempting to want to define an interface for every single object, and let's face it, that's just ridiculous. =)
Ultimately, you need to find something that you're both comfortable with, since you will both have to maintain this code, and I think this might be a nice compromise, rather than simply "Inheritance or No inheritance".
Good luck!
To put it in one very simple sentence: use inheritance (subclassing) when the subclass has different behaviour (not just different properties) from the superclass.