I come from an ECMAScript background (which seems very C/C++-like). Thus, I learned classic C++-style inheritance involving classes, objects, and cool stuff like polymorphism.
Lately I've gotten into Android and iOS development. Java appears to be C-based, so I'm fine using the majority of my C-based rules there, but iOS is different enough that I'm leery of my usual approaches -- especially with something like object inheritance.
I ask because there seem to be some strong opinions in favor of composition. From what I've seen of composition so far, I'm not the biggest fan unless the project provides a good reason to use it.
You pro Obj-C/iOS devs out there, would you recommend composition over classic inheritance? Or is it a situational thing?
The object model of Objective-C is quite different from that of C/C++/Java. It is message-based, so more emphasis is placed on objects responding to messages than on calling methods as in C/C++/Java.
The approach to the Cocoa libraries favours a flatter object inheritance hierarchy and relies on the delegate pattern for customisation and to keep those object hierarchies flat. Why do this? A lot of libraries, especially GUI, suffer from complications due to hierarchy bloat to the point that it is not clear which class you'd inherit from. Most modern object systems in video games, for example (being my industry of expertise), use composition paradigms where objects are constructed by mixing behaviors as it's more flexible and easier to maintain in practice.
I wouldn't say the Cocoa libraries use the composition model as such; rather, they rely on interaction between classes clearly partitioned into one of the Model-View-Controller areas, and they use delegation for customisation. Keeping customisation separate from the core functionality keeps complexity down and hierarchies flatter.
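To make the delegate pattern concrete, here is a minimal Objective-C sketch; the PuzzleBoard class and its protocol are hypothetical names invented purely for illustration:

#import <Foundation/Foundation.h>

// A hypothetical class with a delegate, sketched for illustration.
@protocol PuzzleBoardDelegate <NSObject>
@optional
- (void)puzzleBoardDidComplete:(id)board;
@end

@interface PuzzleBoard : NSObject
// weak reference to avoid a retain cycle between object and delegate
@property (nonatomic, weak) id<PuzzleBoardDelegate> delegate;
@end

@implementation PuzzleBoard
- (void)finishPuzzle {
    // Customisation happens in the delegate, not in a subclass.
    if ([self.delegate respondsToSelector:@selector(puzzleBoardDidComplete:)]) {
        [self.delegate puzzleBoardDidComplete:self];
    }
}
@end

A client sets the board's delegate to any object it likes; the board stays flat in the hierarchy and never needs to be subclassed for this customisation.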
If a new class generally follows the Liskov Substitution Principle with respect to the class it derives from, use inheritance; otherwise, prefer composition/aggregation. It is not one against the other; both are widely used.
To me, this is more a question of what classes you're using. If you're using Apple's Core Foundation or UI libraries, I would recommend that you avoid subclassing. Many of these classes are not concrete classes but rather class clusters, where the OS may decide at runtime which class to actually use.
In general, I prefer composition unless I have a very compelling reason to subclass. Even when I have a compelling reason to subclass, I often choose instead to create a category of methods for the original class.
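For example, a category can add a method to an existing class without touching its hierarchy at all; this sketch adds a hypothetical -reversedString helper to NSString:

#import <Foundation/Foundation.h>

// A category adding an illustrative convenience method to NSString.
@interface NSString (Reversal)
- (NSString *)reversedString;
@end

@implementation NSString (Reversal)
- (NSString *)reversedString {
    NSMutableString *result = [NSMutableString stringWithCapacity:self.length];
    for (NSInteger i = (NSInteger)self.length - 1; i >= 0; i--) {
        [result appendFormat:@"%C", [self characterAtIndex:(NSUInteger)i]];
    }
    return result;
}
@end

Because the category applies to the class itself, it even works with class clusters, where subclassing is unreliable.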
Related
After reading lots of blogs, forum entries and several Apple docs, I still don't know whether extensive subclassing in Objective-C is a wise thing to do or not.
Take for example the following case:
Say I'm developing a puzzle game which has a lot of elements. All of those elements share a certain amount of the same behaviour. Then, within my collection of elements, different groups of elements share equal behaviour, distinguishing groups from groups, etc...

So, after determining what inherits from what, I decided to subclass out of oblivion. And why shouldn't I? Considering the ease tweaking general behaviour takes with this model, I think I accomplished something OOP is meant for.
But - and this is the source of my question - Apple mentions using delegates, data source methods, and informal protocols in favour of subclassing. It really boggles my mind: why?
There seem to be two camps: those in favor of subclassing, and those in favor of not. It depends on personal taste, apparently. I'm wondering what the pros and cons are of subclassing massively versus not subclassing massively?
To wrap it up, my question is simple: Am I right? And why or why not?
Delegation is a means of using the composition technique to replace some aspects of coding you would otherwise subclass for. As such, it boils down to the age-old question of whether the task at hand needs one large thing that knows how to do a lot, or a loose network of specialized objects (a very UNIX sort of model of responsibility).
Using a combination of delegates and protocols (to define what the delegates are supposed to be able to do) provides a great deal of flexibility of behavior and ease of coding. Going back to the Liskov substitution principle: when you subclass, you have to be careful you don't do anything a user of the whole class would find unexpected. But if you are simply making a delegate object, you have much less to be responsible for: only that the delegate methods you implement do what that one protocol calls for. Beyond that, you don't care.
There are still many good reasons to use subclasses; if you truly have shared behavior and variables between a number of classes, it may make a lot of sense to subclass. But if you can take advantage of the delegate concept, you'll often make your classes easier to extend or use in ways you, the designer, may not have expected.
I tend to be more of a fan of formal protocols than informal ones, not only because formal protocols make sure you have the methods a class treating you as a delegate expects, but also because the protocol definition is a natural place to document what you expect from a delegate that implements those methods.
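For instance, a formal protocol can mark which delegate methods are mandatory and document each one in place; the names here are hypothetical:

// A hypothetical formal delegate protocol. The compiler warns when a
// conforming class omits the @required method.
@protocol GameEngineDelegate <NSObject>
@required
// Called whenever the score changes; delegates must handle this.
- (void)engine:(id)engine scoreDidChangeTo:(NSInteger)score;
@optional
// Called once when the game ends; safe to leave unimplemented.
- (void)engineDidFinish:(id)engine;
@end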
Personally, I follow this rule: I can create a subclass if it respects the Liskov substitution principle.
Subclassing has its benefits, but it also has some drawbacks. As a general rule, I try to avoid implementation inheritance and instead use interface inheritance and delegation.
One of the reasons I do this is that when you inherit implementation, you can wind up with problems if you override methods but don't adhere to their (sometimes undocumented) contract. Additionally, I find walking class hierarchies with implementation inheritance difficult, because methods can be overridden or implemented at any level. Finally, when subclassing you can only widen an interface, never narrow it. This leads to leaky abstractions. A good example is java.util.Stack, which extends java.util.Vector. I shouldn't be able to treat a stack as a Vector; doing so only lets the consumer do an end-run around the interface.
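In Objective-C the same leak can be avoided through composition; here is a minimal sketch (class name invented) of a stack that wraps an NSMutableArray instead of subclassing it:

#import <Foundation/Foundation.h>

// Composition: the array is a hidden member, so clients get only stack
// operations and cannot index into the middle as Stack-extends-Vector allows.
@interface MyStack : NSObject
- (void)push:(id)object;
- (id)pop;
@end

@implementation MyStack {
    NSMutableArray *_storage; // implementation detail, never exposed
}
- (instancetype)init {
    if ((self = [super init])) {
        _storage = [NSMutableArray array];
    }
    return self;
}
- (void)push:(id)object {
    [_storage addObject:object];
}
- (id)pop {
    id top = [_storage lastObject];
    [_storage removeLastObject];
    return top;
}
@end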
Others have mentioned the Liskov Substitution Principle. I think using that would certainly have cleared up the java.util.Stack problem, but it can also lead to very deep class hierarchies in order to ensure that classes get only the methods they are supposed to have.
Instead, with interface inheritance there is essentially no class hierarchy because interfaces rarely need to extend one another. The classes simply implement the interfaces that they need to and can therefore be treated in the correct way by the consumer. Additionally, because there is no implementation inheritance, consumers of these classes won't infer their behavior due to previous experience with a parent class.
In the end though, it doesn't really matter which way you go. Both are perfectly acceptable. It's really more a matter of what you're more comfortable with and what the frameworks you're working with encourage. As the old saying goes: "When in Rome, do as the Romans do."
There's nothing wrong with using inheritance in Objective-C. Apple uses it quite a bit. For instance, in Cocoa Touch, the inheritance tree of UIButton is UIControl : UIView : UIResponder : NSObject.
I think Martin hit on an important point in mentioning the Liskov substitution principle. Also, proper use of inheritance requires that the implementer of the subclass has a deep knowledge of the super class. If you've ever struggled to extend a non-trivial class in a complex framework, you know that there's always a learning curve. In addition, implementation details of the super class often "leak through" to the subclass, which is a big pain in the #$& for framework builders.
Apple chose to use delegation in many instances to address these problems; non-trivial classes like UIApplication expose common extension points through a delegate object so most developers have both an easier learning curve and a more loosely coupled way to add application specific behavior -- extending UIApplication directly is rarely necessary.
In your case, for your application specific code, use which ever techniques you're comfortable with and work best for your design. Inheritance is a great tool when used appropriately.
I frequently see application programmers draw lessons from framework designs and try to apply them to their application code (this is common in the Java, C++ and Python worlds as well as Objective-C). While it's good to think about and understand the choices framework designers made, those lessons don't always apply to application code.
In general you should avoid subclassing API classes if delegates etc. exist that accomplish what you want to do. In your own code subclassing is often nicer, but it really does depend on your goals; e.g. if you're providing an API, you should provide a delegate-based API rather than assuming subclassing.
When dealing with APIs, subclassing has more potential for bugs -- e.g. if any class in the class hierarchy gets a new method with the same name as your addition, you may break stuff. Also, if you're providing a useful helper-type function, there's a chance that something similar will later be added to the actual class you were subclassing; that version might be more efficient, but your override will hide it.
Please read the Apple documentation "Adding Behavior to a Cocoa Program". Under the "Inheriting from a Cocoa Class" section, see the second paragraph. Apple clearly mentions that subclassing is the primary way of adding application-specific behavior to the framework (please note: framework).
The MVC pattern does not completely disallow the use of subclasses or subtypes. At least I have not seen this recommendation from either Apple or others (if I have missed it, please feel free to point me to the right source of information about this). If you are subclassing API classes only within your application, go ahead; no one's stopping you, but do take care that it does not break the behavior of the class/API as a whole. Subclassing is a great way of extending the framework API's functionality. We see a lot of subclassing within Apple's iOS framework APIs too.
As a developer, one has to take care that the implementation is well documented and not accidentally duplicated by another developer. It's another ball game altogether if your application is a set of API classes that you plan to distribute as a reusable component.
IMHO, rather than asking around what the best practice is, first read the related documentation thoroughly, implement and test it. Make your own judgement. You know best about what you're up to.
It's easy for others (like me and so many others) to just read stuff from different sources on the Net and throw terms around. Be your own judge; it has worked for me so far.
I really think it depends on what you're trying to do. If the puzzle game you describe in the example really does have a set of unique elements that share common attributes, and there are no provided classes - say, for example, "NSPuzzlePiece" - that fit your needs, then I don't see a problem with subclassing extensively.
In my experience, delegates, data source methods, and informal protocols are much more useful when Apple has provided a class that already does something close to what you want it to do.
For example, say you're building an app that uses a table. There is (and I speak here of the iPhone SDK, since that's where I have experience) a class UITableView that does all the little niceties of creating a table for interaction with the user, and it's much more efficient to define a data source for an instance of UITableView than it is to completely subclass UITableView and redefine or extend its methods to customize its behavior.
Similar concepts go for delegates and protocols. If you can fit your ideas into Apple's classes, it's usually easier (and will work more smoothly) to use data sources, delegates, and protocols than to create your own subclasses. It helps you avoid extra work and wasted time, and is usually less error-prone. Apple's classes have already taken care of the business of making functions efficient and debugging them; the more you can work with them, the fewer mistakes your program will have in the long run.
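As a sketch of how little code the data-source route takes, here are the two required UITableViewDataSource methods; self.items (an NSArray of strings) and the "Cell" reuse identifier are assumptions for illustration:

#import <UIKit/UIKit.h>

// Inside a view controller that adopts UITableViewDataSource.
// self.items is an assumed NSArray of NSString objects.
- (NSInteger)tableView:(UITableView *)tableView
 numberOfRowsInSection:(NSInteger)section {
    return (NSInteger)self.items.count;
}

- (UITableViewCell *)tableView:(UITableView *)tableView
         cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    // Assumes a cell class was registered for the "Cell" identifier.
    UITableViewCell *cell =
        [tableView dequeueReusableCellWithIdentifier:@"Cell"
                                        forIndexPath:indexPath];
    cell.textLabel.text = self.items[indexPath.row];
    return cell;
}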
My impression of ADC's emphasis 'against' subclassing has more to do with the legacy of how the operating system has evolved. Back in the day (Mac Classic, a.k.a. OS 9), when C++ was the primary interface to most of the Mac Toolbox, subclassing was the de facto standard for a programmer to modify the behaviour of commonplace OS features (and this was indeed sometimes a pain in the neck, and meant that one had to be very careful that any and all modifications behaved predictably and didn't break any standard behaviour).
That being said, MY IMPRESSION of ADC's emphasis against subclassing is not putting forth a case for designing an application's class hierarchy without inheritance, BUT INSTEAD pointing out that in the new way of doing things (i.e. OS X) there are in most cases more appropriate means of customizing standard behavior without needing to subclass.
So, by all means, design your puzzle program's architecture as robustly as you can, leveraging inheritance as you see fit!
Looking forward to seeing your cool new puzzle application!
|K<
Apple indeed appears to passively discourage subclassing with Objective-C.
It is an axiom of OOP design to favor composition over inheritance.
The purpose of protocols is to abstract the methods of classes which are not hierarchically related.
Couldn't similar things also be done with the help of a class (interface) which encompasses all those methods, and by subclassing it? (This is not really possible due to the multiple inheritance problem, since a class already has to derive from NSObject; ignore the NSProxy case.)
What special things can protocols do that a class can't?
Are protocols trying to solve only the multiple inheritance problem?
Protocols' main advantage is that they describe what an object should be able to do, without enforcing subclassing. In languages that don't have multiple inheritance, such a mechanism is needed if you want other programmers to be able to use your classes (see delegation).
For instance, Java has something similar, called interfaces.
This is a huge advantage, as it makes it very easy to build dynamic systems: I can allow other developers to enhance my classes via a clearly defined protocol.
A practical example:
I am just designing a REST API and I am providing an Objective-C client library.
As my API requires information about the user, I add a protocol:
@protocol VSAPIClientUser <NSObject>
- (NSString *)lastName;
- (NSString *)firstName;
- (NSString *)uuid;
@end
Anywhere I need this user information, I take a basic id object that must conform to this protocol:

- (void)addUserWithAttributes:(id<VSAPIClientUser>)user;

You can read this line as: "I don't care what kind of object you provide here, as long as it knows about lastName, firstName and uuid." I have no idea what the rest of that object looks like, and I don't care.
As the library author I can use this safely:
NSDictionary *userAttributes = @{@"last_name" : [user lastName],
                                 @"first_name": [user firstName],
                                 @"uuid"      : [user uuid]};
BTW: I wouldn't call the absence of multiple inheritance a problem. It is just another design.
“[…] If I revisited that decision today, I might even go so far as to remove single inheritance as well. Inheritance just isn’t all that important. Encapsulation is OOP’s lasting contribution.” — Brad Cox, when asked why Objective-C doesn’t have multiple inheritance (Masterminds of Programming: Conversations with the Creators of Major Programming Languages, p. 259)
As an alternative view....
Object-oriented programming's most basic value comes from being able to model real-world relationships directly as opposed to translating them into abstract and vaguely-equivalent computer-world constructs. Wherever a language requires you to think about the implementation of a solution in different terms than those you can use to describe your problem, it is flawed as an OOP tool. (Note that I didn't say 'useless'. :) )
Real-world objects have various roles that depend on context. Those roles can have state. Therefore, I agree that lack of multiple-inheritance is an impediment to ease of modelling. Objective-C protocols, Java interfaces, and the claim that you should prefer composition to inheritance are all denials of a fundamental part of the OOP advantage.
One of the many uses of C++ abstract classes is to define interfaces (to specify reusable contracts). There are, however, other programming languages, such as Objective-C, that have a separate concept for interfaces in this sense; in Objective-C, it is called protocols.
A wide use of such a construct does require a way of attaching more than one contract to an object; and if such interfaces are allowed to inherit from one another, this has to be multiple inheritance to be useful.
However, this is not the same thing as multiple inheritance between classes.
Protocols are not trying to solve the multiple inheritance problem. They are trying to separate contract specification from object (data+code) specification. They can actually do much less than a class (if you ignore the multiple inheritance aspect) and that's why they exist as a separate concept.
Implementing a protocol is generally a much less restrictive (safer) proposition to consider than inheriting from a class.
There is a design principle that says "favor composition over inheritance", and its advertised benefit is that it simplifies design. Let's agree on that as background for this question.
So, could override be deprecated? Could we, in theory, get rid of it for good?
Let's be a bit overzealous about the above-mentioned design principle and take it to the extreme: composition all the way. One reason should be enough for now: override abuse.
One question arises: are we, programmers, going to lose something? Is any power lost in trying to prevent some possible abuse?
So, what applications are there for override and can they be achieved otherwise? Should they?
Not only is this a completely radical and impractical proposal, it's not a particularly compelling one. Just because a feature gets abused doesn't mean that it should be removed entirely. People have been abusing all sorts of things for a very long time, but that hardly implies that they don't serve a useful purpose when used correctly.
Design patterns are one thing; designing an intentionally limited language to conform with your ideal notion of a good design pattern is quite another. To my mind, it's an exercise in futility. Programmers will still find something to abuse.
And I take issue with the central assumption that any use of override is inappropriate or abusive. There are lots of cases where you want to take advantage of inheritance implying an is-a relationship. Sure, this model doesn't fit the real world 100% of the time, but there are plenty of times that it does.
The Animal and Shape class examples that you read about in textbooks might be a bit contrived, but I frequently use inheritance in real-world applications.
That's not to imply that I disagree with the sentiment that one should generally or when in doubt, favor composition over inheritance. But that's not saying that inheritance is bad and should never be used.
If you remove inheritance altogether you remove a significant feature of OOP design.
Using inheritance allows you to use an "is a" design, which has a strong meaning in OOP design and, of course, avoids code redundancy.
If you used only encapsulation, you'd have to either expose the members (which isn't always what you want, and raises design complexity because of the amount of stuff the programmer needs to know about), or write wrapper methods that call the members' methods (which is redundant).
Besides that, let's assume you know the difference between overriding and hiding; you can see that most OOP languages choose strict overriding when given the choice.
This is because overriding is usually more intuitive than hiding.
So, if you remove overriding and still allow inheritance, you are left with hiding. That usually leads to many runtime errors and unexpected results from type conflicts.
Furthermore, you won't be able to have things like an array or list of base-class pointers that point to a lot of different derived classes, because without overrides a call through the base class can't reach the specific derived-class method; it will only call the same base-class method for all of them.
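A short Objective-C sketch of what overriding buys here, using the usual textbook animals (all names illustrative):

#import <Foundation/Foundation.h>

@interface Animal : NSObject
- (NSString *)speak;
@end
@implementation Animal
- (NSString *)speak { return @"..."; }
@end

@interface Dog : Animal
@end
@implementation Dog
- (NSString *)speak { return @"Woof"; } // override
@end

@interface Cat : Animal
@end
@implementation Cat
- (NSString *)speak { return @"Meow"; } // override
@end

static void demo(void) {
    // One loop over base-class pointers dispatches to each subclass.
    NSArray *animals = @[[Dog new], [Cat new]];
    for (Animal *a in animals) {
        NSLog(@"%@", [a speak]); // "Woof", then "Meow"
    }
}

Without overriding, every element of the array would answer with the base implementation.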
I've added a response on behalf of astander, extracted from his link (hope you don't mind):
For example, one advantage with inheritance is that it is easier to use than composition. However, that ease of use comes at the cost that it is harder to reuse because the subclass is tied to the parent class.

One advantage of composition is that it is more flexible because behavior can be swapped at runtime. One disadvantage of composition is that the behavior of the system may be harder to understand just by looking at the source. These are all factors one should think about when applying composition over inheritance.
I'm always using polymorphism. I always seem to have a bunch of objects with some common concept behind them and a lot of code that is interested in that concept--that is, they care about Animals, not Lions and Tigers and Bears or even Carnivores. Interfaces often work better for this than superclasses, so I suppose I could get by without subclassing. (Are interfaces okay when subclassing is not?) However, I have often found that a lot of classes using an interface have identical code for the interface methods. Changing the interface to a superclass can let me get rid of a lot of duplicate code. The other situation I find myself in is where a large, complex class does what I need except for one teeny, tiny little thing. With subclassing, I can create a new class that does exactly what I need in just a few lines.
There may be a language component to this debate. When I'm writing in Java I subclass at a furious rate. When I'm writing in C# I think long and hard before overriding anything or even using interfaces. I'm not sure why and it may have more to do with the type of work I do in those languages than the languages themselves. But working in C#, I am quite sympathetic to this idea, while when working in Java...well, I'd have to toss almost all my Java code if I couldn't override.
This article describes an approach to OOP I find interesting:
What if objects exist as encapsulations, and they communicate via messages? What if code re-use has nothing to do with inheritance, but uses composition, delegation, even old-fashioned helper objects or any technique the programmer deems fit? The ontology does not go away, but it is decoupled from the implementation.
The idea of reuse without inheritance or dependence to a class hierarchy is what I found most astounding, but how feasible is this?
Examples were given, but I can't quite see how I can change my current code to adopt this approach.
So how feasible is this approach? Or is there really no need to change existing code, but rather a scenario-based rule of "use it only when needed or optimal"?
EDIT: oops, I forgot the link: here it is link
I'm sure you've heard of "always prefer composition over inheritance".
The basic idea of this premise is multiple objects with different functionalities are put together to create one fully-featured object. This should be preferred over inheriting functionality from disparate objects that have nothing to do with each other.
The main argument regarding this is contained in the definition of the Liskov Substitution Principle and playfully illustrated by this poster:
If you had a ToyDuck object, which object should you inherit from, from a purely inheritance standpoint? Should you inherit from Duck? No -- most likely you should inherit from Toy.
The bottom line is that you should be using the correct method of abstraction -- whether inheritance or composition -- for your code.
For your current objects, consider whether there are objects that ought to be removed from the inheritance tree and included merely as a property that you can call and invoke.
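Translating the ToyDuck example into Objective-C, the duck-like behavior becomes a property rather than a superclass; all class names here are hypothetical:

#import <Foundation/Foundation.h>

@interface QuackSound : NSObject
- (void)play;
@end
@implementation QuackSound
- (void)play { NSLog(@"Quack!"); }
@end

@interface Toy : NSObject
@end
@implementation Toy
@end

// ToyDuck is-a Toy; its duck-ness is composed in, not inherited from Duck.
@interface ToyDuck : Toy
@property (nonatomic, strong) QuackSound *sound;
- (void)squeeze;
@end

@implementation ToyDuck
- (void)squeeze {
    [self.sound play]; // forward the behavior to the composed object
}
@end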
Inheritance is not well suited for code reuse. Inheriting for code reuse usually leads to:

- Classes with inherited methods that must not be called on them (violating the Liskov substitution principle), which confuses programmers and leads to bugs.
- Deep hierarchies where it takes an inordinate amount of time to find the method you need, since it can be declared anywhere in a dozen or more classes.
Generally the inheritance tree should not get more than two or three levels deep and usually you should only inherit interfaces and abstract base classes.
There is, however, no point in rewriting existing code just for the sake of it. But when you need to modify code, try to switch to composition where possible. That will usually allow you to make changes in smaller pieces, since there will be less coupling between the classes.
I just skimmed the text, but it seems to say what OO design has always been about: inheritance is not meant as a code-reuse tool, and loose coupling is good. This has been written dozens of times before; see the linked references at the bottom of the article. This does not mean you should skip inheritance entirely, you just have to use it consciously and only when it makes sense. The article also states this.
As for the duck typing, I find the examples and thoughts questionable. Like this one:
function good(foo) {
    if (!foo.baz || !foo.quux) {
        throw new TypeError("We need foo to have baz and quux methods.");
    }
    return foo.baz(foo.quux(10));
}
What’s the point in adding three new lines just to report an error that would be reported by the runtime automatically?
Inheritance is fundamental:

- No inheritance, no OOP.
- Prototyping and delegation can be used to effect inheritance (like in JavaScript), which is fine, and is functionally equivalent to inheritance.
- Objects, messages, and composition but no inheritance is object-based, not object-oriented. VB5, not Java. Yes, it can be done; plan on writing a lot of boilerplate code to expose interfaces and forward operations.
Those that insist inheritance is unnecessary, or that it is 'bad' are creating strawmen: it is easy to imagine scenarios where inheritance is used badly; this is not a reflection on the tool, but on the tool-user.