From Apple's own website: "At the heart of Swift's design are two incredibly powerful ideas: protocol-oriented programming and first class value semantics."
Can someone please elaborate what exactly is protocol oriented programming, and what added value does it bring?
I have read this and watched the Protocol-Oriented Programming in Swift video, but coming from an Objective-C background still haven't understood it. I kindly ask for a very plain English answer along with code snippets & technical details about how it's different from Objective-C.
Just one of the confusions I have is the use of <tableViewDelegate, CustomDelegate>. Couldn't we also conform to multiple protocols in Objective-C? So again, what is new in Swift?
EDIT: See Protocol-Oriented Views video. I find this video to be more basic and easier to grasp a meaningful use case. The WWDC video itself is a bit advanced and requires more breadth. Additionally the answers here are somewhat abstract.
Preface: POP and OOP are not mutually exclusive. They are closely related design paradigms.
The primary aspect of POP over OOP is that it prefers composition over inheritance. There are several benefits to this.
In large inheritance hierarchies, the ancestor classes tend to contain most of the (generalized) functionality, with the leaf subclasses making only minimal contributions. The issue is that the ancestor classes end up doing a lot of things. A Car, for example, drives, stores cargo, seats passengers, plays music, etc. Each of these functionalities is quite distinct, yet they all get indivisibly lumped into the Car class. Descendants of Car, such as Ferrari, Toyota, and BMW, all make minimal modifications to this base class.
The consequence is reduced code reuse. My BoomBox also plays music, but it's not a car, so inheriting the music-playing functionality from Car isn't possible.
What Swift encourages instead is that these large monolithic classes be broken down into a composition of smaller components. These components can then be more easily reused. Both Car and BoomBox can use MusicPlayer.
Swift offers multiple features to achieve this, but the most important by far are protocol extensions. They allow the implementation of a protocol to exist separately from its conforming types, so that many types can simply adopt the protocol and instantly gain its functionality.
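A minimal sketch of that idea (MusicPlayer, Car, and BoomBox are invented names for illustration):

protocol MusicPlayer {
    var playlist: [String] { get }
    func play()
}

// The protocol extension supplies the implementation once.
extension MusicPlayer {
    func play() {
        playlist.forEach { print("Now playing: \($0)") }
    }
}

// Both types gain music playing without sharing an ancestor.
struct Car: MusicPlayer {
    let playlist = ["Road Trip Mix"]
}

struct BoomBox: MusicPlayer {
    let playlist = ["Street Beats"]
}

Car().play()      // Now playing: Road Trip Mix
BoomBox().play()  // Now playing: Street Beats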
It surprised me that none of the answers mentioned value types in POP.
To understand what protocol-oriented programming is, you need to understand the drawbacks of object-oriented programming.
Objective-C has only single inheritance. With a very complicated inheritance hierarchy, the bottom class may carry a lot of unnecessary state.
It uses classes, which are reference types. Reference types can make code unsafe, e.g. when processing a collection of reference types while they are being modified.
In protocol-oriented programming in Swift, by contrast:
A type can conform to multiple protocols.
Protocols can be adopted not only by classes, but also by structures and enumerations.
Protocol extensions give common functionality to all types that conform to a protocol.
It prefers value types over reference types. Have a look at the Swift standard library: the majority of types are structures, i.e. value types. That doesn't mean you never use classes; in some situations you have to. (A short sketch of these points follows.)
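A rough sketch of those points, using an invented Describable protocol:

// One protocol, one shared default implementation...
protocol Describable {
    var name: String { get }
    func describe() -> String
}

extension Describable {
    func describe() -> String { "This is \(name)" }
}

// ...usable by a struct and an enum alike, not just classes.
struct City: Describable {
    let name: String
}

enum Planet: String, Describable {
    case earth, mars
    var name: String { rawValue }
}

print(City(name: "Oslo").describe())  // This is Oslo
print(Planet.mars.describe())         // This is mars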
So protocol-oriented programming is just another programming paradigm that tries to solve the drawbacks of OOP.
In Objective-C, a protocol is the same thing as an interface in most languages, so its usage is limited to the SOLID principle "Depend upon abstractions. Do not depend upon concretions."
In Swift, protocols were improved so substantially that, while they can still be used as plain interfaces, they are in fact closer to classes (like abstract classes in C++).
In Objective-C, the only way to share functionality between classes is inheritance, and you can inherit from only one parent class. In Swift you can also adopt as many protocols as you want, and since protocols in Swift can have default method implementations, they give us fully functional multiple inheritance. More flexibility, better code reuse - awesome! (See the sketch below.)
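A rough sketch of that effect (Honking, Logging, and Truck are made-up names):

// Two independent capabilities, each with a default implementation.
protocol Honking {
    func honk()
}
extension Honking {
    func honk() { print("Honk!") }
}

protocol Logging {
    func log(_ message: String)
}
extension Logging {
    func log(_ message: String) { print("[LOG] \(message)") }
}

// One type adopts both and gains behavior from each -
// the effect of multiple inheritance, without its ambiguities.
struct Truck: Honking, Logging {}

let truck = Truck()
truck.honk()            // Honk!
truck.log("Engine on")  // [LOG] Engine on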
Conclusion:
Protocol Oriented Programming is mostly the same as OOP but it pays additional attention to functionality sharing not only via inheritance but also via protocol adoption (Composition over inheritance).
It is worth mentioning that abstract classes in C++ are very similar to protocols in Swift, but no one says C++ supports some specific type of OOP. So, speaking of programming paradigms, POP is in general one of the versions of OOP. For Swift, POP is an improved version of OOP.
Adding to the above answer
A protocol is an interface in which the signatures of methods and properties are declared. Any class, struct, or enum that adopts the protocol must obey the contract: it has to implement all the methods and properties declared in the protocol.
Reasons to use protocols
Classes provide only single inheritance, and structs don't support inheritance at all. Thus protocols were introduced.
Extensions: The methods declared inside a protocol can be implemented inside an extension, to avoid duplicating code when the protocol is adopted by multiple classes or structs that share the same method implementation. We can then call the method simply by creating an instance of the struct or enum. We can even restrict an extension to a list of classes: only the restricted classes can use the method implemented inside the extension, while the rest have to implement the method themselves.
Example
protocol Validator {
    var id: String { get }
    func capitalise() -> String
}

// Restricted extension: only Test (and its subclasses) get this default implementation.
extension Validator where Self: Test {
    func capitalise() -> String {
        return id.capitalized
    }
}

class Test: Validator {
    var id: String
    init(name: String) {
        id = name
    }
}

let t = Test(name: "Ankit")
t.capitalise()  // "Ankit"
When to use: In OOP, suppose we have a Vehicle base class inherited by Airplane, Bike, and Car. Here, brake and accelerate may be common methods among the three subclasses, but not the fly method of Airplane. If we declare fly in the base class anyway, the Bike and Car subclasses also inherit a fly method that is of no use to them. In POP we can instead declare two protocols, one for flyable objects and the other for braking and acceleration, and the flyable protocol can be restricted so that only Airplane adopts it, as sketched below.
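A minimal sketch of that split (all names invented for illustration):

// Shared behavior for all vehicles.
protocol Drivable {
    func brake()
    func accelerate()
}

// Flying is a separate capability, adopted only where it applies.
protocol Flyable {
    func fly()
}

struct Bike: Drivable {
    func brake() { print("Bike braking") }
    func accelerate() { print("Bike accelerating") }
}

struct Airplane: Drivable, Flyable {
    func brake() { print("Airplane braking") }
    func accelerate() { print("Airplane accelerating") }
    func fly() { print("Airplane flying") }
}
// A Bike or Car never inherits an unusable fly() method.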
Protocol Oriented Programming (POP)
protocol-first approach
Protocols as a key point of the OOP concepts: abstraction, inheritance, polymorphism, encapsulation.
Protocols as a base for SOLID.
Protocols instead of a class hierarchy tree. Class inheritance is hard to support, and it also has some performance impact.
A class or struct can implement multiple protocols (a kind of multiple inheritance).
Composition over inheritance.
extension MyClass: MyProtocol {
}
Default methods: a shared implementation for all implementers.
extension MyProtocol {
func foo() {
//logic
}
}
Protocol inheritance: one protocol can extend another. An implementer of the second protocol must implement everything from both the first and the second protocol.
protocol ProtocolB: ProtocolA {
}
Value types can implement protocols, just as reference types can.
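For instance, a tiny sketch with an invented Resettable protocol:

protocol Resettable {
    mutating func reset()
}

// A struct (value type) conforms just like a class would.
struct Counter: Resettable {
    var count = 0
    mutating func reset() { count = 0 }
}

var counter = Counter()
counter.reset()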
Protocol Oriented Programming (POP)
Introduced in Swift 2.0.
class (OOP)
is a reference type
risks memory leaks, incorrectly shared data, and race conditions in complex multi-threaded environments
can grow large by inheriting members from its whole chain of superclasses
struct (POP)
is a value type - a fresh copy is made each time one is needed
provides a form of multiple inheritance - it can adopt multiple protocols
Protocol:
defines which methods, properties, and initializers are required
can inherit from other protocol(s)
requires no override keyword when implementing protocol functions
Extensions:
provide default values and default implementations for a protocol
can add extra members to a protocol
What is protocol-oriented programming? What is POP?
It is a new programming paradigm.
We start designing our system by defining protocols, and we rely on new concepts: protocol extensions, protocol inheritance, and protocol composition (see the sketch below).
Value types can conform to protocols, even multiple protocols. Thus, with POP, value types like enums and structs have become first-class citizens in Swift.
POP lets you add abilities to a class, struct, or enum through protocols, and it supports multiple implementations.
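A small sketch of protocol composition (the names are illustrative):

protocol Named {
    var name: String { get }
}

protocol Aged {
    var age: Int { get }
}

struct Person: Named, Aged {
    let name: String
    let age: Int
}

// Protocol composition: the parameter only needs to be Named AND Aged,
// no common base class required.
func greet(_ celebrant: Named & Aged) {
    print("Happy birthday, \(celebrant.name), you're \(celebrant.age)!")
}

greet(Person(name: "Malcolm", age: 21))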
Apple tells us:
“Don’t start with a class, start with a protocol.”
Why? Protocols serve as better abstractions than classes.
Protocols are a fundamental feature of Swift. They play a leading role in the structure of the Swift standard library and are a common method of abstraction.
Protocols are used to define a “blueprint of methods, properties, and other requirements that suit a particular task or piece of functionality.”
Benefits of Protocol-Oriented Programming:
All classes are decoupled from each other
Separating the concerns of declaration from implementation
Reusability
Testability
After reading lots of blogs, forum entries and several Apple docs, I still don't know whether extensive subclassing in Objective-C is a wise thing to do or not.
Take for example the following case:
Say I'm developing a puzzle game which has a lot of elements. All of those elements share a certain amount of the same behaviour. Then, within my collection of elements, different groups of elements share equal behaviour, distinguishing groups from groups, etc...
So, after determining what inherits from what, I decided to subclass out of oblivion. And why shouldn't I? Considering the ease tweaking general behaviour takes with this model, I think I accomplished something OOP is meant for.
But - and this is the source of my question - Apple mentions using delegates, data source methods, and informal protocols in favour of subclassing. It really boggles my mind: why?
There seem to be two camps: those in favor of subclassing, and those against it. Apparently it depends on personal taste. I'm wondering: what are the pros and cons of subclassing massively versus not subclassing massively?
To wrap it up, my question is simple: Am I right? And why or why not?
Delegation is a means of using the composition technique to replace some aspects of coding you would otherwise subclass for. As such, it boils down to the age-old question of whether the task at hand needs one large thing that knows how to do a lot, or a loose network of specialized objects (a very UNIX sort of model of responsibility).
Using a combination of delegates and protocols (to define what the delegates are supposed to be able to do) provides a great deal of behavioral flexibility and ease of coding. Going back to the Liskov substitution principle: when you subclass, you have to be careful you don't do anything a user of the whole class would find unexpected. But if you are simply making a delegate object, you have much less to be responsible for - only that the delegate methods you implement do what that one protocol calls for; beyond that, you don't care.
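To make the pattern concrete, here is a minimal sketch in Swift (the discussion above is about Objective-C; Downloader and DownloaderDelegate are invented names):

// A formal protocol documents exactly what the delegate must provide.
protocol DownloaderDelegate: AnyObject {
    func downloader(_ downloader: Downloader, didFinish data: String)
}

class Downloader {
    // weak to avoid a retain cycle between the two objects
    weak var delegate: DownloaderDelegate?

    func start() {
        // ...work happens here...
        delegate?.downloader(self, didFinish: "payload")
    }
}

class ViewController: DownloaderDelegate {
    func downloader(_ downloader: Downloader, didFinish data: String) {
        print("Got \(data)")  // responsible only for this one method
    }
}

let vc = ViewController()
let downloader = Downloader()
downloader.delegate = vc
downloader.start()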
There are still many good reasons to use subclasses, if you truly have shared behavior and variables between a number of classes it may make a lot of sense to subclass. But if you can take advantage of the delegate concept you'll often make your classes easier to extend or use in ways you the designer may not have expected.
I tend to be more of a fan of formal protocols than informal ones, because not only do formal protocols make sure you have the methods a class treating you as a delegate expect, but also because the protocol definition is a natural place to document what you expect from a delegate that implements those methods.
Personally, I follow this rule: I can create a subclass if it respects the Liskov substitution principle.
Subclassing has its benefits, but it also has some drawbacks. As a general rule, I try to avoid implementation inheritance and instead use interface inheritance and delegation.
One of the reasons I do this is that when you inherit implementation, you can wind up with problems if you override methods but don't adhere to their (sometimes undocumented) contract. Additionally, I find walking class hierarchies with implementation inheritance difficult, because methods can be overridden or implemented at any level. Finally, when subclassing you can only widen an interface, you can't narrow it. This leads to leaky abstractions. A good example is java.util.Stack, which extends java.util.Vector. I shouldn't be able to treat a stack as a Vector; doing so only lets the consumer do an end-run around the interface.
Others have mentioned the Liskov Substitution Principle. I think using that would certainly have cleared up the java.util.Stack problem, but it can also lead to very deep class hierarchies in order to ensure that classes get only the methods they are supposed to have.
Instead, with interface inheritance there is essentially no class hierarchy because interfaces rarely need to extend one another. The classes simply implement the interfaces that they need to and can therefore be treated in the correct way by the consumer. Additionally, because there is no implementation inheritance, consumers of these classes won't infer their behavior due to previous experience with a parent class.
In the end though, it doesn't really matter which way you go. Both are perfectly acceptable. It's really more a matter of what you're more comfortable with and what the frameworks that you're working with encourage. As the old saying goes: "When in Rome do as Romans do."
There's nothing wrong with using inheritance in Objective-C. Apple uses it quite a bit. For instance, in Cocoa-touch, the inheritance tree of UIButton is UIControl : UIView : UIResponder : NSObject.
I think Martin hit on an important point in mentioning the Liskov substitution principle. Also, proper use of inheritance requires that the implementer of the subclass has a deep knowledge of the super class. If you've ever struggled to extend a non-trivial class in a complex framework, you know that there's always a learning curve. In addition, implementation details of the super class often "leak through" to the subclass, which is a big pain in the #$& for framework builders.
Apple chose to use delegation in many instances to address these problems; non-trivial classes like UIApplication expose common extension points through a delegate object so most developers have both an easier learning curve and a more loosely coupled way to add application specific behavior -- extending UIApplication directly is rarely necessary.
In your case, for your application specific code, use which ever techniques you're comfortable with and work best for your design. Inheritance is a great tool when used appropriately.
I frequently see application programmers draw lessons from framework designs and trying to apply them to their application code (this is common in Java, C++ and Python worlds as well as Objective-C). While it's good to think about and understand the choices framework designers made, those lessons don't always apply to application code.
In general you should avoid subclassing API classes if there exist delegates, etc that accomplish what you want to do. In your own code subclassing is often nicer, but it really does depend on your goals, eg. if you're providing an API you should provide a delegate based API rather than assuming subclassing.
When dealing with APIs, subclassing has more potential bugs - e.g. if any class in the class hierarchy gets a new method that has the same name as your addition, you may break stuff. Also, if you're providing a useful helper-type function, there's a chance that in the future something similar will be added to the actual class you were subclassing; that might be more efficient, but your override will hide it.
Please read the Apple documentation "Adding Behavior to a Cocoa Program". Under the "Inheriting from a Cocoa class" section, see the 2nd paragraph. Apple clearly mentions that subclassing is the primary way of adding application-specific behavior to the framework (please note, FRAMEWORK).
The MVC pattern does not completely disallow the use of subclasses or subtypes. At least I have not seen this recommendation from either Apple or others (if I have missed it, please feel free to point me to the right source of information). If you are subclassing API classes only within your application, go ahead; no one's stopping you, but do take care that it does not break the behavior of the class/API as a whole. Subclassing is a great way of extending a framework API's functionality. We see a lot of subclassing within Apple's iOS framework APIs too.
As a developer, one has to take care that the implementation is well documented and not duplicated accidentally by another developer. It's another ball game altogether if your application is a set of API classes that you plan to distribute as a reusable component.
IMHO, rather than asking around what the best practice is, first read the related documentation thoroughly, implement and test it. Make your own judgement. You know best about what you're up to.
It's easy for others (like me and so many others) to just read stuff from different sources on the Net and throw around terms. Be your own judge, it has worked for me so far.
I really think it depends on what you're trying to do. If the puzzle game you describe in the example really does have a set of unique elements that share common attributes, and there's no provided classes - say, for example, "NSPuzzlePiece" - that fit your needs, then I don't see a problem with subclassing extensively.
In my experience, delegates, data source methods, and informal protocols are much more useful when Apple has provided a class that already does something close to what you want it to do.
For example, say you're building an app that uses a table. There is (and I speak here of the iPhone SDK, since that's where I have experience) a class UITableView that does all the little niceties of creating a table for interaction with the user, and it's much more efficient to define a data source for an instance of UITableView than it is to completely subclass UITableView and redefine or extend its methods to customize its behavior.
Similar concepts go for delegates and protocols. If you can fit your ideas into Apple's classes, then it's usually easier (and will work more smoothly) to do so and use data source, delegates, and protocols than it is to create your own subclasses. It helps you avoid extra work and wasting time, and is usually less error-prone. Apple's classes have taken care of the business of making functions efficient and debugging; the more you can work with them, the fewer mistakes your program will have in the long run.
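A rough modern sketch of that approach in Swift (UIKit API; NamesDataSource is an invented name, and the "Cell" identifier is assumed to be registered elsewhere):

import UIKit

// Supplying a data source instead of subclassing UITableView.
class NamesDataSource: NSObject, UITableViewDataSource {
    let names = ["Ada", "Grace", "Edsger"]

    func tableView(_ tableView: UITableView,
                   numberOfRowsInSection section: Int) -> Int {
        names.count
    }

    func tableView(_ tableView: UITableView,
                   cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "Cell",
                                                 for: indexPath)
        cell.textLabel?.text = names[indexPath.row]
        return cell
    }
}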
My impression of ADC's emphasis 'against' subclassing has more to do with the legacy of how the operating system has evolved. Back in the day (Mac Classic, a.k.a. OS 9), when C++ was the primary interface to most of the Mac Toolbox, subclassing was the de facto standard for a programmer to modify the behaviour of commonplace OS features (and this was indeed sometimes a pain in the neck; it meant one had to be very careful that any and all modifications behaved predictably and didn't break any standard behaviour).
That being said, MY IMPRESSION of ADC's emphasis against subclassing is not putting forth a case for designing an application's class hierarchy without inheritance, BUT INSTEAD pointing out that in the new way of doing things (i.e. OS X) there are in most cases more appropriate means of customizing standard behavior without needing to subclass.
So, by all means, design your puzzle program's architecture as robustly as you can, leveraging inheritance as you see fit!
looking forward to seeing your cool new puzzle application!
Apple indeed appears to passively discourage subclassing with Objective-C.
It is an axiom of OOP design to favor composition over inheritance.
This is not yet another question about the difference between abstract classes and interfaces, so please think twice before voting to close it.
I am aware that interfaces are essential in those OOP languages which don't support multiple inheritance, such as C# and Java. But what about those with multiple inheritance? Would the concept of an interface (as a specific language feature) be redundant in a language with multiple inheritance? I guess that the OOP "contract" between classes can be established using abstract classes.
Or, to put it a bit more explicitly, are interfaces in C# and Java just a consequence of the fact that they do not support multiple inheritance?
Not at all. Interfaces define contracts without specifying implementations.
So they are needed even if multiple inheritance is present - inheritance is about implementation.
Technically, you can use an abstract class under multiple inheritance to simulate an interface. But one is then inclined to write some implementation there, which can create big messes.
Depends on the test for redundancy.
If the test is "can this task be achieved without the language feature", then classes themselves are redundant, because there are Turing-complete languages without classes. Or, from an engineering base, anything beyond machine code is redundant.
Realistically, the test is a more subtle combination of syntax and semantics. A thing is redundant if it doesn't improve either the syntax or the semantics of a language, for a reasonable number of uses.
In languages that make the distinction, supporting an interface declares that a class knows how to converse in a certain manner. Inheriting from another class imports (and, probably, extends or modifies) the functionality of another class.
Since the two tasks are not logically equivalent, I maintain that interfaces are not redundant. Distinguishing between the two improves the semantics for a large number of programs because it can more specifically indicate programmer intent.
... The lack of multiple inheritance forced us to add the concept of interfaces...
Krzysztof Cwalina, in The C# Programming Language (4th Ed.) (p. 56)
So yes, I believe interfaces are redundant given multiple inheritance. You could use pure abstract base classes in a language supporting multiple inheritance or mix-ins.
That said, I'm quite happy with single inheritance most of the time. Eric Lippert makes the point earlier in the same volume (p. 10) that the choice of single inheritance "... eliminates in one stroke many of the complicated corner cases..."
There are languages that support multiple inheritance that do not include a parallel concept to the Java interface. Eiffel is one of them. Bertrand Meyer did not see the need for them, since there was the ability to define a deferred class (which is something most folks call an abstract class) with a fleshed out contract.
The lack of multiple inheritance can lead to situations where a programmer needs to create a utility class or the like to prevent writing duplicated code in objects that implement the same interface.
It may be that the presence of the contract was a significant contribution to the absence of a completely implementation free concept of an interface.... Contracts are harder to write without some implementation details to test against.
So, technically interfaces are redundant in a language that supports MI.
But, as others have pointed out... multiple inheritance can be a very tricky thing to use correctly, all the time. I know I couldn't... and I worked for Meyer as he was drafting Object Oriented Software Construction, 2nd edition.
Are interfaces in C# and Java just a consequence of the fact that they do not support multiple inheritance?
Yes, they are. At least in Java. As a simple language, Java's creators wanted a language that most developers could grasp without extensive training. To that end, they worked to make the language as similar to C++ as possible (familiar) without carrying over C++'s unnecessary complexity (simple). Java's designers chose to allow multiple interface inheritance through the use of interfaces, an idea borrowed from Objective C's protocols.
See there for details
And, yes, I believe that, as in C++, interfaces are redundant if you have multiple inheritance. If you have the more powerful feature, why keep the lesser one?
Well, if you go that way, you could say that C, C++, C# and all other high-level languages are redundant because you can code anything you want using assembly. Sure, you don't absolutely need these high-level languages; however, they help... a lot.
All these languages come with various utilities. For some of them, the interface concept is one of these utilities. So yes, in C++ you could avoid using interfaces and stick with abstract classes without implementation.
As a matter of fact, if you want to program Microsoft COM in C, you can, even though C doesn't know the interface concept, because all the .h files define interfaces this way:
#if defined(__cplusplus) && !defined(CINTERFACE)
MIDL_INTERFACE("ABCDE000-0000-0000-0000-000000000000")
IMyInterface : public IUnknown
{
...
}
#else /* C style interface */
typedef struct IMyInterfaceVtbl
{
BEGIN_INTERFACE
HRESULT ( STDMETHODCALLTYPE *SomMethod )(... ...);
END_INTERFACE
} IMyInterfaceVtbl;
interface IMyInterface
{
CONST_VTBL struct IMyInterfaceVtbl *lpVtbl;
};
#endif
Another kind of syntactic sugar...
And it's true to say that in C#, if I didn't have the interface concept, I don't know how I could really code :). In C#, we absolutely need interfaces.
Interfaces are preferable to multiple inheritance since inheritance violates encapsulation according to "Effective Java" Item 16, Favor composition over inheritance.
This is more of a subjective question, so I'm going to preemptively mark it as community wiki.
Basically, I've found that in most of my code, there are many classes, many of which use each other, but few of which are directly related to each other. I look back at my college days, and think of the traditional class Cat : Animal type examples, where you have huge inheritance trees, but I see none of this in my code. My class diagrams look like giant spiderwebs, not like nice pretty trees.
I feel I've done a good job of separating information logically, and recently I've done a good job of isolating dependencies between classes via DI/IoC techniques, but I'm worried I might be missing something. I do tend to clump behavior in interfaces, but I simply don't subclass.
I can easily understand subclassing in terms of the traditional examples such as class Dog : Animal or class Employee : Person, but I simply don't have anything that obvious I'm dealing with. And things are rarely as clear-cut as class Label : Control. But when it comes to actually modeling real entities in my code as a hierarchy, I have no clue where to begin.
So, I guess my questions boil down to this:
Is it ok to simply not subclass or inherit? Should I be concerned at all?
What are some strategies you have to determine objects that could benefit from inheritance?
Is it acceptable to always inherit based on behavior (interfaces) rather than the actual type?
Inheritance should always represent an "is-a" relationship. You should be able to say "A is a B" if A derives from B. If not, prefer composition. It's perfectly fine to not subclass when it is not necessary.
For example, saying that FileOpenDialog "is-a" Window makes sense, but saying that an Engine "is-a" Car is nonsense. In that case, an instance of Engine inside a Car instance is more appropriate (It can be said that Car "is-implemented-in-terms-of" Engine).
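A minimal sketch of that composition alternative (illustrative names):

// Engine "is-a" Car would be nonsense; Car is-implemented-in-terms-of Engine.
struct Engine {
    func start() { print("Engine started") }
}

struct Car {
    private let engine = Engine()  // composition, not inheritance

    func drive() {
        engine.start()
        print("Car moving")
    }
}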
For a good discussion of inheritance, see Part 1 and Part 2 of "Uses and Abuses of Inheritance" on gotw.ca.
As long as you do not miss the clear cut 'is a' relationships, it's ok and in fact, it's best not to inherit, but to use composition.
is-a is the litmus test. if (Is X a Y?) then class X : Y { } else class X { Y myY; } or class Y { X myX; }
Using interfaces, that is, inheriting behavior, is a very neat way to structure the code via adding only the needed behavior and no other. The tricky part is defining those interfaces well.
No technology or pattern should be used for its own sake. You obviously work in a domain where classes tend to not benefit from inheritance, so you shouldn't use inheritance.
You've used DI to keep things neat and clean. You separated the concerns of your classes. Those are all good things. Don't try and force inheritance if you don't really need it.
An interesting follow-up to this question would be: Which programming domains do tend to make good use of inheritance? (UI and db frameworks have already been mentioned and are great examples. Any others?)
I also hate the Dog -> Mammal -> Animal examples, precisely because they do not occur in real life.
I use very little subclassing, because it tightly couples the subclass to the superclass and makes your code really hard to read. Sometimes implementation inheritance is useful (e.g. PostgreSQLDatabaseImpl and MySQLDatabaseImpl extend AbstractSQLDatabase), but most of the time it just makes a mess of things. Most of the time I see subclasses the concept has been misused and either interfaces or a property should be used.
Interfaces, however, are great and you should use those.
Generally, favour composition over inheritance. Inheritance tends to break encapsulation. e.g. If a class depends on a method of a super class and the super class changes the implementation of that method in some release, the subclass may break.
At times, when you are designing a framework, you will have to design classes to be inherited. If you want to use inheritance, you will have to document it and design for it carefully, e.g. by not calling any instance methods (which could be overridden by your subclasses) in the constructor. Also, if it's a genuine 'is-a' relationship, inheritance is useful, but it is more robust if used within a package.
See Effective Java (Items 14 and 15). It gives a great argument for why you should favour composition over inheritance, and it talks about inheritance and encapsulation in general (with Java examples), so it's a good resource even if you are not using Java.
So to answer your 3 questions:
Is it ok to simply not subclass or inherit? Should I be concerned at all?
Ans: Ask yourself: is it truly an "is-a" relationship? Is decoration possible? If so, go for decoration:
// A collection decorator that is-a Collection and forwards to a wrapped instance
public class MyCustomCollection implements java.util.Collection {
    private Collection delegate;
    // decorate the Collection methods with custom code, delegating to "delegate"
}
What are some strategies you have to determine objects that could benefit from inheritance?
Ans: Usually when you are writing a framework, you may want to provide certain interfaces and "base" classes specifically designed for inheritance.
Is it acceptable to always inherit based on behavior (interfaces) rather than the actual type?
Ans: Mostly yes, but you'd be better off if the super class is designed for inheritance and/or under your control. Or else go for composition.
IMHO, you should never do #3, unless you're building an abstract base class specifically for that purpose, and its name makes it clear what its purpose is:
class DataProviderBase {...}
class SqlDataProvider : DataProviderBase {...}
class DB2DataProvider : DataProviderBase {...}
class AccountDataProvider : SqlDataProvider {...}
class OrderDataProvider : SqlDataProvider {...}
class ShippingDataProvider : DB2DataProvider {...}
etc.
Also following this type of model, sometimes if you provide an interface (IDataProvider) it's good to also provide a base class (DataProviderBase) that future consumers can use to conveniently access logic that's common to all/most DataProviders in your application model.
As a general rule, though, I only use inheritance if I have a true "is-a" relationship, or if it will improve the overall design for me to create an "is-a" relationship (provider model, for instance.)
Where you have shared functionality, programming to the interface is more important than inheritance.
Essentially, inheritance is more about relating objects together.
Most of the time we are concerned with what an object can DO, as opposed to what it is.
class Product
class Article
class NewsItem
Are the NewsItem and Article both Content items? Perhaps, and you may find it useful to be able to have a list of content which contains both Article items and NewsItem items.
However, it's probably more likely you'll have them implement similar interfaces. For example, IRssFeedable could be an interface that they both implement. In fact, Product could also implement this interface.
Then they can all be thrown to an RSS Feed easily to provide lists of things on your web page. This is a great example when the interface is important whereas the inheritance model is perhaps less useful.
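A short sketch of that idea, shown here in Swift with the interface spelled RSSFeedable rather than the C#-style IRssFeedable (all names illustrative):

// The shared capability is a protocol, not a common ancestor.
protocol RSSFeedable {
    var feedTitle: String { get }
}

struct Product: RSSFeedable { let feedTitle: String }
struct Article: RSSFeedable { let feedTitle: String }
struct NewsItem: RSSFeedable { let feedTitle: String }

// Unrelated types can still be listed together for the feed.
let feed: [RSSFeedable] = [
    Product(feedTitle: "New widget"),
    Article(feedTitle: "How widgets work"),
    NewsItem(feedTitle: "Widget Co. goes public"),
]
feed.forEach { print($0.feedTitle) }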
Inheritance is all about identifying the nature of Objects
Interfaces are all about identifying what Objects can DO.
My class hierarchies tend to be fairly flat as well, with interfaces and composition providing the necessary coupling. Inheritance seems to pop up mostly when I'm storing collections of things, where the different kinds of things will have data/properties in common. Inheritance often feels more natural to me when there is common data, whereas interfaces are a very natural way to express common behavior.
The answer to each of your 3 questions is "it depends". Ultimately it will all depend on your domain and what your program does with it. A lot of times, I find the design patterns I choose to use actually help with finding points where inheritance works well.
For example, consider a 'transformer' used to massage data into a desired form. If you get 3 data sources as CSV files, and want to put them into three different object models (and maybe persist them into a database), you could create a 'csv transformer' base and then override some methods when you inherit from it in order to handle the different specific objects.
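A minimal sketch of that transformer idea, shown in Swift with invented names:

// A base transformer holds the shared CSV plumbing...
class CSVTransformer {
    func transform(row: [String]) {
        fatalError("Subclasses override transform(row:)")
    }

    func run(csv: String) {
        for line in csv.split(separator: "\n") {
            transform(row: line.split(separator: ",").map(String.init))
        }
    }
}

// ...and each data source overrides only the part that differs.
class CustomerTransformer: CSVTransformer {
    override func transform(row: [String]) {
        print("Customer named \(row[0])")
    }
}

CustomerTransformer().run(csv: "Ada,1815\nGrace,1906")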
'Casting' the development process into the pattern language will help you find objects/methods that behave similarly and help in reducing redundant code (maybe through inheritance, maybe through the use of shared libraries - whichever suits the situation best).
Also, if you keep your layers separate (business, data, presentation, etc.), your class diagram will be simpler, and you can then 'visualize' those objects that ought to be inherited.
I wouldn't get too worried about how your class diagram looks, things are rarely like the classroom...
Rather ask yourself two questions:
Does your code work?
Is it extremely time consuming to maintain? Does a change sometimes require changing the 'same' code in many places?
If the answer to (2) is yes, you might want to look at how you have structured your code to see if there is a more sensible fashion, but always bearing in mind that at the end of the day, you need to be able to answer yes to question (1)... Pretty code that doesn't work is of no use to anybody, and hard to explain to the management.
IMHO, the primary reason to use inheritance is to allow code which was written to operate upon a base-class object to operate upon a derived-class object instead.