I've seen both used interchangeably, but do they really mean the same thing? As I understand it, polymorphism means that you can substitute an instance of a subclass for an instance of its superclass, and late binding means that when you call a method on an instance, the runtime type decides which implementation (subclass or superclass) gets called.
Wikipedia has a very nice article about this:
http://en.wikipedia.org/wiki/Polymorphism_in_object-oriented_programming
Summary: Late binding is a way to implement polymorphism.
Related
Is there any direct relationship between Late Binding and Overriding, and similarly between Early Binding and Overloading?
They (binding/overriding/overloading) can all be termed ways to implement polymorphism, but is there any direct relationship, e.g. is Late Binding a sub- or super-concept of Overriding, or vice versa?
They are orthogonal (independent) concepts.
Overloading, Overriding: Forms of polymorphism
Early binding/Late binding: In the former, the method to call is known at compile time. In the latter, at runtime.
Of course, an implementation of overriding usually implies using late binding, because you will only know the object's real type at runtime. But that's just a special case.
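The interplay between overriding and late binding can be sketched in a few lines (TypeScript here, with made-up Animal/Dog names):

```typescript
class Animal {
  speak(): string {
    return "...";
  }
}

class Dog extends Animal {
  // Overriding: same signature, different body.
  speak(): string {
    return "Woof";
  }
}

// The static type of `a` is Animal, but the method that runs is
// chosen from the object's runtime type: late binding.
const a: Animal = new Dog();
console.log(a.speak()); // "Woof", not "..."
```

Overload resolution, by contrast, is done by the compiler from the static types of the arguments, which is why it pairs naturally with early binding.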
This is a pretty general question, but I was wondering today about delegates. At this point I don't really have a specific rule for when I do or don't use them, aside from obvious cases like passing selections from a picker or table view. For example, if there's a situation where I can pass a reference to an object around and use that to call methods, is there a reason to implement a delegate? In summary, what is the delegate pattern intended for, and when is it better NOT to use it?
Thanks for the quick and comprehensive answers! They were all extremely helpful.
The advantage of the delegate pattern is loose coupling between the delegating object and its delegate. Loose coupling improves a class's reusability in other contexts.
The delegating object doesn't have to know anything about the object it communicates with (aside from the requirement that it implement the delegate protocol) – especially not its class or what methods it has. If you later want to reuse your component in a different context or have it communicate with another object of a different class, all this object has to do is implement the delegate protocol. The delegating object does not have to be changed at all.
There is also a downside to this, of course, and that is that a bit more code is required and the code you write is not as explicit and therefore may be a bit harder to understand. Whether this (generally small) tradeoff is worth it depends on your use case. If the two objects are tightly coupled anyway and the probability of reuse in the future is low, using the delegate pattern might be overkill.
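The loose coupling described above can be sketched roughly like this (TypeScript, with hypothetical Downloader/Logger names standing in for a delegating object and its delegate):

```typescript
// The "protocol": the only thing the delegating object knows.
interface DownloaderDelegate {
  downloadDidFinish(data: string): void;
}

class Downloader {
  // The downloader knows the protocol, not any concrete class.
  delegate?: DownloaderDelegate;

  start(): void {
    const data = "payload"; // stand-in for real work
    this.delegate?.downloadDidFinish(data);
  }
}

// Any object can be the delegate by implementing the protocol.
class Logger implements DownloaderDelegate {
  received: string[] = [];
  downloadDidFinish(data: string): void {
    this.received.push(data);
  }
}

const d = new Downloader();
const log = new Logger();
d.delegate = log;
d.start();
console.log(log.received); // ["payload"]
```

Reusing Downloader with a completely different class only requires that new class to implement DownloaderDelegate; Downloader itself never changes.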
See this discussion
A delegate allows one object to send messages to another object when an event happens.
Pros
Very strict syntax. All events to be heard are clearly defined in the delegate protocol.
Compile time Warnings / Errors if a method is not implemented as it should be by a delegate.
Protocol defined within the scope of the controller only.
Very traceable, and easy to identify flow of control within an application.
Ability to have multiple protocols defined in one controller, each with different delegates.
No third party object required to maintain / monitor the communication process.
Ability to receive a returned value from a called protocol method. This means that a delegate can help provide information back to a controller.
Cons
Many lines of code required to define: 1. the protocol definition, 2. the delegate property in the controller, and 3. the implementation of the delegate method definitions within the delegate itself.
Need to be careful to correctly set delegates to nil on object deallocation; failure to do so can cause crashes from calling methods on deallocated objects.
Although possible, it can be difficult, and the pattern does not really lend itself to having multiple delegates of the same protocol in a controller (telling multiple objects about the same event).
The "use case" for delegation is pretty much the same as for inheritance, namely extending a class behavior in a polymorphic way.
This is how Wikipedia defines delegation:
In software engineering, the delegation pattern is a design pattern in object-oriented programming where an object, instead of performing one of its stated tasks, delegates that task to an associated helper object. There is an Inversion of Responsibility in which a helper object, known as a delegate, is given the responsibility to execute a task for the delegator. The delegation pattern is one of the fundamental abstraction patterns that underlie other software patterns such as composition (also referred to as aggregation), mixins and aspects.
There are, obviously, many differences between delegation and inheritance, but the biggest one is, IMO, that inheritance is a fixed (aka, compile-time) relationship between two classes, while delegation can be defined at run-time (in languages that support this). On the other hand, inheritance offers better support for polymorphism.
Delegation is a huge topic (as inheritance is), and you can read a lot about it. In the end, deciding between delegation and inheritance comes down to deciding whether you want an "is-a" or a "has-a" relationship, so it is not easy to list guidelines for the choice.
For me, basically, the decision to create a delegate comes from the observation that:
my code presents a set of homogeneous behaviors (homogeneous here meaning that they can be recognized as having a common "nature");
those behaviors might be "customized" for particular cases (as in, replaced by alternative behaviors).
This is my personal view and a description of the way I get to identify "delegation" patterns. It has probably much to do with the fact that my programming discipline is strongly informed by the principle of refactoring.
Really, IMO, delegation is a way to define "customization" points for your class. As an example, if you have some kind of abstract workflow where at each step you take some action depending on certain conditions, and those concrete actions could be replaced by actions of another kind, then I see a chance for reuse through delegation.
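That "customization point" idea can be sketched roughly as follows (TypeScript; Workflow and StepDelegate are invented names):

```typescript
// The customization point: one step of the workflow is delegated.
interface StepDelegate {
  transform(input: number): number;
}

class Workflow {
  constructor(private delegate: StepDelegate) {}

  run(values: number[]): number[] {
    // Fixed skeleton, variable step.
    return values.map((v) => this.delegate.transform(v));
  }
}

// Alternative behaviors, swappable at runtime without subclassing.
const doubler: StepDelegate = { transform: (n) => n * 2 };
const negator: StepDelegate = { transform: (n) => -n };

console.log(new Workflow(doubler).run([1, 2, 3])); // [ 2, 4, 6 ]
console.log(new Workflow(negator).run([1, 2, 3])); // [ -1, -2, -3 ]
```

Note that, unlike an inheritance hierarchy fixed at compile time, the Workflow/delegate relationship is established (and can be changed) at run time.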
Hope this helps.
I'm trying to solve a design issue using inheritance based polymorphism and dynamic binding. I have an abstract superclass and two subclasses. The superclass contains common behaviour. SubClassA and SubClassB define some different methods:
SubClassA defines a method performTransform(), but SubClassB does not.
So the following example
1 var v:SuperClass;
2 var b:SubClassB = new SubClassB();
3 v = b;
4 v.performTransform();
would cause a compile error on line 4 as performTransform() is not defined in the superclass. We can get it to compile by casting...
(v as SubClassA).performTransform();
however, this will cause a runtime exception to be thrown as v is actually an instance of SubClassB, which also does not define performTransform()
So we can get around that by testing the type of an object before casting it:
if (v is SubClassA)
{
    (v as SubClassA).performTransform();
}
That will ensure that we only call performTransform() on v's that are instances of SubClassA. That's a pretty inelegant solution to my eyes, but at least it's safe. I have used interface-based polymorphism (interface meaning a type that can't be instantiated and that defines the API of the classes implementing it) in the past, but that also feels clunky. For the above case, if SubClassA and SubClassB implemented an ISuperClass that defined performTransform(), then they would both have to implement performTransform(). If SubClassB had no real need for performTransform(), you would have to implement an empty function.
There must be a design pattern out there that addresses the issue.
My immediate comment is that your object modelling is wrong. Why treat SubClassA as a SuperClass (an is-a relationship) when, I would suggest, it's not?
You could implement a dummy performTransform() that does absolutely nothing in its base instance, and is overridden in SubClassA. But I'm still concerned that on one hand you're treating all these objects (SubClassA, SubClassB) as the same thing, and then wanting to treat them differently depending on their real implementation, rather than the interface they present.
Assuming you are using a strongly-typed language, which your question seems to indicate...
There is no design pattern to work around this, because this is the intended behavior.
In your definition, performTransform belongs only to SubClassA. Thus, to be able to invoke performTransform on an object, the object must be of type SubClassA (or a subtype of SubClassA).
Invoking performTransform on a SuperClass does not make sense because not every instance of SuperClass defines this method.
Downcasting from a SuperClass to a SubClassA should certainly throw an error if the instance is not a SubClassA - this should be obvious.
So, you must either change your definitions such that performTransform belongs to SuperClass (in which case, as you said, every instance of type SuperClass would need to have some implementation for the method, even an empty one) or you must make sure that you are only invoking methods on types that define them.
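The second option, only invoking methods on types that define them, can be made reasonably clean in languages with runtime type tests. A TypeScript sketch using the question's class names (bodies invented for illustration):

```typescript
class SuperClass {
  describe(): string {
    return "super";
  }
}

class SubClassA extends SuperClass {
  performTransform(): string {
    return "transformed";
  }
}

class SubClassB extends SuperClass {}

const v: SuperClass = new SubClassB();

// instanceof both checks the runtime type and narrows the static
// type, so performTransform() compiles inside the branch.
if (v instanceof SubClassA) {
  console.log(v.performTransform());
} else {
  console.log("no transform available"); // taken here: v is a SubClassB
}
```

This is the same check the question calls inelegant; the type system merely makes it safe rather than pretty.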
I'm not so sure it requires a pattern to solve, but instead just a small redesign. If it makes sense for anything to call performTransform(), it should be in the superclass as a virtual method and overridden in the subclasses.
So the superclass defines the flow from an abstract viewpoint and the subclasses implement them appropriately. In your case, the simplest options are to either just leave performTransform empty in the superclass or implement it as an empty method in the subclass that doesn't require it (when you mix this approach with a short comment, you get a more maintainable system IMO).
The closest pattern I can think of for this is the Null Object pattern where this performTransform method is just a dummy function to preserve compatibility but perform no actual task.
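A minimal sketch of that Null Object idea (TypeScript, with illustrative Transformer names):

```typescript
interface Transformer {
  performTransform(x: number): number;
}

class ScaleTransformer implements Transformer {
  performTransform(x: number): number {
    return x * 10;
  }
}

// The "null object": same interface, deliberately does nothing.
class NoopTransformer implements Transformer {
  performTransform(x: number): number {
    return x;
  }
}

// Callers never branch on the concrete type.
function apply(t: Transformer, x: number): number {
  return t.performTransform(x);
}

console.log(apply(new ScaleTransformer(), 3)); // 30
console.log(apply(new NoopTransformer(), 3)); // 3
```

The no-op implementation keeps the calling code uniform, which is exactly the type-checking boilerplate the question was trying to avoid.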
Just because you say your bicycle is a car doesn't mean there's a place to put gas in it. The whole point of polymorphism is to let you think of things as the super class - these are all bank accounts, these are all shapes, to use the classic examples - and not get caught up in what they really are. Sometimes the subclasses add capability. In many cases that capability is used in the specific implementations in each subclass. So to use your names, some method Adjust() that is in the signature of SuperClass is implemented (differently) in SubClassA and SubClassB. The SubClassA version calls its own performTransform as part of the process and we all live happily ever after.

The minute some code needs to decide whether to call performTransform or not, you're not just thinking of it as a SuperClass any more. That's not necessarily something that needs to be solved; it's just what is.
It would be better to have the call to performTransform() in a method that only takes type SubClassA as a parameter - at least you wouldn't have to do type checking then.
That said, if you're having this problem at all, it may suggest that inheritance is not the best solution; composition may be a better way to approach the problem.
I've been trying to implement a simple component-based game object architecture using Objective-C, much along the lines of the article 'Evolve Your Hierarchy' by Mick West. To this end, I've successfully used some ideas outlined in the article 'Objective-C Message Forwarding' by Mike Ash, that is to say using the -(id)forwardingTargetForSelector: method.
The basic setup is I have a container GameObject class, that contains three instances of component classes as instance variables: GCPositioning, GCRigidBody, and GCRendering. The -(id)forwardingTargetForSelector: method returns whichever component will respond to the relevant selector, determined using the -(BOOL)respondsToSelector: method.
All this, in a way, works like a charm: I can call a method on the GameObject instance whose implementation is found in one of the components, and it works. Of course, the problem is that the compiler gives 'may not respond to ...' warnings for each call. Now, my question is: how do I avoid this, specifically given that each instance of GameObject will have a different set of components? Maybe a way to register methods with the container objects on a per-object basis? Such as, could I create some kind of -(void)registerMethodWithGameObject: method, and how would I do that?
Now, it may or may not be obvious that I'm fairly new to Cocoa and Objective-C, and just horsing around, basically, and this whole thing may be very alien here. Of course, though I would very much like to know of a solution to my specific issue, anyone who would care to explain a more elegant way of doing this would additionally be very welcome.
Much appreciated, -Bastiaan
I don't think that sending the container object all of its components' messages is what Mick West was suggesting--that doesn't help to remove the idea of a "monolithic game entity object".
The eventual goal is to have the components communicate directly with one another, with no container object at all. Until then, the container object acts as glue between old code that expects a single object for each game entity and the new component-to-component system.
That is, you shouldn't need to use message forwarding at all in the final product, so ignoring the warnings, or declaring variables as id for now to quiet them, isn't all that ugly. (The plan as laid out by the article is to eventually remove the very code that is causing your warnings!)
A simple way to make those warnings disappear is to declare the instance variables as type id.
That way the compiler assumes you know what you're doing regarding the type of the object: either the object will respond to whatever messages you send it, or if it doesn't, you don't care.
Override your GameObject's -respondsToSelector: method. Your implementation should in turn send a respondsToSelector: message to each of its component instances, and return YES if any one of them returns YES.
You can use the id type, or you could invoke the methods using the performSelector: family of methods, or create an NSInvocation if the arguments are complex. This is all just a way of getting around compiler warnings, however. If your objects respond to several methods, then declaring a protocol might help, although the same caveat applies.
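For what it's worth, a rough analog of this forwarding setup in a different dynamic setting: JavaScript/TypeScript can route unknown method calls through a Proxy, which plays a role similar to -forwardingTargetForSelector:. All names below are illustrative, and typing the result as any mirrors the "declare it as id" trade-off (you trade compile-time checking for flexibility):

```typescript
class Positioning {
  move(): string {
    return "moved";
  }
}

class Rendering {
  draw(): string {
    return "drawn";
  }
}

// Container "game object" that forwards any method call to the
// first component defining it.
function makeGameObject(components: object[]): any {
  return new Proxy({}, {
    get(_target, prop) {
      for (const c of components) {
        const candidate = (c as any)[prop];
        // The analog of respondsToSelector:
        if (typeof candidate === "function") {
          return candidate.bind(c);
        }
      }
      return undefined;
    },
  });
}

const obj = makeGameObject([new Positioning(), new Rendering()]);
console.log(obj.move()); // "moved"
console.log(obj.draw()); // "drawn"
```

As with the id approach, nothing checks at compile time that a given game object actually has a component responding to the call.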
Another option, if I understand the problem correctly, is to implement a protocol. This is like an interface in Java, and variables can be declared like this:
id<SomeProtocol> anObjectRef;
That way the compiler understands that the object referred to by anObjectRef conforms to the protocol.
There are also methods that can tell you whether a particular object conforms to a specific protocol before you cast or assign it.
In the Factory Method design pattern of GoF, I can see that no parameter is accepted by the FactoryMethod() method. My understanding is that the FactoryMethod should be passed a parameter that is used in a switch statement, and based upon its value, different class objects are instantiated and returned to the caller. My questions, in summary, are as follows:
1) Should I implement the factory method pattern exactly as defined by GoF? (I am also referring to the UML diagram given at www.dofactory.com for the Factory Method pattern.)
2) Why is the factory method pattern of GoF not shown accepting a parameter?
1) Should I implement factory method pattern, exactly in the same way defined by GoF. I am also referring to UML diagram given at www.dofactory.com for Factory Method pattern).
Do whatever makes sense to you. Patterns are just guidelines, not categorical laws of nature.
More than that, the patterns in GoF are not comprehensive. You'll find yourself discovering and implementing patterns which never appear in the book, nor even have a name. And that's a good thing.
2) Why is the factory method pattern of GoF not shown accepting parameter?
The factory method pattern is simply a special application of subclassing. In particular, you have an abstract class with an abstract method, and any number of derived classes that implement that method. Usually, this implemented method returns some other type of object; this is what makes it a creational pattern.
So, using the sample on DoFactory.com, the classes return a canned set of objects. Nothing in principle prevents users from passing in some parameter to the factory method.
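GoF themselves mention this variant as a "parameterized factory method". A sketch (TypeScript, with invented Shape names):

```typescript
abstract class Shape {
  abstract area(): number;
}

class Circle extends Shape {
  area(): number {
    return Math.PI; // unit circle, for brevity
  }
}

class Square extends Shape {
  area(): number {
    return 1; // unit square
  }
}

// Parameterized factory method: the discriminator selects the
// concrete class, but callers still only see Shape.
function createShape(kind: "circle" | "square"): Shape {
  switch (kind) {
    case "circle":
      return new Circle();
    case "square":
      return new Square();
  }
}

console.log(createShape("square").area()); // 1
```

The union type on the parameter means the switch is exhaustive, so the compiler can verify every case returns a Shape.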
My understanding is that the FactoryMethod should be passed a parameter that will be used in the switch case and then based upon the value in the switch case, different class objects are instantiated and returned back to the caller.
You can certainly implement it that way if that makes sense to you.
2) Why is the factory method pattern of GoF not shown accepting parameter?
In object-oriented programming, your methods should do one clearly-defined thing. Passing in some kind of value to switch on is called "alternate cohesion": instead, the method should be split into multiple methods, one for each case. For example:
createVeggie(veggieType)
would be split into
createBroccoli()
createCelery()
createCollardGreens()
This makes your class interface cleaner, promotes easier modification, and improves compile-time checking.
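Sketched in TypeScript (illustrative names):

```typescript
class Veggie {
  constructor(public name: string) {}
}

class VeggieFactory {
  // One clearly-named method per case, instead of a switch on a
  // type code passed into a single createVeggie(veggieType).
  createBroccoli(): Veggie {
    return new Veggie("broccoli");
  }
  createCelery(): Veggie {
    return new Veggie("celery");
  }
  createCollardGreens(): Veggie {
    return new Veggie("collard greens");
  }
}

const f = new VeggieFactory();
console.log(f.createCelery().name); // "celery"
```

A typo in a type code would only fail at runtime; a typo in one of these method names fails at compile time, which is the checking benefit mentioned above.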
Also, be careful that you are not violating Factory Method's intent: it is actually to avoid having the calling code know which concrete object is created. As Juliet says, you may implement something like this, but then it's no longer the GoF Factory Method and does not have the same advantages.