Factory Method Pattern vs inheritance (OOP)

What is the point of using this pattern?
Considering this article, it seems senseless: https://refactoring.guru/design-patterns/factory-method
Why not use simple inheritance?
Here's a little comparison:
https://codesandbox.io/s/factory-method-pattern-vs-good-old-inheritance-opovv?file=/src/inheritance.ts

There is no one versus the other; Factory Method uses polymorphism, via inheritance, to defer object creation to subclasses.
Don't confuse patterns as being greater than their constituents. Patterns are simply names for common use cases of software design, because anything important deserves a name.
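Here is a minimal TypeScript sketch of what the name refers to, loosely following the refactoring.guru example (the type names are illustrative):

interface Transport {
  deliver(): string;
}

class Truck implements Transport {
  deliver() { return "delivering by land"; }
}

class Ship implements Transport {
  deliver() { return "delivering by sea"; }
}

// Creator: the factory method createTransport() is deferred to subclasses.
abstract class Logistics {
  abstract createTransport(): Transport;

  // Shared logic written against the Transport interface only.
  planDelivery(): string {
    return this.createTransport().deliver();
  }
}

class RoadLogistics extends Logistics {
  createTransport(): Transport { return new Truck(); }
}

class SeaLogistics extends Logistics {
  createTransport(): Transport { return new Ship(); }
}

The inheritance is still there; "Factory Method" is just the name for using it this way, so that planDelivery() never has to mention a concrete class.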

Related

What is a Factory in OOP

My understanding of "factory-related" design patterns and their OOP-implementations has always been pretty simple.
A "Factory method" is a method inside a class that has an interface (or an abstract class) as a return type and constructs objects implementing this interface based on some internal logic.
A "Factory" is a class that only contains factory methods
An "Abstract factory" is an interface (or an abstract class) that only contains factory methods
But I recently stumbled upon Wikipedia articles on the subject (Factory, Abstract factory) that made me somewhat confused, especially about what a "Factory" is in OOP.
Here are several quotes:
A subroutine that returns a "new" object may be referred to as a "factory", as in factory method or factory function.
Factories are used in various design patterns
The "Abstract factory pattern" is a method to build collections of factories.
A factory is the location of a concrete class in the code at which objects are constructed
which raise some questions:
(1)&(2) Does this mean that a factory is not a class or an object, but a piece of logic?
(2) Is "Factory" not a pattern itself?
(3) What does "collection" mean here? Is it just a way of saying "you can have several factories that implement the same interface (which is an abstract factory)"?
(4) What???
Can anyone clarify what this means? Is my initial understanding of factories incorrect?
Look at this wiki which says:
In object-oriented programming (OOP), a factory is an object for creating other objects; formally, a factory is a function or method that returns objects of a varying prototype or class from some method call, which is assumed to be "new". More broadly, a subroutine that returns a "new" object may be referred to as a "factory", as in factory method or factory function. This is a basic concept in OOP, and forms the basis for a number of related software design patterns.
So to answer your questions specifically:
(1)&(2) Does this mean that a factory is not a class or an object, but a piece of logic?
No, it means that you can create other objects using an object (the factory).
(2) Is "Factory" not a pattern itself?
There are different design patterns, of which the factory pattern is one. When you create objects using a factory, that pattern of creating other objects is the "Factory pattern".
I think you generally have it right. But, people don't like general, so it's the specifics where things go pear-shaped.
There are a few things to consider:
Wikipedia is sometimes woefully bad on technical subjects because it is trying to get to a single answer where different domains or programming languages may use the terms slightly differently. Not only that, many people are not skilled at technical writing and forget that Wikipedia is for the masses, not a graduate-level computer science theory class. As such, they tend to overly focus on minutiae or use language more complicated than necessary. So, don't worry about Wikipedia. And, although there is an edit history, no real names tend to be attached to these and no one really cares who wrote which words, so no one is that motivated to do it well. Having expressed that quite cynical opinion of the whole endeavor, I will say that reading the Talk pages is very interesting, in a Cunningham's Law sort of way.
(2) The name "Factory" is a pattern, but remember that patterns are general ideas, not implementations. Different languages may employ the same idea in different ways, and those different ways may not agree. A particular language or community starts to use the term in the way most meaningful to them, however, despite what anyone else thinks. That idea may show up in bare, procedural logic; classes; objects; or whatever the tool provides.
Design Patterns (the book(s)) are generally describing shortcomings in tools. They also aren't really design patterns in the way the authors think they are. Mark Jason Dominus has a wonderful talk on this, "Design Patterns" Aren't. His basic idea is that the Gang of Four misunderstand Christopher Alexander in that they (accidentally) prescribe solutions rather than promote the idea of a locally-relevant architectural language. So, the Gang of Four patterns become reified in languages almost exactly as the single possible solution described. Readers then force those patterns as a globally-relevant language completely disconnected from what you personally are trying to build, then argue about what it all really means out of any sort of context, using extremely ill-suited examples that don't matter to what anyone is trying to build. There's no reason your recommendation engine and someone's first-person shooter should have the same architectural language other than your tool forcing it on both of you.
FWIW, paying attention to Mark Jason Dominus is a very good professional development move. His Higher-Order Perl is one of the best programming language books I've read, and certainly better than anything I've written. He knows a lot of different languages (and languages that are very different) and thinks very deeply about things.
In (3), the term "collection" is unfortunate because we tend to use that to mean a set of things that co-exist at a particular time in the same container (box, book, whatever). I think they are trying to suggest that the abstract factory is a template for future factories that cannot be presently enumerated, which is a fancy way of saying that we can use it to build factories we don't even know about yet.
In (4), the term "location" is unfortunate. An abstract factory is a way of producing concrete factories, which is a way of producing objects.
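A hedged TypeScript sketch of that chain (all names invented for illustration): the abstract factory is a template for concrete factories, and each concrete factory produces objects.

interface Button {
  render(): string;
}

class WinButton implements Button {
  render() { return "[Windows button]"; }
}

class MacButton implements Button {
  render() { return "(Mac button)"; }
}

// The abstract factory: a template for factories we may not know about yet.
interface ButtonFactory {
  createButton(): Button;
}

// Concrete factories "produced from" the abstract one.
class WinFactory implements ButtonFactory {
  createButton() { return new WinButton(); }
}

class MacFactory implements ButtonFactory {
  createButton() { return new MacButton(); }
}

// Client code depends only on the abstract factory.
function draw(factory: ButtonFactory): string {
  return factory.createButton().render();
}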

Why do we need protocols in Objective-C?

The necessity for protocols is to abstract the methods of classes which are not hierarchically related.
Couldn't similar things also be done with the help of a class (interface) which encompasses all those methods, and subclassing it? (This is not really possible due to the multiple inheritance problem, since a class already has to be derived from NSObject; ignore the NSProxy case.)
What special things can protocols do that a class cannot?
Are protocols trying to solve only the multiple inheritance problem?
Protocols' main advantage is that they describe what an object should be able to do, without enforcing subclassing. In languages that don't have multiple inheritance, such a mechanism is needed if you want other programmers to be able to use your classes (see delegation).
Java, for instance, has something similar, called interfaces.
This is a huge advantage, as it makes it very easy to build dynamic systems: I can allow other developers to enhance my classes via a clearly defined protocol.
A practical example:
I am just designing a REST API and providing an Objective-C client library for it.
As my API requires information about the user, I add a protocol:
@protocol VSAPIClientUser <NSObject>
- (NSString *)lastName;
- (NSString *)firstName;
- (NSString *)uuid;
@end
Anywhere I need this user information, I take a basic id object that must conform to this protocol:
-(void)addUserWithAttributes:(id<VSAPIClientUser>)user;
You can read this line as: "I don't care what kind of object you provide here, as long as it knows about lastName, firstName and uuid." So I have no idea what the rest of that object looks like, and I don't care.
As the library author I can use this safely:
NSDictionary *userAttributes = @{@"last_name": [user lastName],
                                 @"first_name": [user firstName],
                                 @"uuid": [user uuid]};
BTW: I wouldn't call the absence of multiple inheritance a problem. It is just another design.
“[…] If I revisited that decision today, I might even go so far as to remove single inheritance as well. Inheritance just isn’t all that important. Encapsulation is OOP’s lasting contribution.” — Brad Cox was asked, why Objective-C doesn’t have multiple inheritance. (Masterminds of Programming: Conversations with the Creators of Major Programming Languages, p. 259)
As an alternative view....
Object-oriented programming's most basic value comes from being able to model real-world relationships directly as opposed to translating them into abstract and vaguely-equivalent computer-world constructs. Wherever a language requires you to think about the implementation of a solution in different terms than those you can use to describe your problem, it is flawed as an OOP tool. (Note that I didn't say 'useless'. :) )
Real-world objects have various roles that depend on context. Those roles can have state. Therefore, I agree that lack of multiple-inheritance is an impediment to ease of modelling. Objective-C protocols, Java interfaces, and the claim that you should prefer composition to inheritance are all denials of a fundamental part of the OOP advantage.
One of the many uses of C++ abstract classes is to define interfaces (i.e. to specify reusable contracts). There are, however, other programming languages, such as Objective-C, that have a separate concept for interfaces in this sense; in Objective-C, it is called protocols.
A wide use of such a construct does require a way of attaching more than one contract to an object; and if such interfaces are allowed to inherit from each other, this has to be multiple inheritance to be useful.
However, this is not the same thing as multiple inheritance between classes.
Protocols are not trying to solve the multiple inheritance problem. They are trying to separate contract specification from object (data+code) specification. They can actually do much less than a class (if you ignore the multiple inheritance aspect) and that's why they exist as a separate concept.
Implementing a protocol is generally a much less restrictive (safer) proposition to consider than inheriting from a class.
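The same separation shows up outside Objective-C. A small TypeScript sketch of the idea (not the answerer's code; the names are invented): the contract carries no state and no code to inherit, so adopting it is a small commitment.

// Contract only: no state, no code to inherit.
interface User {
  lastName(): string;
  firstName(): string;
  uuid(): string;
}

// Any class can adopt the contract without joining a class hierarchy.
class Employee implements User {
  constructor(private first: string, private last: string, private id: string) {}
  firstName() { return this.first; }
  lastName() { return this.last; }
  uuid() { return this.id; }
}

// The consumer needs only the contract.
function userAttributes(user: User): Record<string, string> {
  return {
    last_name: user.lastName(),
    first_name: user.firstName(),
    uuid: user.uuid(),
  };
}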

Why avoid subtyping?

I have seen many people in the Scala community advise on avoiding subtyping "like a plague". What are the various reasons against the use of subtyping? What are the alternatives?
Types determine the granularity of composition, i.e. of extensibility.
For example, an interface such as Comparable combines (and thus conflates) equality and relational operators, so it is impossible to compose on just the equality interface or just the relational interface.
In general, the substitution principle of inheritance is undecidable. Russell's paradox implies that any set that is extensible (i.e. does not enumerate the type of every possible member or subtype) can include itself, i.e. is a subtype of itself. But in order to identify (decide) what is a subtype and what is not, the invariants of the type must be completely enumerated, so it is no longer extensible. This is the paradox by which subtyped extensibility makes inheritance undecidable. This paradox must exist; otherwise knowledge would be static and knowledge formation wouldn't exist.
Function composition is the surjective substitution of subtyping, because the input of a function can be substituted for its output, i.e. anywhere the output type is expected, the input type can be substituted by wrapping it in the function call. But composition does not make the bijective contract of subtyping: accessing the interface of the output of a function does not access the input instance of the function.
Thus composition does not have to maintain the future (i.e. unbounded) invariants and so can be both extensible and decidable. Subtyping can be much more powerful where it is provably decidable, because it maintains this bijective contract; e.g. a function that sorts an immutable list of the supertype can operate on an immutable list of the subtype.
So the conclusion is to enumerate all the invariants of each type (i.e. of its interfaces), make these types orthogonal (maximize the granularity of composition), and then use function composition to accomplish extension where those invariants would not be orthogonal. A subtype is appropriate only where it provably models the invariants of the supertype interface, and the additional interface(s) of the subtype are provably orthogonal to the invariants of the supertype interface. Thus the invariants of interfaces should be orthogonal.
Category theory provides rules for the model of the invariants of each subtype, i.e. of Functor, Applicative, and Monad, which preserve function composition on lifted types, i.e. see the aforementioned example of the power of subtyping for lists.
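For the list example in concrete terms, here is a TypeScript sketch (readonly arrays are covariant in TypeScript, so this type-checks; the names are illustrative):

interface Named {
  name: string;
}

interface Customer extends Named {
  orders: number;
}

// Sorts an immutable list of the supertype...
function sortByName(xs: readonly Named[]): Named[] {
  return [...xs].sort((a, b) => a.name.localeCompare(b.name));
}

// ...and works just as well on an immutable list of the subtype.
const customers: readonly Customer[] = [
  { name: "Bo", orders: 2 },
  { name: "Al", orders: 5 },
];
const ordered = sortByName(customers);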
One reason is that equals() is very hard to get right when sub-typing is involved. See How to Write an Equality Method in Java. Specifically "Pitfall #4: Failing to define equals as an equivalence relation". In essence: to get equality right under sub-typing, you need a double dispatch.
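A TypeScript sketch of the pitfall, following the shape of the Point example in the linked article (the classes are hypothetical):

class Point {
  constructor(readonly x: number, readonly y: number) {}
  equals(other: unknown): boolean {
    return other instanceof Point && this.x === other.x && this.y === other.y;
  }
}

class ColoredPoint extends Point {
  constructor(x: number, y: number, readonly color: string) {
    super(x, y);
  }
  equals(other: unknown): boolean {
    return (
      other instanceof ColoredPoint &&
      super.equals(other) &&
      this.color === other.color
    );
  }
}

// Symmetry is broken, so this is not an equivalence relation:
const p = new Point(1, 2);
const cp = new ColoredPoint(1, 2, "red");
console.log(p.equals(cp)); // true
console.log(cp.equals(p)); // false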
I think the general context is for the language to be as "pure" as possible (i.e. using pure functions as much as possible), and comes from the comparison with Haskell.
From "Ruminations of a Programmer"
Scala, being a hybrid OO-FP language, has to take care of issues like subtyping (which Haskell does not have).
As mentioned in this PSE answer:
no way to restrict a subtype so that it can't do more than the type it inherits from.
For example, if the base class is immutable and defines a pure method foo(...), derived classes must not be mutable or override foo() with a function that is not pure
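A TypeScript sketch of that point (hypothetical names): the override type-checks, but nothing in the type system stops it from losing the base class's purity.

class Celsius {
  constructor(readonly degrees: number) {}
  // Pure: same input, same output, no side effects.
  plus(delta: number): Celsius {
    return new Celsius(this.degrees + delta);
  }
}

class LoggedCelsius extends Celsius {
  // Still compiles, but the override is no longer pure: it performs I/O,
  // and it could just as easily mutate shared state instead.
  plus(delta: number): Celsius {
    console.log(`adding ${delta}`);
    return new Celsius(this.degrees + delta);
  }
}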
But the actual recommendation would be to use the best solution adapted to the program you are currently developing.
Focusing on subtyping, and ignoring the issues related to classes, inheritance, OOP, etc.: the idea is that subtyping represents an is-a relation between types. For example, types A and B have different operations, but if A is-a B then we can use any of B's operations on an A.
OTOH, using another traditional relation, if C has-a B then we can reuse any of B's operations on a C. Usually languages let you write one of these with nicer syntax: a.opOnB for subtyping, instead of c.b.opOnB as it would be in the case of composition.
The problem is that in many cases there's more than one way to relate two types. For example, Real can be embedded in Complex by assuming 0 for the imaginary part, but Complex can be mapped to Real by ignoring the imaginary part, so each can be seen as a subtype of the other, and subtyping forces one relation to be viewed as preferred. Also, there are more possible relations (e.g. view Complex as a Real using the theta component of the polar representation).
In formal terminology we usually call such relations between types morphisms, and there are special kinds of morphisms for relations with different properties (e.g. isomorphism, homomorphism).
In a language with subtyping there's usually much more sugar on is-a relations, and given many possible embeddings we tend to see unnecessary friction whenever we're using the unpreferred relation. If we bring inheritance, classes and OOP into the mix, the problem becomes much more visible and messy.
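A TypeScript sketch of the Real/Complex point (illustrative only): plain functions can model all the embeddings without privileging any one of them as "the" subtype relation.

interface Complex {
  re: number;
  im: number;
}

// Embed a real as a complex with zero imaginary part...
const fromReal = (r: number): Complex => ({ re: r, im: 0 });

// ...or map a complex to a real by dropping the imaginary part...
const realPart = (c: Complex): number => c.re;

// ...or relate them through the angle of the polar representation instead.
const theta = (c: Complex): number => Math.atan2(c.im, c.re);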
My answer does not answer why it is avoided but tries to give another hint at why it can be avoided.
Using "type classes" you can add an abstraction over existing types/classes without modifying them. Inheritance is used to express that some classes are specializations of a more abstract class. But with type classes you can take any existing classes and express that they all share a common property, for example they are Comparable. And as long as you are not concerned with them being Comparable you don't even notice it. The classes don't inherit any methods from some abstract Comparable type as long as you don't use them. It's a bit like programming in dynamic languages.
Further reads:
http://blog.tmorris.net/the-power-of-type-classes-with-scala-implicit-defs/
http://debasishg.blogspot.com/2010/07/refactoring-into-scala-type-classes.html
I don't know Scala, but I think the mantra "prefer composition over inheritance" applies to Scala exactly the way it does to every other OO programming language (and subtyping is often used with the same meaning as "inheritance"). Here
Prefer composition over inheritance?
you will find some more information.
I think lots of Scala programmers are former Java programmers. They are used to thinking in terms of object-oriented subtyping, and they should be able to easily find OO-like solutions for most problems. But functional programming is a new paradigm to discover, so people ask for a different kind of solution.
This is the best paper I have found on the subject. A motivating quote from the paper –
We argue that while some of the simpler aspects of object-oriented languages are compatible with ML, adding a full-fledged class-based object system to ML leads to an excessively complex type system and relatively little expressive gain.

OOP Reuse without Inheritance: How "real-world" practical is this?

This article describes an approach to OOP I find interesting:
What if objects exist as encapsulations, and they communicate via messages? What if code re-use has nothing to do with inheritance, but uses composition, delegation, even old-fashioned helper objects or any technique the programmer deems fit? The ontology does not go away, but it is decoupled from the implementation.
The idea of reuse without inheritance or dependence on a class hierarchy is what I found most astounding, but how feasible is this?
Examples were given, but I can't quite see how I can change my current code to adopt this approach.
So how feasible is this approach? Or is there really not a need for changing code but rather a scenario-based approach where "use only when needed or optimal"?
EDIT: oops, I forgot the link: here it is link
I'm sure you've heard of "always prefer composition over inheritance".
The basic idea of this premise is that multiple objects with different functionalities are put together to create one fully-featured object. This should be preferred over inheriting functionality from disparate objects that have nothing to do with each other.
The main argument regarding this is contained in the definition of the Liskov Substitution Principle and playfully illustrated by this poster:
If you had a ToyDuck object, which object should you inherit from, from a purely inheritance standpoint? Should you inherit from Duck? No -- most likely you should inherit from Toy.
The bottom line is that you should be using the correct method of abstraction, whether inheritance or composition, for your code.
For your current objects, consider if there are objects that ought to be removed from the inheritance tree and included merely as a property that you can call and invoke.
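A hedged TypeScript sketch of the ToyDuck point (the classes are hypothetical): inherit from what the thing is, and compose what it merely does.

// A ToyDuck IS a toy...
class Toy {
  constructor(readonly material: string) {}
}

// ...while duck-like behavior is a property, not a parent class.
interface QuackBehavior {
  quack(): string;
}

const squeaker: QuackBehavior = { quack: () => "squeak!" };

class ToyDuck extends Toy {
  constructor(private quacker: QuackBehavior) {
    super("plastic");
  }
  quack(): string {
    // Delegation instead of inheriting from Duck.
    return this.quacker.quack();
  }
}

new ToyDuck(squeaker).quack(); // "squeak!"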
Inheritance is not well suited for code reuse. Inheriting for code reuse usually leads to:
Classes with inherited methods that must not be called on them (violating the Liskov substitution principle), which confuses programmers and leads to bugs.
Deep hierarchies, where it takes an inordinate amount of time to find the method you need when it can be declared anywhere in a dozen or more classes.
Generally the inheritance tree should not get more than two or three levels deep and usually you should only inherit interfaces and abstract base classes.
There is, however, no point in rewriting existing code just for the sake of it. But when you need to modify, try to switch to composition where possible. That will usually allow you to modify the code in smaller pieces, since there will be less coupling between the classes.
I just skimmed the text, but it seems to say what OO design was always about: inheritance is not meant as a code-reuse tool, and loose coupling is good. This has been written dozens of times before; see the linked references at the bottom of the article. This does not mean you should skip inheritance entirely, you just have to use it consciously and only when it makes sense. The article also states this.
As for the duck typing, I find the examples and thoughts questionable. Like this one:
function good (foo) {
  if ( !foo.baz || !foo.quux ) {
    throw new TypeError("We need foo to have baz and quux methods.");
  }
  return foo.baz(foo.quux(10));
}
What’s the point in adding three new lines just to report an error that would be reported by the runtime automatically?
Inheritance is fundamental
no inheritance, no OOP.
prototyping and delegation can be used to effect inheritance (like in JavaScript), which is fine, and is functionally equivalent to inheritance
objects, messages, and composition but no inheritance is object-based, not object-oriented. VB5, not Java. Yes, it can be done; plan on writing a lot of boilerplate code to expose interfaces and forward operations (see the sketch after this list).
Those that insist inheritance is unnecessary, or that it is 'bad' are creating strawmen: it is easy to imagine scenarios where inheritance is used badly; this is not a reflection on the tool, but on the tool-user.
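A TypeScript sketch of that forwarding boilerplate (hypothetical types): without inheritance, every exposed operation has to be restated and delegated by hand.

interface Greeter {
  greet(name: string): string;
}

class EnglishGreeter implements Greeter {
  greet(name: string) {
    return `Hello, ${name}`;
  }
}

// No inheritance: re-expose the interface and forward each operation manually.
class LoudGreeter implements Greeter {
  private inner: Greeter = new EnglishGreeter();
  greet(name: string) {
    return this.inner.greet(name).toUpperCase(); // forwarded, then decorated
  }
  // ...and so on, one forwarding method per operation on the interface.
}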

Are interfaces redundant with multiple inheritance?

This is not yet another question about the difference between abstract classes and interfaces, so please think twice before voting to close it.
I am aware that interfaces are essential in those OOP languages which don't support multiple inheritance, such as C# and Java. But what about those with multiple inheritance? Would the concept of an interface (as a specific language feature) be redundant in a language with multiple inheritance? I guess that an OOP "contract" between classes can be established using abstract classes.
Or, to put it a bit more explicitly, are interfaces in C# and Java just a consequence of the fact that they do not support multiple inheritance?
Not at all. Interfaces define contracts without specifying implementations.
So they are needed even if multiple inheritance is present - inheritance is about implementation.
Technically, you can use an abstract class in multiple inheritance to simulate an interface. But then one can be inclined to write some implementation there, which creates big messes.
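In TypeScript terms (illustrative only), the simulation and the temptation it warns about look like this:

// A "pure" abstract class: usable exactly like an interface.
abstract class Printable {
  abstract print(): string;
}

// The mess: once it is a class, nothing stops implementation
// and state from creeping into the "contract".
abstract class PrintableWithBaggage {
  abstract print(): string;
  protected cache: string | null = null; // state sneaks in
  printTwice(): string {
    return this.print() + this.print();
  }
}

class Report extends Printable {
  print() {
    return "quarterly report";
  }
}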
Depends on the test for redundancy.
If the test is "can this task be achieved without the language feature?", then classes themselves are redundant, because there are Turing-complete languages without classes. Or, from an engineering base, anything beyond machine code is redundant.
Realistically, the test is a more subtle combination of syntax and semantics. A thing is redundant if it doesn't improve either the syntax or the semantics of a language, for a reasonable number of uses.
In languages that make the distinction, supporting an interface declares that a class knows how to converse in a certain manner. Inheriting from another class imports (and, probably, extends or modifies) the functionality of another class.
Since the two tasks are not logically equivalent, I maintain that interfaces are not redundant. Distinguishing between the two improves the semantics for a large number of programs because it can more specifically indicate programmer intent.
... The lack of multiple inheritance forced us to add the concept of interfaces...
Krzysztof Cwalina, in The C# Programming Language (4th Ed.) (p. 56)
So yes, I believe interfaces are redundant given multiple inheritance. You could use pure abstract base classes in a language supporting multiple inheritance or mix-ins.
That said, I'm quite happy with single inheritance most of the time. Eric Lippert makes the point earlier in the same volume (p. 10) that the choice of single inheritance "... eliminates in one stroke many of the complicated corner cases..."
There are languages that support multiple inheritance that do not include a parallel concept to the Java interface. Eiffel is one of them. Bertrand Meyer did not see the need for them, since there was the ability to define a deferred class (which is something most folks call an abstract class) with a fleshed out contract.
The lack of multiple inheritance can lead to situations where a programmer needs to create a utility class or the like to prevent writing duplicated code in objects that implement the same interface.
It may be that the presence of the contract was a significant contribution to the absence of a completely implementation-free concept of an interface... Contracts are harder to write without some implementation details to test against.
So, technically interfaces are redundant in a language that supports MI.
But, as others have pointed out... multiple inheritance can be a very tricky thing to use correctly, all the time. I know I couldn't... and I worked for Meyer as he was drafting Object Oriented Software Construction, 2nd edition.
Are interfaces in C# and Java just a consequence of the fact that they do not support multiple inheritance?
Yes, they are. At least in Java. Java's creators wanted a simple language that most developers could grasp without extensive training. To that end, they worked to make the language as similar to C++ as possible (familiar) without carrying over C++'s unnecessary complexity (simple). Java's designers chose to allow multiple interface inheritance through the use of interfaces, an idea borrowed from Objective-C's protocols.
See there for details
And yes, I believe that, as in C++, interfaces are redundant if you have multiple inheritance. If you have the more powerful feature, why keep the lesser one?
Well, if you go down this road, you could say that C and C++, C# and all other high-level languages are redundant, because you can code anything you want using assembly. Sure, you don't absolutely need these high-level languages; however, they help... a lot.
All these languages come with various utilities. For some of them, the interface concept is one of these utilities. So yes, in C++, you could avoid using interfaces and stick with abstract classes without implementation.
As a matter of fact, if you want to program Microsoft COM with C, although C doesn't know the interface concept, you can do it because all .h files define interfaces this way:
#if defined(__cplusplus) && !defined(CINTERFACE)

MIDL_INTERFACE("ABCDE000-0000-0000-0000-000000000000")
IMyInterface : public IUnknown
{
    ...
};

#else /* C style interface */

typedef struct IMyInterfaceVtbl
{
    BEGIN_INTERFACE
    HRESULT ( STDMETHODCALLTYPE *SomeMethod )(... ...);
    END_INTERFACE
} IMyInterfaceVtbl;

interface IMyInterface
{
    CONST_VTBL struct IMyInterfaceVtbl *lpVtbl;
};

#endif
Just another kind of syntactic sugar...
And it's true to say that in C#, if I didn't have the interface concept, I don't know how I could really code :). In C#, we absolutely need interfaces.
Interfaces are preferable to multiple inheritance, since inheritance violates encapsulation, according to Effective Java, Item 16: "Favor composition over inheritance".