Converting interfaces into a hierarchical structure in OOD - OOP

I have a question about the Facade design pattern. As I started learning design patterns from the book Design Patterns: Elements of Reusable Object-Oriented Software, I found a good explanation of what it is and how it solves the problem.
(The book includes a picture here: a diagram of the Facade structure, with Domain as the facade in front of the subsystem and Process as the client.)
Problem:
Suppose I add some extra functionality to the subsystem for which Domain is the Facade/interface. With this design, I think it's not possible to add extra functionality to the subsystem without changing the Domain class?
Second, suppose I use an abstract class Domain (to create a hierarchical structure) and delegate all the requests to its subclasses, so that whenever I want to add new functionality, I simply extend the new class/subsystem from Domain (abstract). Would that be wrong, or would I still have a Facade structure?
The same thing happens with the Adapter pattern. We can have different kinds of adapters; instead of hard-coding one class, can we create such a hierarchical structure without violating any OOD rule?

The facade and adapter design patterns are both part of the so-called "wrapper" patterns (along with decorator and proxy). They essentially wrap certain functionality and provide a different interface. The difference between them is in their intent:
facade: provides a simple interface to clients, hiding the complexities of the operations behind it
adapter: allows two incompatible interfaces to work together without changing their internal structure
decorator: allows new functionality to be added to an object, statically or dynamically, without affecting the behavior of other objects of the same class
proxy: a class (the proxy) is used to represent, and allow access to, the functionality of another class
If your components "in the back" add new functionality and you want your facade to expose this functionality, you would have to adjust your facade to do so.
If you make the Domain class (the facade in your scenario) an abstract class that others extend, you do not have a facade; you have whatever inheritance hierarchy you created with your classes. Simply put, there is no "wrapping", which is what achieves the intent of the facade pattern.
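To make the wrapping concrete, here is a minimal C# sketch of a facade (the Inventory and Billing names are hypothetical stand-ins, not taken from the book's diagram):

// Hypothetical subsystem classes hidden behind the facade.
class Inventory
{
    public bool InStock(string sku) => true; // stub
}

class Billing
{
    public void Charge(string customer, decimal amount) { /* ... */ }
}

// The facade: clients talk only to Domain, never to the subsystem directly.
class Domain
{
    private readonly Inventory _inventory = new Inventory();
    private readonly Billing _billing = new Billing();

    public void PlaceOrder(string customer, string sku, decimal price)
    {
        if (_inventory.InStock(sku))
            _billing.Charge(customer, price);
    }

    // If the subsystem later gains, say, refunds, the facade must grow a
    // new method to expose them; existing clients keep working unchanged.
}

Note there is no inheritance anywhere: the facade holds the subsystem, it does not extend it.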

With this design, I think it's not possible to add extra functionality in the subsystem without changing the Domain class?
True. However, the changes you make may (or may not) affect the client (Process) class. If you add a new method to the Façade, it won't break the "old" clients. Although it's not its explicit intention (which is to hide complexities of a sub-system), Façade can provide a stable interface to its clients that can be extended. When I say interface, I don't mean a Java or C# interface. It's a programming interface.
A real-world example is the JOptionPane Façade in Java/Swing. Check the Javadoc at the link and you'll see that some of its methods existed in 1.4, some in 1.6, etc. Basically, since this class is part of the Swing library, it had to remain stable so that old clients of its interface would not break. But it was still extended with new functionality, simply by adding new methods.
I would say this is how Façades are typically extended: not with subclassing or hierarchy. Hierarchies are difficult to maintain because they are brittle. If you get the abstraction wrong (the root of the hierarchy), then changing it affects the entire tree. Hierarchies make sense when the abstraction in the hierarchy is stable (certain).
The Adapter pattern has a hierarchy because an Adapter adapts a method to work with several variants of a service that cannot be changed. You can see examples of several stable (abstract) services, such as tax calculation, accounting services, and credit authorization, at https://stackoverflow.com/a/13323703/1168342.
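As a rough sketch of that kind of adapter hierarchy (the tax-service names here are invented for illustration, not taken from the linked answer):

// The stable abstraction the client codes against.
abstract class TaxAdapter
{
    public abstract decimal CalculateTax(decimal amount);
}

// A stand-in for a third-party API whose interface we cannot change.
class VendorATaxService
{
    public decimal ComputeSalesTax(decimal amount) => amount * 0.08m;
}

// One concrete adapter per vendor, each translating the stable
// CalculateTax call into that vendor's own signature.
class VendorATaxAdapter : TaxAdapter
{
    private readonly VendorATaxService _service = new VendorATaxService();

    public override decimal CalculateTax(decimal amount)
        => _service.ComputeSalesTax(amount);
}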

Related

"Client" concept in OOP Design Patterns?

I have read many topics about the GoF OOP design patterns, but I am not sure about the "Client" concept. What is it? How can we realize it in our application? Thanks!
In the GoF book, the client is the code or class that uses the classes in the pattern.
For example, from the Abstract Factory pattern, under Motivation:
Consider a user interface toolkit that supports multiple look-and-feel standards, such as Motif and Presentation Manager. Different look-and-feels define different appearances and behaviors for user interface "widgets" like scroll bars, windows, and buttons. To be portable across look-and-feel standards, an application should not hard-code its widgets for a particular look and feel. Instantiating look-and-feel-specific classes of widgets throughout the application makes it hard to change the look and feel later.
We can solve this problem by defining an abstract WidgetFactory class that declares an interface for creating each basic kind of widget. There's also an abstract class for each kind of widget, and concrete subclasses implement widgets for specific look-and-feel standards. WidgetFactory's interface has an operation that returns a new widget object for each abstract widget class. Clients call these operations to obtain widget instances, but clients aren't aware of the concrete classes they're using. Thus clients stay independent of the prevailing look and feel.
There is a concrete subclass of WidgetFactory for each look-and-feel standard. Each subclass implements the operations to create the appropriate widget for the look and feel. For example, the CreateScrollBar operation on the MotifWidgetFactory instantiates and returns a Motif scroll bar, while the corresponding operation on the PMWidgetFactory returns a scroll bar for Presentation Manager. Clients create widgets solely through the WidgetFactory interface and have no knowledge of the classes that implement widgets for a particular look and feel. In other words, clients only have to commit to an interface defined by an abstract class, not a particular concrete class.
A WidgetFactory also enforces dependencies between the concrete widget classes. A Motif scroll bar should be used with a Motif button and a Motif text editor, and that constraint is enforced automatically as a consequence of using a MotifWidgetFactory.
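A minimal C# sketch of that motivation (the names follow the book's example; the widget bodies are stubs):

abstract class ScrollBar { }
class MotifScrollBar : ScrollBar { }
class PMScrollBar : ScrollBar { }

// The abstract factory the client commits to.
abstract class WidgetFactory
{
    public abstract ScrollBar CreateScrollBar();
}

class MotifWidgetFactory : WidgetFactory
{
    public override ScrollBar CreateScrollBar() => new MotifScrollBar();
}

class PMWidgetFactory : WidgetFactory
{
    public override ScrollBar CreateScrollBar() => new PMScrollBar();
}

// The client: it sees only the abstract types, so the look and feel
// can be swapped by handing it a different factory.
class Application
{
    private readonly ScrollBar _scrollBar;

    public Application(WidgetFactory factory)
    {
        _scrollBar = factory.CreateScrollBar();
    }
}

Here Application is the "client" in the GoF sense: the code that uses the pattern's classes without knowing the concrete types behind them.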
As a pattern, a client is an actor that initiates an interaction with a server, which is a functional, but typically passive, actor. Acting on the client's behalf as described by a request, the server performs some action and makes a report back in the form of a response.
As such, the point of a client interface is to make it convenient or possible for arbitrary code to formulate a request and attract the attention of a server. Since the request message might be conveyed over a wide variety of media (a different memory space, for example), some kind of transparent transport is usually involved, hidden behind this request interface.
That's pretty much the long and short of it as a concept. One of the drawbacks of a very flexible pattern (which certainly applies to client/server) is one needs to descend into a specific example, framework or library to speak concretely.
The client is just another module or class from the system that uses the concrete pattern (all or some of the components that make up the pattern).
A client is a caller/consumer. A client is not a subclass/implementer. In terms of a method, a client is the caller of that method. In terms of a class, a client is the caller of methods in that class.
You could say that every method has a client, because without a caller a method is dead code; however, the term client is typically reserved for the caller of a public method, since private methods are just implementation details, not relevant to design.
In a design diagram, such as a UML class diagram, a client indicates where the public access points are and how the design is used after it is implemented.

Object Oriented Programming principles

I was wondering: I recently read an article that spoke of the ills of using the singleton pattern, citing the disadvantage of global variables and, rightly, that the singleton violates a lot of the rules we learn from OOP school - the single responsibility principle, programming to interfaces and abstract classes and not to concrete classes... all that good stuff. I was wondering how, then, you work with something like a database connection class, where you want just one connection to your DB and one object of your DB floating around. The author spoke of the Dependency Injection principle, which to my mind sits well with the Dependency Inversion rule. How do I know and control what object gets passed around as a dependency, other than the fact that I created the class and expect everyone using it to play nice and make sure they are using the right resource?!
Edit: This answer assumes you are using a dependency injection container, either one you wrote yourself, or one you got from a library. If not, then use a DI container :)
How do I know and control what object gets passed around as a dependency, other than the fact that I created the class and expect everyone using it to play nice and make sure they are using the right resource?!
By contract
The oral contract - You write a design spec that says "thou shalt not instantiate this class directly" and "thou shalt not pass around any object you got from the dependency injection container. Pass the container if you have to".
The compiler contract - You give them a dependency injection container, and they grab the instance out of it, by abstract interface. If you want only a single instance to be used, you can supply them a named instance, which they extract with both the name, and the interface.
ISomething instance =
    serviceLocator.ResolveInstance<ISomething>("TheInstanceImSupposedToUse");
You can also make all your concrete classes private/internal/what-have-you, and only provide them an abstract interface to operate against. This will prevent them from instantiating the classes themselves.
// This can only be instantiated by you, but can be used by them via ISomething
internal class ConcreteSomething : ISomething
{
    // ...
}
By code review
You make group-wide coding and design standards that are fair, and make sure they are understood by everyone within the group.
You use a source control mechanism, and require code reviews before they check in. You read over their code for what they link to, what headers they include, what objects they instantiate, and what instances they are passing around.
If they violate your rules during code reviews, you don't let them check in until they fix their code. Optionally, for repeat offenders, you make them pay you a dollar, you make them buy you lunch, or you hire a different contractor to replace them. Whatever works well within your group :)
For those who criticize the singleton pattern, based on SRP, here is an opposing view. Also, I've found that dependency injection containers can create as many problems as they solve. That said, I'm using a promising compromise, as covered in another post.
Dependency injection containers (even one you develop yourself, which isn't an entirely uncommon practice) are generally very configurable. What you'd do in that scenario is configure it such that any request for the interface that the implementation, well, implements would be satisfied with that implementation. Even if it's a singleton.
For example, take a look at the Logger singleton being used here: http://www.pnpguidance.net/News/StructureMapTutorialDependencyInjectionIoCNET.aspx
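To sketch what such a configuration might look like (using a toy hand-rolled container here, so the registration API below is invented for illustration and is not StructureMap's):

using System;
using System.Collections.Generic;

// A toy container: every request for an interface returns one shared instance.
class Container
{
    private readonly Dictionary<Type, object> _singletons =
        new Dictionary<Type, object>();

    public void RegisterSingleton<TInterface>(TInterface instance)
        => _singletons[typeof(TInterface)] = instance;

    public TInterface Resolve<TInterface>()
        => (TInterface)_singletons[typeof(TInterface)];
}

interface ILogger { void Log(string message); }

class FileLogger : ILogger
{
    public void Log(string message) { /* append to a file */ }
}

class Demo
{
    static void Main()
    {
        var container = new Container();
        container.RegisterSingleton<ILogger>(new FileLogger());

        // One instance shared by every resolver: "singleton" behavior
        // without the Singleton pattern's static global access point.
        ILogger logger = container.Resolve<ILogger>();
        logger.Log("configured via the container");
    }
}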
Don't take what you read anywhere as absolute truth. Read it, understand it and then you can see when it's best to apply certain things. In your case, why wouldn't you want to create a static singleton?

What is the difference between an Abstraction and a Facade?

What is the difference between an 'Abstraction' and a 'Facade'?
Is there a difference at all? Or are the terms interchangeable?
The facade pattern is a simplified interface to a larger, possibly more complex code base. The code base may be a single class, or more; the facade just gives you a simple interface to it.
Abstraction is used to represent a concept without being bound to any specific instance (i.e., an abstract class). This doesn't imply simplifying (like the facade pattern does), but rather making a 'common' interface or representation.
Facade is a specific design pattern, meant to hide the internal stuff inside a package / module from its clients behind a well-defined interface. It usually hides several interfaces/classes behind a single common one, hence its name.
'Abstraction' is a general term, meaning to hide the concrete details of something from the outside world.
So these two are not interchangeable terms.
Facade is a GoF design pattern, and very specific. In essence, it's about hiding over-complex functionality from the main body of your application.
Abstraction is a more vague term related to hiding functionality of a service from its client.
Abstract to me means taking the common parts of a collection of things and creating a base thing from them, which the collection can then draw on, sort of like a parent class.
A façade is a face (literally speaking), so the analogy of a base class doesn't quite hold. A façade is more of an interface, so it wouldn't have to be related to the things that use it. I think of it more like a mask. My class will have a "disposable" mask, for example.
So the difference, in my mind, is that an abstract pattern allows a hierarchy to be built, whereas a façade pattern allows classes to look similar.
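To make the contrast concrete, a small hedged C# sketch (the class names are invented):

using System;

// Abstraction: a base that captures what a family of things has in common.
abstract class Shape
{
    public abstract double Area();
}

class Circle : Shape
{
    public double Radius;
    public override double Area() => Math.PI * Radius * Radius;
}

// Facade: a flat, simple front over a messy subsystem; no hierarchy implied.
class VideoConverterFacade
{
    public void Convert(string inputPath, string outputPath)
    {
        // internally orchestrates decoders, encoders, muxers, etc.
    }
}

The abstraction invites subtyping; the facade merely hides machinery behind one simple entry point.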

Should every single object have an interface and all objects loosely coupled?

From what I have read, best practice is to have classes based on interfaces and to loosely couple the objects, in order to help code reuse and unit testing.
Is this correct, and is it a rule that should always be followed?
The reason I ask is that I recently worked on a system with hundreds of very different objects. A few shared common interfaces, but most did not, and I wonder whether it should have had an interface mirroring every property and function in those classes.
I am using C# and .NET 2.0; however, I believe this question would fit many languages.
It's useful for objects which really provide a service - authentication, storage etc. For simple types which don't have any further dependencies, and where there are never going to be any alternative implementations, I think it's okay to use the concrete types.
If you go overboard with this kind of thing, you end up spending a lot of time mocking/stubbing everything in the world - which can often end up creating brittle tests.
Not really. Service components (classes that do things for your application) are a good fit for interfaces, but as a rule I wouldn't bother having interfaces for, say, basic entity classes.
For example:
If you're working on a domain model, then that model shouldn't be interfaces. However, if that domain model wants to call service classes (like data access, operating system functions, etc.), then you should be looking at interfaces for those components. This reduces coupling between the classes and means it's the interface, or "contract", that is coupled.
In this situation you then start to find it much easier to write unit tests (because you can have stubs/mocks/fakes for database access etc) and can use IoC to swap components without recompiling your applications.
I'd only use interfaces where that level of abstraction was required - i.e. you need to use polymorphic behaviour. Common examples would be dependency injection or where you have a factory-type scenario going on somewhere, or you need to establish a "multiple inheritance" type behaviour.
In my case, with my development style, this is quite often (I favour aggregation over deep inheritance hierarchies for most things other than UI controls), but I have seen perfectly fine apps that use very little. It all depends...
Oh yes, and if you do go heavily into interfaces - beware web services. If you need to expose your object methods via a web service they can't really return or take interface types, only concrete types (unless you are going to hand-write all your own serialization/deserialization). Yes, that has bitten me big time...
A downside to interfaces is that they can't be versioned. Once you've shipped the interface, you won't be making changes to it. If you use abstract classes, then you can easily extend the contract over time by adding new methods and flagging them as virtual.
As an example, all stream objects in .NET derive from System.IO.Stream, which is an abstract class. This made it easy for Microsoft to add new features. In version 2 of the framework they added the ReadTimeout and WriteTimeout properties without breaking any code. If they had used an interface (say, IStream), they wouldn't have been able to do this. Instead, they'd have had to create a new interface to define the timeout methods, and we'd have to write code that conditionally casts to this interface if we wanted to use the functionality.
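In sketch form (Stream and its timeout properties are the real .NET example; the class below is a simplified stand-in):

using System;

// Version 1 shipped with just Read.
abstract class StreamBase
{
    public abstract int Read(byte[] buffer);

    // Version 2 can add a virtual member with a default implementation;
    // every existing subclass still compiles and runs unchanged.
    public virtual int ReadTimeout
    {
        get => throw new InvalidOperationException("Timeouts not supported.");
        set => throw new InvalidOperationException("Timeouts not supported.");
    }
}

// Had StreamBase been an interface, adding ReadTimeout to it would have
// broken every existing implementer at compile time.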
Interfaces should be used when you want to clearly define the interaction between two different sections of your software. Especially when it is possible that you want to rip out either end of the connection and replace it with something else.
For example, in my CAM application I have a CuttingPath connected to a collection of Points. It makes no sense to have an IPointList interface, as CuttingPaths are always going to be comprised of Points in my application.
However, I use the IMotionController interface to communicate with the machine, because we support many different types of cutting machine, each with its own command set and method of communication. So in that case it makes sense to put it behind an interface, as one installation may be using a different machine than another.
Our application has been maintained since the mid-80s and moved to an object-oriented design in the late 90s. I have found that what could change greatly exceeded what I originally thought, and the use of interfaces has grown. For example, it used to be that our DrawingPath was comprised of points. But now it is comprised of entities (splines, arcs, etc.), so it points to an EntityList, which is a collection of objects implementing the IEntity interface.
That change was propelled by the realization that a DrawingPath could be drawn using many different methods. Once it was realized that a variety of drawing methods was needed, the need for an interface, as opposed to a fixed relationship to an Entity object, became clear.
Note that in our system, DrawingPaths are rendered down to a low-level cutting path, which is always a series of point segments.
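In sketch form, the controller side of that design might look like this (the method and class names are invented; the real command sets would differ):

// The stable contract the application drives any machine through.
interface IMotionController
{
    void MoveTo(double x, double y);
    void SetCutter(bool on);
}

// One implementation per machine type, each with its own protocol.
class SerialPlasmaController : IMotionController
{
    public void MoveTo(double x, double y) { /* write serial command */ }
    public void SetCutter(bool on) { /* write serial command */ }
}

class EthernetLaserController : IMotionController
{
    public void MoveTo(double x, double y) { /* send network packet */ }
    public void SetCutter(bool on) { /* send network packet */ }
}

Swapping machines at an installation then means swapping the implementation, not touching the cutting logic.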
I tried to take the advice of 'code to an interface' literally on a recent project. The end result was essentially duplication of the public interface (small i) of each class precisely once in an Interface (big I) implementation. This is pretty pointless in practice.
A better strategy I feel is to confine your interface implementations to verbs:
Print()
Draw()
Save()
Serialize()
Update()
...etc. This means that classes whose primary role is to store data - and if your code is well-designed, they would usually only do that - don't want or need interface implementations. Interfaces belong anywhere you might want runtime-configurable behaviour, for example a variety of different graph styles representing the same data.
It's better still when the thing asking for the work really doesn't want to know how the work is done. This means you can give it a macguffin that it can simply trust will do whatever its public interface says it does, and let the component in question simply choose when to do the work.
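As a hedged sketch of the graph-styles example (the names are invented): the data class stays concrete, and only the runtime-swappable behavior hides behind a verb interface.

// Plain data: no interface needed.
class DataSeries
{
    public double[] Values = new double[0];
}

// The verb: rendering behavior that can vary at runtime.
interface IDrawable
{
    void Draw(DataSeries series);
}

class BarChartRenderer : IDrawable
{
    public void Draw(DataSeries series) { /* draw bars */ }
}

class LineChartRenderer : IDrawable
{
    public void Draw(DataSeries series) { /* draw a line */ }
}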
I agree with kpollock. Interfaces are used to establish common ground for objects. The fact that they can be used in IoC containers and for other purposes is an added feature.
Let's say you have several types of customer classes that vary slightly but have common properties. In this case it is great to have an ICustomer interface to bind them together logically. By doing that, you could create a CustomerHandler class/method that handles ICustomer objects the same way, instead of creating a handler method for each variation of customer.
This is the strength of interfaces.
If you only have a single class that implements an interface, then the interface isn't much help; it just sits there and does nothing.
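A minimal sketch of that idea (ICustomer and CustomerHandler being the hypothetical names from the paragraph above):

interface ICustomer
{
    string Name { get; }
    decimal Discount { get; }
}

class RetailCustomer : ICustomer
{
    public string Name { get; set; } = "";
    public decimal Discount => 0m;
}

class WholesaleCustomer : ICustomer
{
    public string Name { get; set; } = "";
    public decimal Discount => 0.15m;
}

// One handler serves every variation of customer.
class CustomerHandler
{
    public decimal PriceFor(ICustomer customer, decimal listPrice)
        => listPrice * (1 - customer.Discount);
}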

Why should you prevent a class from being subclassed?

What can be reasons to prevent a class from being inherited? (e.g. using sealed on a C# class)
Right now I can't think of any.
Because writing classes to be substitutably extended is damn hard and requires you to make accurate predictions of how future users will want to extend what you've written.
Sealing your class forces them to use composition, which is much more robust.
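A quick sketch of what "forcing composition" looks like in C# (the names are invented):

using System;

sealed class RateLimiter
{
    public bool Allow() => true; // stub
}

// RateLimiter cannot be subclassed, so extra behavior is added by wrapping
// it; the wrapper depends only on the sealed class's public surface.
class AuditedRateLimiter
{
    private readonly RateLimiter _inner = new RateLimiter();

    public bool Allow()
    {
        bool allowed = _inner.Allow();
        Console.WriteLine("Allow() -> " + allowed); // added behavior
        return allowed;
    }
}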
How about if you are not sure about the interface yet and don't want any other code depending on the present interface? [That's off the top of my head, but I'd be interested in other reasons as well!]
Edit:
A bit of googling gave the following:
http://codebetter.com/blogs/patricksmacchia/archive/2008/01/05/rambling-on-the-sealed-keyword.aspx
Quoting:
There are three reasons why a sealed class is better than an unsealed class:
Versioning: When a class is originally sealed, it can change to unsealed in the future without breaking compatibility. (…)
Performance: (…) if the JIT compiler sees a call to a virtual method using a sealed type, the JIT compiler can produce more efficient code by calling the method non-virtually. (…)
Security and Predictability: A class must protect its own state and not allow itself to ever become corrupted. When a class is unsealed, a derived class can access and manipulate the base class’s state if any data fields or methods that internally manipulate fields are accessible and not private.(…)
I want to give you this message from "Code Complete":
Inheritance - subclasses - tends to work against the primary technical imperative you have as a programmer, which is to manage complexity. For the sake of controlling complexity, you should maintain a heavy bias against inheritance.
The only legitimate use of inheritance is to define a particular case of a base class, for example, inheriting from Shape to derive Circle. To check this, look at the relation in the opposite direction: is Shape a generalization of Circle? If the answer is yes, then it is OK to use inheritance.
So if you have a class for which there cannot be any particular cases that specialize its behavior, it should be sealed.
Also, due to the LSP (Liskov Substitution Principle), one can use a derived class where the base class is expected, and this actually imposes the greatest constraint from the use of inheritance: code using the base class may be given an inherited class, and it still has to work as expected. To protect external code when there is no obvious need for subclasses, you seal the class, and its clients can rely on its behavior not changing. Otherwise, external code needs to be explicitly designed to expect possible changes in behavior in subclasses.
A more concrete example is the Singleton pattern. You need to seal a singleton to ensure that no one can break its "singletonness".
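In C#, that looks something like this (a standard sealed-singleton sketch, not tied to any particular codebase):

// Sealed, so no subclass can add another way to construct instances
// or change the behavior handed out through Instance.
sealed class Configuration
{
    public static Configuration Instance { get; } = new Configuration();

    private Configuration() { } // no outside instantiation
}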
This may not apply to your code, but a lot of classes within the .NET framework are sealed purposely so that no one tries to create a sub-class.
There are certain situations where the internals are complex and require certain things to be controlled very specifically so the designer decided no one should inherit the class so that no one accidentally breaks functionality by using something in the wrong way.
#jjnguy
Another user may want to re-use your code by sub-classing your class. I don't see a reason to stop this.
If they want to use the functionality of my class they can achieve that with containment, and they will have much less brittle code as a result.
Composition seems to be often overlooked; all too often people want to jump on the inheritance bandwagon. They should not! Substitutability is difficult. Default to composition; you'll thank me in the long run.
I am in agreement with jjnguy... I think the reasons to seal a class are few and far between. Quite the contrary, I have been in the situation more than once where I want to extend a class, but couldn't because it was sealed.
As a perfect example, I was recently creating a small package (Java, not C#, but same principles) to wrap functionality around the memcached tool. I wanted an interface so in tests I could mock away the memcached client API I was using, and also so we could switch clients if the need arose (there are 2 clients listed on the memcached homepage). Additionally, I wanted to have the opportunity to replace the functionality altogether if the need or desire arose (such as if the memcached servers are down for some reason, we could potentially hot swap with a local cache implementation instead).
I exposed a minimal interface to interact with the client API, and it would have been awesome to extend the client API class and then just add an implements clause with my new interface. The methods that I had in the interface that matched the actual interface would then need no further details and so I wouldn't have to explicitly implement them. However, the class was sealed, so I had to instead proxy calls to an internal reference to this class. The result: more work and a lot more code for no real good reason.
That said, I think there are potential times when you might want to make a class sealed... and the best thing I can think of is an API that you will invoke directly, but allow clients to implement. For example, a game where you can program against the game... if your classes were not sealed, then the players who are adding features could potentially exploit the API to their advantage. This is a very narrow case though, and I think any time you have full control over the codebase, there really is little if any reason to make a class sealed.
This is one reason I really like the Ruby programming language... even the core classes are open, not just to extend but to ADD AND CHANGE functionality dynamically, TO THE CLASS ITSELF! It's called monkeypatching and can be a nightmare if abused, but it's damn fun to play with!
From an object-oriented perspective, sealing a class clearly documents the author's intent without the need for comments. When I seal a class I am trying to say that this class was designed to encapsulate some specific piece of knowledge or some specific service. It was not meant to be enhanced or subclassed further.
This goes well with the Template Method design pattern. I have an interface that says "I perform this service." I then have a class that implements that interface. But, what if performing that service relies on context that the base class doesn't know about (and shouldn't know about)? What happens is that the base class provides virtual methods, which are either protected or private, and these virtual methods are the hooks for subclasses to provide the piece of information or action that the base class does not know and cannot know. Meanwhile, the base class can contain code that is common for all the child classes. These subclasses would be sealed because they are meant to accomplish that one and only one concrete implementation of the service.
Can you make the argument that these subclasses should be further subclassed to enhance them? I would say no because if that subclass couldn't get the job done in the first place then it should never have derived from the base class. If you don't like it then you have the original interface, go write your own implementation class.
Sealing these subclasses also discourages deep levels of inheritance, which work well for GUI frameworks but poorly for business logic layers.
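A compact sketch of that shape (the names are invented):

using System;

interface IReportService
{
    void Run();
}

abstract class ReportBase : IReportService
{
    // Template method: the shared skeleton lives in the base.
    public void Run()
    {
        string data = LoadData(); // hook: context the base cannot know
        Console.WriteLine(data);
    }

    protected abstract string LoadData();
}

// Sealed: this is the one concrete realization of the service,
// not a further extension point.
sealed class SalesReport : ReportBase
{
    protected override string LoadData() => "sales figures";
}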
Because you always want to be handed a reference to the class and not to a derived one, for various reasons:
i. invariants that you have in some other part of your code
ii. security
etc.
Also, because it's a safe bet with regard to backward compatibility - you'll never be able to close that class to inheritance if it's released unsealed.
Or maybe you didn't have enough time to test the interface that the class exposes to be sure that you can allow others to inherit from it.
Or maybe there's no point (that you see now) in having a subclass.
Or you don't want bug reports when people try to subclass and don't manage to get all the nitty-gritty details - cut support costs.
Sometimes your class interface just isn't meant to be inherited. The public interface just isn't virtual, and while someone could override the functionality that's in place, it would just be wrong. Yes, in general they shouldn't override the public interface, but you can ensure that they don't by making the class non-inheritable.
The examples I can think of right now are customized container classes with deep clones in .NET. If you inherit from them, you lose the deep-clone ability. [I'm kind of fuzzy on this example; it's been a while since I worked with ICloneable.] If you have a true singleton class, you probably don't want inherited forms of it around, and a data persistence layer is not normally a place where you want a lot of inheritance.
Not everything that's important in a class is asserted easily in code. There can be semantics and relationships present that are easily broken by inheriting and overriding methods. Overriding one method at a time is an easy way to do this. You design a class/object as a single meaningful entity and then someone comes along and thinks if a method or two were 'better' it would do no harm. That may or may not be true. Maybe you can correctly separate all methods between private and not private or virtual and not virtual but that still may not be enough. Demanding inheritance of all classes also puts a huge additional burden on the original developer to foresee all the ways an inheriting class could screw things up.
I don't know of a perfect solution. I'm sympathetic to preventing inheritance but that's also a problem because it hinders unit testing.
I exposed a minimal interface to interact with the client API, and it would have been awesome to extend the client API class and then just add an implements clause with my new interface. The methods that I had in the interface that matched the actual interface would then need no further details and so I wouldn't have to explicitly implement them. However, the class was sealed, so I had to instead proxy calls to an internal reference to this class. The result: more work and a lot more code for no real good reason.
Well, there is a reason: your code is now somewhat insulated from changes to the memcached interface.
Performance: (…) if the JIT compiler sees a call to a virtual method using a sealed type, the JIT compiler can produce more efficient code by calling the method non-virtually. (…)
That's a great reason indeed. Thus, for performance-critical classes, sealed and friends make sense.
All the other reasons I've seen mentioned so far boil down to "nobody touches my class!". If you're worried someone might misunderstand its internals, you did a poor job documenting it. You can't possibly know that there's nothing useful to add to your class, or that you already know every imaginable use case for it. Even if you're right and the other developer shouldn't have used your class to solve their problem, using a keyword isn't a great way of preventing such a mistake. Documentation is. If they ignore the documentation, their loss.
Most of the answers (when abstracted) state that sealed/finalized classes are a tool to protect other programmers against potential mistakes. There is a blurry line between meaningful protection and pointless restriction. But as long as the programmer is the one who is expected to understand the program, I see hardly any reason to restrict him from reusing parts of a class. Most of you talk about classes. But it's all about objects!
In his first post, DrPizza claims that designing an inheritable class means anticipating possible extensions. Do I get it right that you think a class should be inheritable only if it's likely to be extended well? It looks as if you are used to designing software from the most abstract classes. Allow me a brief explanation of how I think when designing:
Starting from the very concrete objects, I find characteristics and [thus] functionality that they have in common, and I abstract these into a superclass of those particular objects. This is a way to reduce code duplication.
Unless I am developing some specific product such as a framework, I should care about my code, not others' (hypothetical) code. The fact that others might find it useful to reuse my code is a nice bonus, not my primary goal. If they decide to do so, it's their responsibility to ensure the validity of their extensions. This applies team-wide. Up-front design is crucial to productivity.
Getting back to my idea: your objects should primarily serve your purposes, not some possible shoulda/woulda/coulda functionality of their subtypes. Your goal is to solve the given problem. Object-oriented languages use the fact that many problems (or, more likely, their subproblems) are similar, and therefore existing code can be used to accelerate further development.
Sealing a class forces people who could take advantage of existing code WITHOUT ACTUALLY MODIFYING YOUR PRODUCT to reinvent the wheel. (This is a crucial idea of my thesis: inheriting from a class doesn't modify it! Which seems quite pedestrian and obvious, but is commonly ignored.)
People are often scared that their "open" classes will be twisted into something that cannot substitute for its ancestors. So what? Why should you care? No tool can prevent a bad programmer from creating bad software!
I'm not trying to present inheritable classes as the ultimately correct way of designing; consider this more an explanation of my inclination toward inheritable classes. That's the beauty of programming - a virtually infinite set of correct solutions, each with its own pros and cons. Your comments and arguments are welcome.
And finally, my answer to the original question: I'd finalize a class to let others know that I consider the class a leaf of the hierarchical class tree, and I see absolutely no possibility of it becoming a parent node. (And if anyone thinks it actually could, then either I was wrong or they didn't get my point.)