I wonder whether creating an object outside of the facade, providing the information it needs, and then passing it into the facade breaks the pattern.
Explanation:
Usually the objects used by the facade are created inside the facade itself;
however, I want to create the objects outside of the facade and then pass
them into it, and I wonder if that is a bad thing to do.
Example:
class A:
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c

    def do_something_a(self):
        print(self.a, self.b, self.c)

class B:
    def __init__(self, d, e, f):
        self.d = d
        self.e = e
        self.f = f

    def do_something_b(self):
        print(self.d, self.e, self.f)

class Facade:
    def __init__(self, class_a, class_b):
        self.class_a = class_a
        self.class_b = class_b

    def do(self):
        self.class_a.do_something_a()
        self.class_b.do_something_b()

a = A(1, 2, 3)
b = B(3, 4, 5)
f = Facade(a, b)
f.do()
This seems fine; I don't think that it breaks the Facade pattern.
When describing the pattern, GoF discusses a compiler subsystem made up of classes such as CodeGenerator, Scanner, ProgramNodeBuilder, and so on. The Motivation section states:
"Some specialized applications might need to access these classes directly. But most clients of a compiler generally don't care about details like parsing and code generation; they merely want to compile some code. For them, the powerful but low-level interfaces in the compiler subsystem only complicate their task.
"To provide a higher-level interface that can shield clients from these classes, the compiler subsystem also includes a Compiler class. This class defines a unified interface to the compiler's functionality. The Compiler acts as a facade"
Here we learn two things:
What a Facade is.
That the subsystem classes that the Facade shields from some clients are still part of the public API of the system, available for those clients that need an extra degree of control.
Later, in the Implementation section, they write:
"The Facade class is part of the public interface, of course, but it's not the only part. Other subsystem classes are usually public as well. For example, the classes Parser and Scanner in the compiler subsystem are part of the public interface."
As a conclusion to the Sample Code section, they write:
"we might want to change the Compiler constructor to take a CodeGenerator parameter. Then programmers can specify the generator to use when they instantiate Compiler. The compiler facade can parameterize other participants such as Scanner and ProgramNodeBuilder as well, which adds flexibility"
The original pattern description clearly describes this as a 'legal' option.
Apart from that, I've also found such a design useful in real life. I've also described a Facade as one design option for a DI-friendly library.
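To make the GoF suggestion above concrete with the Python example from this question, a facade can build sensible defaults while still accepting injected collaborators. This is only a minimal sketch; the default constructor arguments are invented for illustration:

class Facade:
    def __init__(self, class_a=None, class_b=None):
        # Fall back to defaults, but let callers inject their own parts.
        self.class_a = class_a if class_a is not None else A(1, 2, 3)
        self.class_b = class_b if class_b is not None else B(3, 4, 5)

    def do(self):
        self.class_a.do_something_a()
        self.class_b.do_something_b()

# Simple clients take the defaults; advanced clients parameterize the facade.
Facade().do()
Facade(class_a=A(7, 8, 9)).do()

This mirrors the Compiler/CodeGenerator idea from the quote: the facade stays convenient for the common case while remaining open to parameterization.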
Is it true that for C++ to work similarly in terms of modern OOP as in Java, Ruby, Python, the function (or methods) must be declared virtual and if not, what "strange" behaviors may occur?
I think it is true that in Java, Ruby, Python, and possibly other latecomer OOP languages such as PHP and Lua, and even in Smalltalk and Objective-C, all methods are what C++ would call "virtual functions". Is that correct?
"Method" is an unfortunately overloaded term that can mean many things. There's a reason C++ prefers different terminology, and that's because not only does it do something different from other languages, but it intends to do something different from what other languages do.
In C++ you call a member function. i.e. you externally make a call to a function associated with an object. Whether that function is virtual or not is secondary; what matters is the intended ordering of your actions - you're reaching into the object's scope, and commanding it to take a specific action. It might be that the object can specialize the action, but if so, it warned you in advance that it would do this.
In Smalltalk and the languages that imitate it (Objective-C most closely), you send a message to an object. A message is constructed on your side of the call, consisting of a task name (i.e. method selector), arguments, etc., and then packed up and sent to the object, for the object to deal with as it sees fit. Semantically, it's entirely the object's decision what to do upon receipt of the message - it can examine the task name and decide which implementation to apply dynamically, according to a user-implemented choice process, or even do nothing at all. The outside calling code doesn't get to say what the object will do, and certainly doesn't get any say in which procedure actually runs.
Some languages fall in the middle ground, e.g. Java is inspired by the latter, but doesn't give the user any way to specify unusual dynamic responses - for the sake of simplicity every message does result in a call, but which call is still hidden from the external code, because it's entirely the object's business. C++ was never built on this philosophy of messages in the first place, so it has a different default assumption about how its member functions should operate.
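As a rough illustration of the message-passing idea (sketched in Python rather than Smalltalk, purely as an analogy): the receiving object examines the "message" and decides for itself which implementation to run, or whether to respond at all. The class and method names are invented for the example:

class MessageReceiver:
    def handle(self, selector, *args):
        # The object inspects the message name and chooses its own response.
        method = getattr(self, "do_" + selector, None)
        if method is None:
            return self.message_not_understood(selector, args)
        return method(*args)

    def do_greet(self, name):
        return "hello, " + name

    def message_not_understood(self, selector, args):
        # The object may ignore, log, or forward messages it doesn't recognize.
        return None

receiver = MessageReceiver()
print(receiver.handle("greet", "world"))  # hello, world
print(receiver.handle("dance"))           # None: the object declined the message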
The thing is that C++ is like the great-grandfather of these languages. It has many of the same features, but they often require more verbose code to express.
Consider an example:
// a.hpp
class A
{
public:
    virtual void fn() = 0;
    virtual ~A() = default;
};

// b.hpp
class B : public A
{
public:
    void fn() override { /* overriding implementation */ }
};

// main.cpp
#include "a.hpp"
#include "b.hpp"

int main()
{
    A *a = new B();
    a->fn();   // dispatches to B::fn through a pointer to A
    delete a;
}
This would implement overriding in C++.
Note that virtual void fn() = 0 makes class A abstract, and calling through a pointer (or reference) to the base class A is essential for the polymorphic dispatch.
In Java, the process is even simpler:
abstract class A
{
    abstract void fn();
}

class B extends A
{
    void fn() {
        // Some insane function :)
    }
}

public class Main
{
    public static void main(String[] args) {
        A ob = new B();
        ob.fn();
    }
}
Well, the effect is the same, but the process is quite different. In short, C++ does have many of the features found in languages like Java and Ruby; they are simply expressed through some (often more complicated) techniques.
Regarding PHP, since its object syntax draws heavily on C++, there are some syntactic similarities between C++ and PHP.
It is true that (for example) in Java all methods are virtual by default. In C++ it is possible to hide a non-virtual function (as opposed to overriding a virtual one) in a subclass, leading to possibly counter-intuitive behaviour: only the base-class function is executed when the call is made via a pointer or reference to the base class (i.e. when polymorphic behaviour would normally be expected).
Because C++ is a value-based (as opposed to reference-based) language, even when a function has been declared virtual, the well-known object-slicing problem can still arise: the base-class method is invoked when a value object of a subclass is 'cut down' to the base class (e.g. when the subclass object is passed by value to a function taking a base-class argument).
For this reason, it is recommended to make all non-leaf classes abstract, something which is often achieved by declaring a pure virtual destructor, even if one would otherwise be gratuitous.
Say FrameworkA consumes a FrameworkA.StandardLogger class for logging. I want to replace the logging library by another one (the SuperLogger class).
To make that possible, there are interfaces: FrameworkA will provide a FrameworkA.Logger interface that other libraries have to implement.
But what if other libraries don't implement that interface? FrameworkA might not be a popular enough framework for SuperLogger's authors to care about its interface.
Possible solutions are:
have a standardized interface (defined by standards like JSR, PSR, ...)
write adapters
What if there is no standardized interface, and you want to avoid the pain of writing useless adapters if classes are compatible?
Couldn't there be another solution to ensure a class meets a contract, but at runtime?
Imagine (very simple implementation in pseudo-code):
namespace FrameworkA;

interface Logger {
    void log(message);
}

namespace SuperLoggingLibrary;

class SuperLogger {
    void log(message) {
        // ...
    }
}
SuperLogger would be compatible with Logger if only it implemented the Logger interface. But instead of having a "hard dependency" on FrameworkA.Logger, its public "interface" (or signature) could be verified at runtime:

// Something verifies that SuperLogger implements Logger at run time
Logger logger = new SuperLogger();

// setLogger() expects a Logger, so everything works
myFrameworkAConfiguration.setLogger(logger);

In this fake scenario, I expect Logger logger = new SuperLogger() to fail at run time if the class is not compatible with the interface, but to succeed if it is.
Would that be a valid thing in OOP? If yes, does it exist in any language? If no, why is it not valid?
My question stands for statically-typed languages (Java, ...) or dynamically typed languages (PHP, ...).
For PHP et al.: I know that when there is no type check you can use any object you want even if it doesn't implement the interface, but I'd be interested in something that actually checks that the object complies with the interface.
This is called duck typing, a concept that you will find in Ruby ("it walks like a duck, it quacks like a duck, it must be a duck").
In other dynamically typed languages you can simulate it, for example in PHP with method_exists. In statically typed languages there might be workarounds with reflection, a search for "duck typing +language" will help to find them.
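In Python, for example, something quite close to the asker's pseudo-code is possible with typing.Protocol and runtime_checkable (Python 3.8+). This is a sketch; the names simply mirror the question:

from typing import Protocol, runtime_checkable

@runtime_checkable
class Logger(Protocol):
    def log(self, message: str) -> None: ...

class SuperLogger:  # note: never declares that it implements Logger
    def log(self, message: str) -> None:
        print(message)

logger = SuperLogger()
if not isinstance(logger, Logger):  # structural check at run time
    raise TypeError("object does not satisfy the Logger contract")

Note that the runtime check only verifies that the required methods exist, not that their signatures match.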
This is more of a static-typing issue than an OOP one. Both Java and Ruby are OO languages, but Java wouldn't allow what you want (as it's statically typed) while Ruby would (as it's dynamically typed).
From a statically typed language's point of view, one of the major (if not the major) advantages is knowing at compile time whether an assignment is safe and valid. What you're looking for is provided by dynamically typed languages (such as Ruby), but isn't possible in a statically typed language, and this is by design (compile-time safety).
You can, though it is ugly, do something like this (in Java):
Object objLogger = new SuperLogger();
Logger logger = (Logger) objLogger;
This would pass at compile time but would fail at runtime if the assignment was invalid.
That said, the above is pretty ugly and isn't something I would do; it doesn't give you much and risks an unpleasant (and possibly surprising) exception at runtime.
I guess the best you could hope for in a language like Java would be to abstract the creation away from where you want to use it:
Logger logger = getLogger();
With the internals of getLogger deciding what to return. This, however, just defers the actual creation further down; you'll still have to do it in a statically type-safe way.
In object-oriented programming, it's sometimes nice to be able to modify the behavior of an already-created object. Of course this can be done with relatively verbose techniques such as the strategy pattern. However, sometimes it would be nice to just completely change the type of the object by changing the vtable pointer after instantiation. This would be safe if, assuming you're switching from class A to class B:
class B is a subclass of class A and does not add any new fields, or
class B and class A have the same parent class. Neither do anything except override virtual functions from the parent class. (No new fields or virtual functions.)
In either case, A and B must have the same invariants.
This is hackable in C++ and the D programming language, because pointers can be arbitrarily cast around, but it's so ugly and hard to follow that I'd be scared to do it in code that needs to be understood by anyone else. Why isn't a higher-level way to do this generally provided?
Because the mindset of most language designers is too static.
While such features are dangerous in the hands of programmers, they are necessary tools for library builders. For example, in Java one can create objects without calling a constructor (yes, you can!), but this power is only given to library designers. Still, many features that library designers would kill for are, alas, not possible in Java. C#, on the other hand, is adding more and more dynamic features in each version. I am really looking forward to all the awesome libraries one can build using the upcoming DLR (dynamic language runtime).
In some dynamic languages such as Smalltalk (and also as far as I know Perl and Python, but not Ruby) it is totally possible to change the class of an object. In Pharo Smalltalk you achieve this with
object primitiveChangeClassTo: anotherObject
which changes the class of object to that of anotherObject. Please note that this is not the same as object become: anotherObject which exchanges all pointers of both objects.
You can do it in Python, by modifying the instance __class__ attribute:
>>> class A(object):
... def foo(self):
... print "I am an A"
...
>>>
>>> class B(object):
... def foo(self):
... print "I am a B"
...
>>>
>>> a = A()
>>> a.foo()
I am an A
>>> a.__class__
<class '__main__.A'>
>>> a.__class__ = B
>>>
>>> a
<__main__.B object at 0x017010B0>
>>> a.foo()
I am a B
However in 12 years of Python programming I have never had a use for it, and never seen anyone else use it. IMHO there is a huge danger that casual use of this feature will make your code hard to maintain and debug.
The only situation where I can imagine using it is for runtime debugging, e.g. to change an instance of a class whose creation I have no control over into a mock object or into a class that has been decorated with logging. I would not use it in production code.
You can do it in higher level languages - see the Smalltalk "become" message. The fact that this feature is almost impossible to use correctly even in ST could be the reason that statically typed languages like C++ don't support it.
To paraphrase the XoTcl documentation, it is because most languages which proclaim to be "object oriented" are not--they are class oriented. It sounds like XoTcl mixins, Ruby mixins, and Perl6 roles provide the functionality you're looking for.
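For comparison, a mixin-like effect can be sketched in Python by composing a new class on the fly and retargeting the instance at it, with the same caveats as the __class__ example above. This is purely illustrative:

class Dog:
    def speak(self):
        return "woof"

class LoudMixin:
    def speak(self):
        # Delegate to the original method, then decorate the result.
        return super().speak().upper() + "!"

def add_mixin(obj, mixin):
    # Build a one-off subclass (mixin first, so its overrides win) and
    # retarget the instance at it; no new fields are introduced.
    obj.__class__ = type(mixin.__name__ + obj.__class__.__name__,
                         (mixin, obj.__class__), {})

dog = Dog()
add_mixin(dog, LoudMixin)
print(dog.speak())  # WOOF!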
What you're talking about is monkey patching, which is available in several high-level dynamic languages:
"A monkey patch (also spelled monkey-patch, MonkeyPatch) is a way to extend or modify the runtime code of dynamic languages (e.g. Smalltalk, JavaScript, Objective-C, Ruby, Perl, Python, Groovy, etc.) without altering the original source code."
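A minimal Python illustration of monkey patching, distinct from reassigning __class__: here an existing method is replaced on the class itself at runtime, without touching the original source on disk. The names are invented for the example:

class Greeter:
    def greet(self):
        return "hello"

def excited_greet(self):
    # Replacement behaviour, applied at runtime.
    return "hello!!!"

Greeter.greet = excited_greet  # the patch affects existing and future instances

g = Greeter()
print(g.greet())  # hello!!!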
Most likely an OO concept question/situation:
I have a library that I use in my program, with source files available. I've realized I need to tailor the library to my needs, say I need to modify the behavior of a single function F in class C, while leaving the original library's source intact, so I can painlessly upgrade it when needed.
I realize I can make my own class C1 that inherits from C, place it in my source tree, and write the function F as I see fit, replacing all occurrences of
myObj = new C();
with
myObj = new C1();
throughout my code.
What is the 'proper' way of doing this? I suspect the inheritance method I described has problems, as the library in its internals would still use C::F instead of my C1::F, and it would be way cooler if I could still refer to C::F not some strange C1::F in my code.
For those that care - the language is PHP5, and I'm kinda OOP newbie :)
I think subclassing is pretty much the best way to add functionality to an external library.
The alternative is the decorator pattern whereby you have to wrap every method of the C class. (There is a time and place for the decorator pattern, but I think this isn't it)
You say:
as the library in its internals would still use C::F instead of my C1::F
Not necessarily true. If you pass an instance of the C1 class to a library function, then any calls to method F of that object would still go through your C1::F method. The same also happens when the C class accesses its own method by calling $this->F() -- because it's still a C1 object. This property is called polymorphism.
Of course this does not apply when the library's code itself instantiates a new object of class C.
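A small sketch of that point in Python (the idea carries over directly to PHP): the 'library' code only knows about C, but because the object passed in is a C1, the overridden F is the one that runs, even for the library's internal self.F() calls. The class names mirror the question; run() is invented for the example:

class C:  # pretend this lives in the library
    def F(self):
        return "library behaviour"

    def run(self):  # library-internal call to its own method
        return self.F()

class C1(C):  # your subclass, in your own source tree
    def F(self):
        return "tailored behaviour"

my_obj = C1()
print(my_obj.run())  # tailored behaviour, even though run() is library code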
Specifically, when you create an interface/implementor pair, and there is no overriding organizational concern (such as that the interface should go in a different assembly, i.e. as recommended by the s# architecture), do you have a default way of organizing them in your namespace/naming scheme?
This is obviously a more opinion-based question, but I think some people have thought about this more and we can all benefit from their conclusions.
The answer depends on your intentions.
If you intend the consumer of your namespaces to use the interfaces over the concrete implementations, I would recommend having your interfaces in the top-level namespace, with the implementations in a child namespace.
If the consumer is to use both, have them in the same namespace.
If the interface is for predominantly specialized use, like creating new implementations, consider having them in a child namespace such as Design or ComponentModel.
I'm sure there are other options as well, but as with most namespace issues, it comes down to the use-cases of the project, and the classes and interfaces it contains.
I usually keep the interface in the same namespace as the concrete types.
But, that's just my opinion, and namespace layout is highly subjective.
Animals
|
| - IAnimal
| - Dog
| - Cat
Plants
|
| - IPlant
| - Cactus
You don't really gain anything by moving one or two types out of the main namespace, but you do add the requirement for one extra using statement.
What I generally do is to create an Interfaces namespace at a high level in my hierarchy and put all interfaces in there (I do not bother to nest other namespaces in there as I would then end up with many namespaces containing only one interface).
Interfaces
|--IAnimal
|--IVegetable
|--IMineral
MineralImplementor
Organisms
|--AnimalImplementor
|--VegetableImplementor
This is just the way that I have done it in the past and I have not had many problems with it, though admittedly it might be confusing to others sitting down with my projects. I am very curious to see what other people do.
I prefer to keep my interfaces and implementation classes in the same namespace. When possible, I give the implementation classes internal visibility and provide a factory (usually in the form of a static factory method that delegates to a worker class, with an internal method that allows unit tests in a friend assembly to substitute a different worker that produces stubs). Of course, if the concrete class needs to be public -- for instance, if it's an abstract base class -- then that's fine; I don't see any reason to put an ABC in its own namespace.
On a side note, I strongly dislike the .NET convention of prefacing interface names with the letter 'I.' The thing the (I)Foo interface models is not an ifoo, it's simply a foo. So why can't I just call it Foo? I then name the implementation classes specifically, for example, AbstractFoo, MemoryOptimizedFoo, SimpleFoo, StubFoo etc.
(.Net) I tend to keep interfaces in a separate "common" assembly so I can use that interface in several applications and, more often, in the server components of my apps.
Regarding namespaces, I keep them in BusinessCommon.Interfaces.
I do this to ensure that neither I nor my developers are tempted to reference the implementations directly.
Separate the interfaces in some way (projects in Eclipse, etc) so that it's easy to deploy only the interfaces. This allows you to provide your external API without providing implementations. This allows dependent projects to build with a bare minimum of externals. Obviously this applies more to larger projects, but the concept is good in all cases.
I usually separate them into two separate assemblies. One of the usual reasons for an interface is to have a series of objects look the same to some subsystem of your software. For example, I have all my reports implementing the IReport interface. IReport is used not only in printing but also for previewing and for selecting individual options for each report. Finally, I have a collection of IReport to use in a dialog where the user selects which reports (and configures the options) they want to print.
The reports reside in a separate assembly, while IReport, the preview engine, the print engine, and the report selection reside in their respective core assembly and/or UI assembly.
If you use a factory class to return a list of available reports in the report assembly, then updating the software with a new report becomes merely a matter of copying the new report assembly over the original. You can even use the Reflection API to scan the list of assemblies for any report factories and build your list of reports that way.
You can apply this technique to files as well. My own software runs a metal cutting machine, so we use this idea for the shape and fitting libraries we sell alongside our software.
Again the classes implementing a core interface should reside in a separate assembly so you can update that separately from the rest of the software.
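As a rough, language-neutral analogue of that factory idea (sketched in Python rather than .NET, with invented names): a registry lets the core code build its list of available reports without knowing the concrete classes up front.

REPORT_REGISTRY = []

def register_report(cls):
    # Decorator used by each report module; the core only sees the registry.
    REPORT_REGISTRY.append(cls)
    return cls

class IReport:
    def render(self):
        raise NotImplementedError

@register_report
class SalesReport(IReport):
    def render(self):
        return "sales figures"

@register_report
class InventoryReport(IReport):
    def render(self):
        return "inventory levels"

# The selection dialog just iterates whatever has been registered.
for report_cls in REPORT_REGISTRY:
    print(report_cls.__name__, "->", report_cls().render())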
I'll give my own experience, which goes against the other answers.
I tend to put all my interfaces in the package they belong to. This ensures that, if I move a package to another project, I have everything the package needs to run without any changes.
For me, any helper functions and operator functions that are part of the functionality of a class should go into the same namespace as that of the class, because they form part of the public API of that namespace.
If you have common implementations that share the same interface in different packages you probably need to refactor your project.
Sometimes I see that there are plenty of interfaces in a project that could be converted into an abstract implementation rather than an interface.
So, ask yourself if you are really modeling a type or a structure.
A good example might be looking at what Microsoft does.
Assembly: System.Runtime.dll
System.Collections.Generic.IEnumerable<T>
Where are the concrete types?
Assembly: System.Collections.dll
System.Collections.Generic.List<T>
System.Collections.Generic.Queue<T>
System.Collections.Generic.Stack<T>
// etc
Assembly: EntityFramework.dll
System.Data.Entity.IDbSet<T>
Concrete Type?
Assembly: EntityFramework.dll
System.Data.Entity.DbSet<T>
Further examples
Microsoft.Extensions.Logging.ILogger<T>
- Microsoft.Extensions.Logging.Logger<T>
Microsoft.Extensions.Options.IOptions<T>
- Microsoft.Extensions.Options.OptionsManager<T>
- Microsoft.Extensions.Options.OptionsWrapper<T>
- Microsoft.Extensions.Caching.Memory.MemoryCacheOptions
- Microsoft.Extensions.Caching.SqlServer.SqlServerCacheOptions
- Microsoft.Extensions.Caching.Redis.RedisCacheOptions
There are some very interesting tells here. When the namespace changes to support an implementation, the distinguishing namespace segments (Caching and the provider name) are also reflected in the derived type's name, as in RedisCacheOptions. Additionally, the derived types live in an additional namespace specific to the implementation.
Memory -> MemoryCacheOptions
SqlServer -> SqlServerCacheOptions
Redis -> RedisCacheOptions
This seems like a fairly easy pattern to follow most of the time. Since no example was given in the question, here is the pattern that might emerge:
CarDealership.Entities.Dll
CarDealership.Entities.IPerson
CarDealership.Entities.IVehicle
CarDealership.Entities.Person
CarDealership.Entities.Vehicle
Maybe a technology like Entity Framework prevents you from using the predefined classes. Thus we make our own.
CarDealership.Entities.EntityFramework.Dll
CarDealership.Entities.EntityFramework.Person
CarDealership.Entities.EntityFramework.Vehicle
CarDealership.Entities.EntityFramework.SalesPerson
CarDealership.Entities.EntityFramework.FinancePerson
CarDealership.Entities.EntityFramework.LotVehicle
CarDealership.Entities.EntityFramework.ShuttleVehicle
CarDealership.Entities.EntityFramework.BorrowVehicle
Not that it happens often, but maybe there's a decision to switch technologies for whatever reason, and now we have...
CarDealership.Entities.Dapper.Dll
CarDealership.Entities.Dapper.Person
CarDealership.Entities.Dapper.Vehicle
//etc
As long as we're programming to the interfaces we've defined in the root Entities namespace (following the Liskov Substitution Principle), downstream code doesn't care how the interface was implemented.
More importantly, in my opinion, creating derived types this way also means you don't have to constantly include a different namespace, because the parent namespace contains the interfaces. I'm not sure I've ever seen a Microsoft example of interfaces stored in child namespaces that are then implemented in the parent namespace (almost an anti-pattern if you ask me).
I definitely don't recommend segregating your code by type, eg:
MyNamespace.Interfaces
MyNamespace.Enums
MyNameSpace.Classes
MyNamespace.Structs
This adds no descriptive value, and it's akin to using Systems Hungarian notation, which is mostly, if not now exclusively, frowned upon.
I HATE it when I find interfaces and implementations in the same namespace/assembly. Please don't do that; if the project evolves, it's a pain in the ass to refactor.
When I reference an interface, I want to implement it, not to get all its implementations.
What might be admissible is to put the interface with its dependent class (the class that references the interface).
EDIT: @Josh, I just read my last sentence, and it's confusing! Of course, both the dependent class and the one that implements the interface reference it. To make myself clear, I'll give examples:
Acceptable:
Interface + dependent class:
namespace A;

interface IMyInterface
{
    void MyMethod();
}

namespace A;

class MyDependentClass
{
    private IMyInterface inject;

    public MyDependentClass(IMyInterface inject)
    {
        this.inject = inject;
    }

    public void DoJob()
    {
        // Bla bla
        inject.MyMethod();
    }
}
Implementing class:
namespace B;

class MyImplementing : IMyInterface
{
    public void MyMethod()
    {
        Console.WriteLine("hello world");
    }
}
NOT ACCEPTABLE:
namespace A;

interface IMyInterface
{
    void MyMethod();
}

namespace A;

class MyImplementing : IMyInterface
{
    public void MyMethod()
    {
        Console.WriteLine("hello world");
    }
}
And please DON'T CREATE a garbage project just for your interfaces! Example: ShittyProject.Interfaces. You've missed the point!
Imagine you created a DLL reserved for your interfaces (200 MB). If you had to add a single interface with two lines of code, your users would have to update 200 MB just for two dumb signatures!