Well, I'm creating a library and that library needs to take all other libraries and make them work "alike".
For example: imagine that I have 5 libraries, and all of them share the same idea and serve the same use case, but each has its own way of working and its own API. What I need is to make them all work through a single API.
What I have in mind is to create a "factory" with a "trust list" inside it, which lets the user choose which library to create; the factory looks at the trust list, and if the library is really there, it creates and returns it.
But it could also be done using interfaces, where I accept only classes that implement a specified interface, which gives me the guarantee that the methods I want are implemented. So what does this mean? All the libraries need to implement that interface, implement its methods, and act as a kind of wrapper around the underlying library; that way they all work with the same API. The user can create a library using the factory and use the same API for all of them.
I don't know if you understand what I'm trying to explain, but based on what I said, I want to know which fits my situation best: the "bridge" or the "adapter" pattern?
And also, is my idea correct, or am I crazy? (The interface and factory thing, and also the bridge and adapter; tell me what you have in mind.)
Thank you all in advance.
The Bridge pattern is designed to separate a class's interface from its implementation so you can vary or replace the implementation without changing the client code.
I think you can specify a public non-virtual interface and then, using the Template Method pattern, have each of these public functions invoke an implementation method.
class Impl {
public:
    virtual ~Impl() = default;
    virtual void doA() = 0;
    virtual void doB() = 0;
};

class Basic {
public:
    // Stable, non-virtual interface.
    void A() { doA(); }
    void B() { doB(); }
    // ...
private:
    // Customization is an implementation detail that may
    // or may not directly correspond to the interface.
    // Each of these functions might optionally be
    // pure virtual.
    virtual void doA() { impl_->doA(); }
    virtual void doB() { impl_->doB(); }

    Impl* impl_; // bridge to the chosen implementation
};
These articles might be useful:
Bridge pattern
Template method
Sounds like Adapter to me. You have multiple adapter implementations, which is basic polymorphism, and each adapter knows how to adapt to its specific library.
I don't see how the Bridge pattern would make sense here. You would typically use it in places where you use these libraries but don't yet know the specific library implementation.
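To make that concrete, here is a minimal C# sketch combining the adapter idea with the factory/"trust list" from the question. All names here (AlphaLib, BetaClient, IStorage, and so on) are hypothetical stand-ins, not real libraries:
using System;
using System.Collections.Generic;

// Stand-ins for two third-party libraries with different APIs (hypothetical).
public class AlphaLib { public void Put(string key, byte[] data) { /* ... */ } }
public class BetaClient { public void Write(byte[] data, string key) { /* ... */ } }

// The single API your users program against.
public interface IStorage
{
    void Save(string key, byte[] data);
}

// One adapter per library, translating the common API into that library's own calls.
public class AlphaStorageAdapter : IStorage
{
    private readonly AlphaLib lib = new AlphaLib();
    public void Save(string key, byte[] data) { lib.Put(key, data); }
}

public class BetaStorageAdapter : IStorage
{
    private readonly BetaClient client = new BetaClient();
    public void Save(string key, byte[] data) { client.Write(data, key); }
}

// The factory holds the "trust list": only registered names can be created.
public static class StorageFactory
{
    private static readonly Dictionary<string, Func<IStorage>> trustList =
        new Dictionary<string, Func<IStorage>>
        {
            ["alpha"] = () => new AlphaStorageAdapter(),
            ["beta"] = () => new BetaStorageAdapter(),
        };

    public static IStorage Create(string name)
    {
        if (!trustList.TryGetValue(name, out var make))
            throw new ArgumentException("Unknown library: " + name);
        return make();
    }
}
Client code asks the factory for a name on the trust list and then talks only to IStorage; the concrete libraries never leak out.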
I'm not so sure the title is a good match for this question I want to put on the table.
I'm planning to create a web MVC framework as my graduation dissertation, and in a previous conversation with my advisor, while trying to define some goals, he convinced me that I should choose a modular design for this project.
I had already developed some things by then, and I stopped for a while to analyze how modular the design would be, but I couldn't really do it, because I don't know the real meaning of "modular".
Some things are not very clear to me. For example, does just referencing another module blow up the modularity of my system?
Let's say I have a Database Access module that can OPTIONALLY use a Cache module for storing the results of complex queries. As anyone can see, I will at least have a naming dependency on the Cache module.
In my conception of "modular design", I can distribute each component separately and make it interact with others developed by other people. In the case I showed, if someone wants to use my Database Access module, they will have to take the Cache module as well, even if they will not use it, just for referencing/naming purposes.
So I was wondering whether this is really a modular design.
I came up with an alternative: creating each component on its own, without it even knowing about the existence of components that are not absolutely required for its functioning. To extend functionality, I could create some structure based on Decorators and Adapters.
To clarify things a little bit, here is an example (in PHP):
Before
interface Cache {
public function isValid();
public function setValue($value);
public function getValue();
}
interface CacheManager {
public function get($name);
public function put($name, $value);
}
// Some concrete implementations...
interface DbAccessInterface {
public function doComplexOperation();
}
class DbAccess implements DbAccessInterface {
private $cacheManager;
public function __construct(..., CacheManager $cacheManager = null) {
// ...
$this->cacheManager = $cacheManager;
}
public function doComplexOperation() {
if ($this->cacheManager !== null) {
// return from cache if valid
}
// complex operation
}
}
After
interface Cache {
public function isValid();
public function setValue($value);
public function getValue();
}
interface CacheManager {
public function get($name);
public function put($name, $value);
}
// Some concrete implementations...
interface DbAccessInterface {
public function doComplexOperation();
}
class DbAccess implements DbAccessInterface {
public function __construct(...) {
// ...
}
public function doComplexOperation() {
// complex operation
}
}
// And now the integration module
class CachedDbAccess implements DbAccessInterface {
private $dbAccess;
private $cacheManager;
public function __construct(DbAccessInterface $dbAccess, CacheManager $cacheManager) {
$this->dbAccess = $dbAccess;
$this->cacheManager = $cacheManager;
}
public function doComplexOperation() {
$cache = $this->cacheManager->get("Foo");
if ($cache->isValid()) {
return $cache->getValue();
}
// Do the complex operation via the wrapped module and cache the result
$result = $this->dbAccess->doComplexOperation();
$cache->setValue($result);
return $result;
}
}
Now my question is:
Is this the best solution? Should I do this for all modules that are not required to work together but can be more efficient when they do?
Would anyone do it differently?
I have some further questions about this, but I don't know if this is an acceptable question for Stack Overflow.
P.S.: English is not my first language, so some parts may be a little confusing.
Some resources (not theoretical):
Nuclex Plugin Architecture
Python Plugin Application
C++ Plugin Architecture (use NoScript on that site; they have some weird login policies)
Other SO threads (design pattern for plugins in php)
Django Middleware concept
Does just referencing another module blow up the modularity of my system?
Not necessarily. It's a dependency, and having dependencies is perfectly normal. Without dependencies, modules can't interact with each other (unless you make them interact indirectly, which is generally bad practice, since it hides dependencies and complicates the code). Modular design implies managing dependencies, not removing them.
One tool is interfaces. Referencing a module via an interface creates a so-called soft dependency. Such a module can accept any implementation of the interface as a dependency, so it is more independent and, as a result, more maintainable.
The other tool is designing modules (and their interfaces) that have only a single responsibility. This also makes them more granular, independent, and maintainable.
But there is a line you should not cross: blindly applying these tools may lead to an overly modular and overly generic design. Making things too granular makes the whole system more complex. You should not try to solve the problems of the universe by making generic modules that all developers can use (unless that is your goal). First of all, your system should solve your domain tasks; make things generic enough, but no more than that.
I came up with an alternative: creating each component on its own, without it even knowing about the existence of components that are not absolutely required for its functioning
It is great that you came up with this idea by yourself. The statement itself is a key to modular programming.
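A tiny C# illustration of such a soft dependency (hypothetical names): the consuming module accepts any implementation of the interface, including a no-op one, so the concrete cache module stays optional.
// The module depends only on this interface, never on a concrete cache module.
public interface ICache
{
    bool TryGet(string key, out object value);
    void Put(string key, object value);
}

// A no-op implementation satisfies the dependency when no cache is wanted.
public class NullCache : ICache
{
    public bool TryGet(string key, out object value) { value = null; return false; }
    public void Put(string key, object value) { }
}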
A plugin architecture is the best in terms of extensibility, but IMHO it is hard to maintain, especially within a single application. And depending on the complexity of the plugin architecture, it can make your code more complex by adding plugin logic, etc.
Thus, for intra-application modular design, I choose an N-tier, interface-based architecture. Basically, the architecture relies on these tiers:
Domain / Entity
Interface [Depends on 1]
Services [Depends on 1 and 2]
Repository / DAL [Depends on 1 and 2]
Presentation Layer [Depends on 1, 2, 3, 4]
Unfortunately, I don't think this is achievable neatly in PHP projects, as it needs separate project/DLL references for each tier. However, following the architecture can still help to modularize the application.
For each module, we need to do interface-based design. It helps enhance the modularity of your code, because you can change the implementation later while keeping the consumer the same.
I have provided an answer about a similar interface-based design in this Stack Overflow question.
Last but not least, if you want to make your application modular up to the UI, you can use a Service-Oriented Architecture. This simply makes your application a bunch of services, with the UI consuming those services. This design helps separate your UI from your logic; you can later use a different UI, such as a desktop app, but still use the same logic. Unfortunately, I don't have any reliable source for SOA.
EDIT:
I misunderstood the question. Here is my point of view on a modular framework. Unfortunately, I don't know much about Zend, so I will give examples in C#:
It consists of modules, from smaller to larger ones. An example in C#: you can use Windows Forms (larger) in your application, and also the Graphics (smaller) class to draw custom shapes on the screen.
It is extensible or replaceable without changes to the base class. In C# you can assign a FormLoad event handler (extensible) to the Form class, inherit from the Form or List class (extensible), or override a form's draw method to create custom window graphics (replaceable).
(Optional) It is easy to use. In a normal DI interface design, we usually inject smaller modules into a larger (higher-level) module. This will require an IoC container. Refer to my question for details.
It is easy to configure, and does not involve any magical logic such as the Service Locator pattern. Search for "Service Locator is an anti-pattern" on Google.
I don't know much about Zend, but I guess the modularity in Zend means that it can be extended without changing the core (replacing code) inside the framework.
You said that:
if someone wants to use my Database Access module, they will have to take the Cache module as well, even if they will not use it, just for referencing/naming purposes.
Then it is not modular. It is integrated, meaning your Database Access module will not work without the Cache. For reference, C# chooses to provide both List<T> and BindingList<T> for different functionality. In your case, IMHO it is better to provide both CachedDataAccess and DataAccess.
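A sketch of that split in C# (hypothetical names; the cached variant decorates the plain one, so consumers take only what they need):
public interface IDataAccess
{
    object DoComplexOperation();
}

public class DataAccess : IDataAccess
{
    public object DoComplexOperation()
    {
        // ...run the expensive query...
        return new object();
    }
}

// Optional module: decorates any IDataAccess and adds caching on top.
public class CachedDataAccess : IDataAccess
{
    private readonly IDataAccess inner;
    private object cached;

    public CachedDataAccess(IDataAccess inner) { this.inner = inner; }

    public object DoComplexOperation()
    {
        return cached ?? (cached = inner.DoComplexOperation());
    }
}

// Consumers pick one:
// IDataAccess plain = new DataAccess();
// IDataAccess cached = new CachedDataAccess(new DataAccess());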
Let's say I have a few controllers. Each controller can at some point create new objects which will need to be stored on the server. For example I can have a RecipeCreationViewController which manages a form. When this form is submitted, a new Recipe object is created and needs to be saved on the server.
What's the best way to design the classes to minimize complexity and coupling while keeping the code as clean and readable as possible?
Singleton
Normally I would create a singleton NetworkAdapter that each controller can access directly in order to save objects.
Example:
[[NetworkAdapter sharedAdapter] saveObject:myRecipe];
But I've realized that having classes call singletons on their own makes for coupled code which is hard to debug since the access to the singleton is hidden in the implementation and not obvious from the interface.
Direct Reference
The alternative is to have each controller hold a reference to the NetworkAdapter and have this be passed in by the class that creates the controller.
For example:
[self.networkAdapter saveObject:myRecipe];
Delegation
The other approach that came to mind is delegation. The NetworkAdapter can implement a "RemoteStorageDelegate" protocol, and each controller can have a remoteStorageDelegate on which it can call methods like saveObject:. The advantage is that the controllers don't know about the details of a NetworkAdapter, only that the object implementing the protocol knows how to save objects.
For example:
[self.remoteStorageDelegate saveObject:myRecipe];
Direct in Model
Yet another approach would be to have the model handle saving to the network directly. I'm not sure if this is a good idea though.
For example:
[myRecipe save];
What do you think of these? Are there any other patterns that make more sense for this?
I would also stick with Dependency Injection in your case. If you want to read about it, you will easily find good articles on the web, e.g. on Wikipedia. There are also links to DI frameworks for Objective-C.
Basically, you can use DI if you have two or more components which must interact but shouldn't know each other directly in code. I'll elaborate on your example a bit, but in C#/Java style because I don't know Objective-C syntax. Let's say you have
class NetworkAdapter : NetworkAdapterInterface {
public void save(object o) { ... }
}
with the interface
interface NetworkAdapterInterface {
void save(object o);
}
Now you want to call that adapter in a controller like
class Controller {
NetworkAdapterInterface networkAdapter;

public Controller() {
}

public void setAdapter(NetworkAdapterInterface adapter) {
this.networkAdapter = adapter;
}

public void work() {
this.networkAdapter.save(new object());
}
}
Calling the setter is where the magic of DI can happen (this is called Setter Injection; there is also, e.g., Constructor Injection). That means you don't have a single line of code where you call the setter yourself; you let the DI framework do it. Very loosely coupled!
Now how does it work? Typically, with a common DI framework, you can define the actual mappings between components in a central place in code or in an XML file. Imagine you have
<DI>
<component interface="NetworkAdapterInterface" class="NetworkAdapter" lifecycle="singleton" />
</DI>
This could tell the DI framework to automatically inject a NetworkAdapter into every setter for NetworkAdapterInterface it finds in your code. In order to do this, it will first create the proper object for you. Whether it builds a new object for every injection, one object for all injections (a singleton), or, e.g., one object per Unit of Work (if you use such a pattern) can be configured for each type.
As a side note: if you are unit testing your code, you can also use the DI framework to define completely different bindings, suitable for your test scenario. An easy way to inject some mocks!
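To illustrate that side note, here is a sketch reusing the Controller and NetworkAdapterInterface from above (the fake class is hypothetical): a hand-rolled test double injected through the very same setter, no mocking framework required.
using System.Collections.Generic;

// Records what was saved so the test can assert on it.
class FakeNetworkAdapter : NetworkAdapterInterface {
    public List<object> Saved = new List<object>();
    public void save(object o) { Saved.Add(o); }
}

// In a unit test:
var controller = new Controller();
var fake = new FakeNetworkAdapter();
controller.setAdapter(fake);
controller.work();
// assert: fake.Saved contains exactly one object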
I want to test my controller that depends on a hardware C# class, not an interface.
It's configured as a singleton, and I just can't figure out how to mock it with Rhino Mocks.
The hardware metadata (example) for the dependent class:
namespace Hardware.Client.Api
{
public class CHardwareManager
{
public static CHardwareManager GetInstance();
public string Connect(string clientId);
}
}
and in my code I want something like this to return true, otherwise I get an exception:
if( !CHardwareManager.GetInstance().Connect("foo") )
I mock it using:
CHardwareManager mockHardwareMgr = MockRepository.GenerateMock<CHardwareManager>();
But Connect needs a GetInstance first, and the only combination I can get to "compile" is
mockHardwareMgr.Expect (x => x.Connected ).Return(true).Repeat.Any();
but it doesn't mock correctly; it throws an exception.
And this complains about the typing of GetInstance:
mockHardwareMgr.Expect (x => x.GetInstance().Connected).Return(true).Repeat.Any();
So my problem, I think, is mocking a singleton. And then I have no idea how to make my controller use this mock, since I don't pass the mock into the controller. It's a resource and namespace.
90% of my work requires external components I need to mock; most times I don't write the classes or interfaces, and I'm struggling to get them mocked and my code tested.
Any pointers would be welcome.
Thanks in advance. (Yes, I've been searching through SO and have not seen something like this. But then, maybe my search was not good.)
The usual way to avoid problems with mocking external components is not to use them directly in your code. Instead, define an anti-corruption layer (usually an interface that looks like your external component) and test your code using a mocked implementation of this interface. After all, you're testing your own code, not the external one.
An even better way is to adjust this interface to your needs so that it only exposes the things you actually need, not the whole API the external component provides (so it's actually an Adapter pattern).
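For the CHardwareManager in the question, such an adapter might look like the following sketch (it assumes, per the metadata above, that Connect returns a string and that a non-empty result means success):
// The narrow interface the controller actually needs.
public interface IHardwareService
{
    bool Connect(string clientId);
}

// Adapter over the third-party singleton; the only code that touches it.
public class HardwareServiceAdapter : IHardwareService
{
    public bool Connect(string clientId)
    {
        // Assumption: a non-empty result from the real API means success.
        return !string.IsNullOrEmpty(
            CHardwareManager.GetInstance().Connect(clientId));
    }
}

// The controller takes the interface, which Rhino Mocks (or a
// hand-written fake) can implement easily:
public class HardwareController
{
    private readonly IHardwareService hardware;

    public HardwareController(IHardwareService hardware)
    {
        this.hardware = hardware;
    }
}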
External components are tested using different approaches, such as system testing, in which case you don't really mock them; you use the actual implementation.
Usually, when you try to get Rhino Mocks to do something that feels unnatural and Rhino growls, it's a good sign that your approach is not the right one. Almost everything can be done with simple interface mocking.
As Igor said, Rhino Mocks (and most other free mocking frameworks, e.g. Moq) can only mock interfaces and virtual members, not statics like GetInstance().
For mocking such classes, try (and pay for) TypeMock.
For mocking singletons see my answer to:
How to Mock a Static Singleton?
Yes, I'm somewhat undermining the common understanding of what's deemed testable and thus "good" code. However, I'm starting to resent answers like "You're doing it wrong. Make everything anew.", because those answers don't solve the problem at hand.
No, this is not pointed at Igor, but at the many others in similar threads who answered "Singletons are unmockable. (Make everything anew.)".
I am going over some OO basics and trying to understand why there is a use for interface reference variables.
When I create an interface:
public interface IWorker
{
int HoneySum { get; }
void getHoney();
}
and have a class implement it:
public class Worker : Bee, IWorker
{
int honeySum = 15;
public int HoneySum { get { return honeySum; } }
public void getHoney()
{
Console.WriteLine("Worker Bee: I have this much honey: {0}", HoneySum);
}
}
why do people use:
IWorker worker = new Worker();
worker.getHoney();
instead of just using:
Worker worker3 = new Worker();
worker3.getHoney();
What's the point of an interface reference variable when you can just instantiate the class and use its methods and fields that way?
If your code knows what class will be used, you are right: there is no point in having an interface-typed variable. Just like in your example. That code knows that the class being instantiated is Worker, because that code won't magically change and instantiate anything other than Worker. In that sense, your code is coupled to the definition and use of Worker.
But you might want to write some code that works without knowing the class type. Take for example the following method:
public void stopWorker(IWorker worker) {
worker.stop(); // Assuming IWorker has a stop() method
}
That method doesn't care about the specific class. It would handle anything that implements IWorker.
That is code you don't have to change if you want later to use a different IWorker implementation.
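For instance, a hypothetical second implementation (RobotWorker is made up here) can be supplied without touching the consuming code:
public class RobotWorker : IWorker // hypothetical second implementation
{
    public int HoneySum { get { return 0; } }

    public void getHoney()
    {
        Console.WriteLine("Robot Bee: I gather no honey.");
    }
}

// The consumer only knows about IWorker:
public static void Harvest(IWorker worker)
{
    worker.getHoney();
}

// Both calls work, and Harvest never changes:
// Harvest(new Worker());
// Harvest(new RobotWorker());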
It's all about low coupling between your pieces of code. It's all about maintainability.
Basically, it's considered good programming practice to use the interface as the type. This allows different implementations of the interface to be used without affecting the code. I.e., if the object being assigned was passed in, then you can pass in anything that implements the interface without affecting the class. However, if you use the concrete class, then you can only pass in objects of that type.
There is a programming principle, whose name I cannot remember at this time, that applies to this.
You want to keep it as generic as possible without tying it to a specific implementation.
Interfaces are used to achieve loose coupling between system components. You're not restricting your system to a specific concrete IWorker instance. Instead, you're allowing the consumer to specify which concrete implementation of IWorker they'd like to use. What you get out of it is loosely coupled components and better flexibility.
One major reason is to provide compatibility with existing code. If you have existing code that knows how to manipulate objects via some particular interface, you can instantly make your new code compatible with that existing code by implementing that interface.
This kind of capability becomes particularly important for long-term maintenance. You already have an existing framework, and you typically want to minimize changes to other code to fit your new code into the framework. At least in the ideal case, you do this by writing your new code to implement some number of existing interfaces. As soon as you do, the existing code that knows how to manipulate objects via those interfaces can automatically work with your new class just as well as it could with the ones for which it was originally designed.
Think about interfaces as protocols rather than classes, i.e., ask whether this object implements this protocol, as distinct from what class it is. For example, can my number object be serialized? Its class is a number, but it might implement an interface that describes, in general terms, how it can be serialized.
A given class may actually implement many interfaces.
Specifically, when you create an interface/implementor pair, and there is no overriding organizational concern (such as the interface needing to go in a different assembly, e.g. as recommended by the S# architecture), do you have a default way of organizing them in your namespace/naming scheme?
This is obviously a more opinion-based question, but I think some people have thought about this more than I have, and we can all benefit from their conclusions.
The answer depends on your intentions.
If you intend the consumers of your namespaces to use the interfaces over the concrete implementations, I would recommend putting your interfaces in the top-level namespace, with the implementations in a child namespace.
If the consumer is to use both, have them in the same namespace.
If the interface is for predominantly specialized use, like creating new implementations, consider having them in a child namespace such as Design or ComponentModel.
I'm sure there are other options as well, but as with most namespace issues, it comes down to the use-cases of the project, and the classes and interfaces it contains.
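As a sketch of the first option, with hypothetical names:
namespace MyCompany.Messaging // interfaces at the top level
{
    public interface IMessageSender
    {
        void Send(string message);
    }
}

namespace MyCompany.Messaging.Smtp // implementations in a child namespace
{
    public class SmtpMessageSender : IMessageSender // resolved via the parent namespace
    {
        public void Send(string message) { /* ... */ }
    }
}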
I usually keep the interface in the same namespace as the concrete types.
But, that's just my opinion, and namespace layout is highly subjective.
Animals
|
| - IAnimal
| - Dog
| - Cat
Plants
|
| - IPlant
| - Cactus
You don't really gain anything by moving one or two types out of the main namespace, but you do add the requirement for one extra using statement.
What I generally do is create an Interfaces namespace at a high level in my hierarchy and put all interfaces in there (I do not bother to nest other namespaces inside it, as I would then end up with many namespaces containing only one interface).
Interfaces
|--IAnimal
|--IVegetable
|--IMineral
MineralImplementor
Organisms
|--AnimalImplementor
|--VegetableImplementor
This is just the way that I have done it in the past and I have not had many problems with it, though admittedly it might be confusing to others sitting down with my projects. I am very curious to see what other people do.
I prefer to keep my interfaces and implementation classes in the same namespace. When possible, I give the implementation classes internal visibility and provide a factory (usually in the form of a static factory method that delegates to a worker class, with an internal method that allows unit tests in a friend assembly to substitute a different worker that produces stubs). Of course, if the concrete class needs to be public, for instance if it's an abstract base class, then that's fine; I don't see any reason to put an ABC in its own namespace.
On a side note, I strongly dislike the .NET convention of prefixing interface names with the letter 'I'. The thing the (I)Foo interface models is not an IFoo; it's simply a Foo. So why can't I just call it Foo? I then name the implementation classes specifically, for example AbstractFoo, MemoryOptimizedFoo, SimpleFoo, StubFoo, etc.
(.NET) I tend to keep interfaces in a separate "common" assembly so I can use them in several applications and, more often, in the server components of my apps.
Regarding namespaces, I keep them in BusinessCommon.Interfaces.
I do this to ensure that neither I nor my developers are tempted to reference the implementations directly.
Separate the interfaces in some way (projects in Eclipse, etc) so that it's easy to deploy only the interfaces. This allows you to provide your external API without providing implementations. This allows dependent projects to build with a bare minimum of externals. Obviously this applies more to larger projects, but the concept is good in all cases.
I usually separate them into two assemblies. One of the usual reasons for an interface is to have a series of objects look the same to some subsystem of your software. For example, I have all my reports implement the IReport interface. IReport is used not only in printing but also for previewing and for selecting individual options for each report. Finally, I have a collection of IReport to use in a dialog where the user selects which reports (and configures their options) they want to print.
The reports reside in a separate assembly, and IReport, the preview engine, the print engine, and the report-selection code reside in their respective core assembly and/or UI assembly.
If you use a factory class to return a list of available reports in the report assembly, then updating the software with a new report becomes merely a matter of copying the new report assembly over the original. You can even use the Reflection API to scan a list of assemblies for any report factories and build your list of reports that way, as sketched below.
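A rough sketch of that reflection scan (the IReportFactory contract is an assumption here, not an actual API; each report assembly is assumed to expose public, non-abstract factory types):
using System;
using System.Collections.Generic;
using System.IO;
using System.Reflection;

// Assumed contracts (hypothetical).
public interface IReport { /* members elided */ }
public interface IReportFactory { IEnumerable<IReport> CreateReports(); }

public static class ReportLoader
{
    public static List<IReport> LoadAll(string pluginDir)
    {
        var reports = new List<IReport>();
        foreach (var file in Directory.GetFiles(pluginDir, "*.dll"))
        {
            var assembly = Assembly.LoadFrom(file);
            foreach (var type in assembly.GetTypes())
            {
                // Pick up every concrete factory the assembly exposes.
                if (typeof(IReportFactory).IsAssignableFrom(type) && !type.IsAbstract)
                {
                    var factory = (IReportFactory)Activator.CreateInstance(type);
                    reports.AddRange(factory.CreateReports());
                }
            }
        }
        return reports;
    }
}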
You can apply these techniques to files as well. My own software runs a metal-cutting machine, so we use this idea for the shape and fitting libraries we sell alongside our software.
Again, the classes implementing a core interface should reside in a separate assembly so you can update them separately from the rest of the software.
I'll give my own experience, which goes against the other answers.
I tend to put all my interfaces in the package they belong to. This guarantees that, if I move a package to another project, I have everything the package needs in order to run without any changes.
For me, any helper functions and operator functions that are part of the functionality of a class should go into the same namespace as that of the class, because they form part of the public API of that namespace.
If you have common implementations that share the same interface in different packages, you probably need to refactor your project.
Sometimes I see that there are plenty of interfaces in a project that could be turned into abstract implementations rather than interfaces.
So, ask yourself if you are really modeling a type or a structure.
A good example might be looking at what Microsoft does.
Assembly: System.Runtime.dll
System.Collections.Generic.IEnumerable<T>
Where are the concrete types?
Assembly: System.Collections.dll
System.Collections.Generic.List<T>
System.Collections.Generic.Queue<T>
System.Collections.Generic.Stack<T>
// etc
Assembly: EntityFramework.dll
System.Data.Entity.IDbSet<T>
Concrete Type?
Assembly: EntityFramework.dll
System.Data.Entity.DbSet<T>
Further examples
Microsoft.Extensions.Logging.ILogger<T>
- Microsoft.Extensions.Logging.Logger<T>
Microsoft.Extensions.Options.IOptions<T>
- Microsoft.Extensions.Options.OptionsManager<T>
- Microsoft.Extensions.Options.OptionsWrapper<T>
- Microsoft.Extensions.Caching.Memory.MemoryCacheOptions
- Microsoft.Extensions.Caching.SqlServer.SqlServerCacheOptions
- Microsoft.Extensions.Caching.Redis.RedisCacheOptions
Some very interesting tells here. When the namespace changes to support the implementation (e.g. Caching.Redis), the namespace's distinguishing part is also prefixed to the derived type, giving RedisCacheOptions. Additionally, the derived types sit in an extra namespace named for the implementation.
Memory -> MemoryCacheOptions
SqlServer -> SqlServerCacheOptions
Redis -> RedisCacheOptions
This seems like a fairly easy pattern to follow most of the time. As an example (since none was given in the question), the following pattern might emerge:
CarDealership.Entities.Dll
CarDealership.Entities.IPerson
CarDealership.Entities.IVehicle
CarDealership.Entities.Person
CarDealership.Entities.Vehicle
Maybe a technology like Entity Framework prevents you from using the predefined classes, so we make our own:
CarDealership.Entities.EntityFramework.Dll
CarDealership.Entities.EntityFramework.Person
CarDealership.Entities.EntityFramework.Vehicle
CarDealership.Entities.EntityFramework.SalesPerson
CarDealership.Entities.EntityFramework.FinancePerson
CarDealership.Entities.EntityFramework.LotVehicle
CarDealership.Entities.EntityFramework.ShuttleVehicle
CarDealership.Entities.EntityFramework.BorrowVehicle
Not that it happens often, but maybe there's a decision to switch technologies for whatever reason, and now we have...
CarDealership.Entities.Dapper.Dll
CarDealership.Entities.Dapper.Person
CarDealership.Entities.Dapper.Vehicle
//etc
As long as we're programming to the interfaces we've defined in the root Entities namespace (following the Liskov Substitution Principle), downstream code doesn't care where or how the interface was implemented.
More importantly, in my opinion, creating derived types this way also means you don't have to constantly include a different namespace, because the parent namespace contains the interfaces. I'm not sure I've ever seen a Microsoft example of interfaces stored in child namespaces that are then implemented in the parent namespace (almost an anti-pattern, if you ask me).
I definitely don't recommend segregating your code by type, e.g.:
MyNamespace.Interfaces
MyNamespace.Enums
MyNameSpace.Classes
MyNamespace.Structs
This adds no descriptive value, and it's akin to Systems Hungarian notation, which is mostly, if not now exclusively, frowned upon.
I HATE finding interfaces and implementations in the same namespace/assembly. Please don't do that; if the project evolves, it's a pain in the ass to refactor.
When I reference an interface, I want to implement it, not to get all of its implementations.
What might be admissible is to put the interface with its dependent class (the class that references the interface).
EDIT: @Josh, I just read my last sentence; it's confusing! Of course, both the dependent class and the one that implements the interface reference it. To make myself clear, I'll give examples:
Acceptable:
Interface + dependent class:
namespace A;

interface IMyInterface
{
    void MyMethod();
}

namespace A;

class MyDependentClass
{
    private IMyInterface inject;

    public MyDependentClass(IMyInterface inject)
    {
        this.inject = inject;
    }

    public void DoJob()
    {
        // Bla bla
        inject.MyMethod();
    }
}
Implementing class:
namespace B;

class MyImplementing : IMyInterface
{
    public void MyMethod()
    {
        Console.WriteLine("hello world");
    }
}
NOT ACCEPTABLE:
namespace A;

interface IMyInterface
{
    void MyMethod();
}

namespace A;

class MyImplementing : IMyInterface
{
    public void MyMethod()
    {
        Console.WriteLine("hello world");
    }
}
And please DON'T CREATE a garbage project for your interfaces! Example: ShittyProject.Interfaces. You've missed the point!
Imagine you created a DLL reserved for your interfaces (200 MB). If you had to add a single interface with two lines of code, your users would have to update 200 MB just for two dumb signatures!