Specifically, when you create an interface/implementor pair and there is no overriding organizational concern (such as the interface needing to go in a different assembly, e.g., as recommended by the S# architecture), do you have a default way of organizing them in your namespace/naming scheme?
This is obviously a more opinion-based question, but I think some people have thought about this more, and we can all benefit from their conclusions.
The answer depends on your intentions.
If you intend the consumer of your namespaces to use the interfaces over the concrete implementations, I would recommend having your interfaces in the top-level namespace with the implementations in a child namespace.
If the consumer is to use both, have them in the same namespace.
If the interface is for predominantly specialized use, like creating new implementations, consider having them in a child namespace such as Design or ComponentModel.
I'm sure there are other options as well, but as with most namespace issues, it comes down to the use-cases of the project, and the classes and interfaces it contains.
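For the first option, a minimal sketch (the MyCompany.Messaging names are invented for illustration) might look like this, with the interface at the top level and the concrete type one namespace down:

namespace MyCompany.Messaging
{
    public interface IMessageSender
    {
        void Send(string message);
    }
}

namespace MyCompany.Messaging.Implementations
{
    public class SmtpMessageSender : MyCompany.Messaging.IMessageSender
    {
        public void Send(string message)
        {
            // Send via SMTP here; omitted in this sketch.
        }
    }
}

Consumers then only need a using directive for MyCompany.Messaging unless they construct the concrete type themselves.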
I usually keep the interface in the same namespace as the concrete types.
But, that's just my opinion, and namespace layout is highly subjective.
Animals
|
| - IAnimal
| - Dog
| - Cat
Plants
|
| - IPlant
| - Cactus
You don't really gain anything by moving one or two types out of the main namespace, but you do add the requirement for one extra using statement.
What I generally do is to create an Interfaces namespace at a high level in my hierarchy and put all interfaces in there (I do not bother to nest other namespaces in there as I would then end up with many namespaces containing only one interface).
Interfaces
|--IAnimal
|--IVegetable
|--IMineral
MineralImplementor
Organisms
|--AnimalImplementor
|--VegetableImplementor
This is just the way that I have done it in the past and I have not had many problems with it, though admittedly it might be confusing to others sitting down with my projects. I am very curious to see what other people do.
I prefer to keep my interfaces and implementation classes in the same namespace. When possible, I give the implementation classes internal visibility and provide a factory (usually in the form of a static factory method that delegates to a worker class, with an internal method that allows unit tests in a friend assembly to substitute a different worker that produces stubs). Of course, if the concrete class needs to be public (for instance, if it's an abstract base class), that's fine; I don't see any reason to put an ABC in its own namespace.
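One possible reading of that setup, sketched with invented names (Acme.Parsing, IParser) and assuming an InternalsVisibleTo attribute for the test assembly:

namespace Acme.Parsing
{
    public interface IParser
    {
        string Parse(string input);
    }

    public static class Parser
    {
        // Default worker; internal types are visible to friend test
        // assemblies via InternalsVisibleTo.
        private static IParser worker = new DefaultParser();

        // Public factory method consumers call.
        public static IParser Create() => worker;

        // Internal hook that lets a test assembly substitute a stub worker.
        internal static void SetWorker(IParser substitute) => worker = substitute;
    }

    internal class DefaultParser : IParser
    {
        public string Parse(string input) => input.Trim();
    }
}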
On a side note, I strongly dislike the .NET convention of prefacing interface names with the letter 'I.' The thing the (I)Foo interface models is not an ifoo, it's simply a foo. So why can't I just call it Foo? I then name the implementation classes specifically, for example, AbstractFoo, MemoryOptimizedFoo, SimpleFoo, StubFoo etc.
(.Net) I tend to keep interfaces in a separate "common" assembly so I can use that interface in several applications and, more often, in the server components of my apps.
Regarding namespaces, I keep them in BusinessCommon.Interfaces.
I do this to ensure that neither I nor my developers are tempted to reference the implementations directly.
Separate the interfaces in some way (projects in Eclipse, etc) so that it's easy to deploy only the interfaces. This allows you to provide your external API without providing implementations. This allows dependent projects to build with a bare minimum of externals. Obviously this applies more to larger projects, but the concept is good in all cases.
I usually separate them into two separate assemblies. One of the usual reasons for an interface is to have a series of objects look the same to some subsystem of your software. For example, I have all my Reports implementing the IReport interface. IReport is not only used in printing but also for previewing and for selecting the individual options for each report. Finally, I have a collection of IReport to use in a dialog where the user selects which reports (and configures options) they want to print.
The Reports reside in a separate assembly, while IReport, the preview engine, the print engine, and the report selection logic reside in their respective core and/or UI assemblies.
If you use a factory class in the report assembly to return a list of available reports, then updating the software with a new report becomes merely a matter of copying the new report assembly over the original. You can even use the Reflection API to scan the list of assemblies for any report factories and build your list of reports that way.
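A sketch of that reflection scan (IReport and IReportFactory are assumed names here, not necessarily the original project's):

using System;
using System.Collections.Generic;
using System.Linq;

public interface IReport { string Name { get; } }

public interface IReportFactory
{
    IEnumerable<IReport> CreateReports();
}

public static class ReportLoader
{
    public static List<IReport> LoadReports()
    {
        var reports = new List<IReport>();

        // Find every non-abstract type in the loaded assemblies that
        // implements IReportFactory. (A production version would guard
        // against ReflectionTypeLoadException.)
        var factoryTypes = AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(a => a.GetTypes())
            .Where(t => typeof(IReportFactory).IsAssignableFrom(t)
                        && !t.IsAbstract && !t.IsInterface);

        foreach (var type in factoryTypes)
        {
            var factory = (IReportFactory)Activator.CreateInstance(type);
            reports.AddRange(factory.CreateReports());
        }
        return reports;
    }
}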
You can apply this technique to files as well. My own software runs a metal-cutting machine, so we use this idea for the shape and fitting libraries we sell alongside our software.
Again, the classes implementing a core interface should reside in a separate assembly so you can update it separately from the rest of the software.
I'll give my own experience, which goes against the other answers.
I tend to put all my interfaces in the package they belong to. This ensures that, if I move a package to another project, I have everything needed to run the package without any changes.
For me, any helper functions and operator functions that are part of the functionality of a class should go into the same namespace as that of the class, because they form part of the public API of that namespace.
If you have common implementations that share the same interface in different packages, you probably need to refactor your project.
Sometimes I see that there are plenty of interfaces in a project that could be converted into an abstract implementation rather than an interface.
So, ask yourself if you are really modeling a type or a structure.
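To illustrate the distinction (sketched in C# with made-up names): an interface only declares a capability, while an abstract base class can also carry the shared behaviour, leaving just the varying step to subclasses.

// A bare interface only describes a capability...
public interface IExporter
{
    void Export(string path);
}

// ...while an abstract base class can capture the shared structure,
// leaving only the varying step to subclasses.
public abstract class ExporterBase
{
    public void Export(string path)
    {
        var data = Prepare();
        System.IO.File.WriteAllText(path, data);
    }

    protected abstract string Prepare();
}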
A good example might be looking at what Microsoft does.
Assembly: System.Runtime.dll
System.Collections.Generic.IEnumerable<T>
Where are the concrete types?
Assembly: System.Collections.dll
System.Collections.Generic.List<T>
System.Collections.Generic.Queue<T>
System.Collections.Generic.Stack<T>
// etc
Assembly: EntityFramework.dll
System.Data.Entity.IDbSet<T>
Concrete Type?
Assembly: EntityFramework.dll
System.Data.Entity.DbSet<T>
Further examples
Microsoft.Extensions.Logging.ILogger<T>
- Microsoft.Extensions.Logging.Logger<T>
Microsoft.Extensions.Options.IOptions<T>
- Microsoft.Extensions.Options.OptionsManager<T>
- Microsoft.Extensions.Options.OptionsWrapper<T>
- Microsoft.Extensions.Caching.Memory.MemoryCacheOptions
- Microsoft.Extensions.Caching.SqlServer.SqlServerCacheOptions
- Microsoft.Extensions.Caching.Redis.RedisCacheOptions
Some very interesting tells here. When the namespace changes to support the implementation, the distinguishing namespace segment (Memory, SqlServer, Redis under Caching) is also prefixed onto the derived type, e.g. RedisCacheOptions. Additionally, the derived types live in an additional namespace specific to the implementation.
Memory -> MemoryCacheOptions
SqlServer -> SqlServerCacheOptions
Redis -> RedisCacheOptions
This seems like a fairly easy pattern to follow most of the time. As an example (since no example was given), the following pattern might emerge:
CarDealership.Entities.Dll
CarDealership.Entities.IPerson
CarDealership.Entities.IVehicle
CarDealership.Entities.Person
CarDealership.Entities.Vehicle
Maybe a technology like Entity Framework prevents you from using the predefined classes. Thus we make our own.
CarDealership.Entities.EntityFramework.Dll
CarDealership.Entities.EntityFramework.Person
CarDealership.Entities.EntityFramework.Vehicle
CarDealership.Entities.EntityFramework.SalesPerson
CarDealership.Entities.EntityFramework.FinancePerson
CarDealership.Entities.EntityFramework.LotVehicle
CarDealership.Entities.EntityFramework.ShuttleVehicle
CarDealership.Entities.EntityFramework.BorrowVehicle
Not that it happens often, but maybe there's a decision to switch technologies for whatever reason, and now we have...
CarDealership.Entities.Dapper.Dll
CarDealership.Entities.Dapper.Person
CarDealership.Entities.Dapper.Vehicle
//etc
As long as we're programming to the interfaces we've defined in the root Entities namespace (following the Liskov Substitution Principle), downstream code doesn't care where or how the interface was implemented.
More importantly, in my opinion, creating derived types also means you don't have to constantly include a different namespace, because the parent namespace contains the interfaces. I'm not sure I've ever seen a Microsoft example of interfaces stored in child namespaces that are then implemented in the parent namespace (almost an anti-pattern if you ask me).
I definitely don't recommend segregating your code by type, eg:
MyNamespace.Interfaces
MyNamespace.Enums
MyNamespace.Classes
MyNamespace.Structs
This adds no descriptive value, and it's akin to using System Hungarian notation, which is mostly, if not now exclusively, frowned upon.
I HATE finding interfaces and implementations in the same namespace/assembly. Please don't do that; if the project evolves, it's a pain in the ass to refactor.
When I reference an interface, I want to implement it, not to get all its implementations.
What might be admissible is to put the interface with its dependent class (the class that references the interface).
EDIT: @Josh, I just read my last sentence, and it's confusing! Of course, both the dependent class and the one that implements it reference the interface. To make myself clear, I'll give examples:
Acceptable:
Interface + dependent class:
namespace A;

interface IMyInterface
{
    void MyMethod();
}

namespace A;

class MyDependentClass
{
    private IMyInterface inject;

    public MyDependentClass(IMyInterface inject)
    {
        this.inject = inject;
    }

    public void DoJob()
    {
        // Bla bla
        inject.MyMethod();
    }
}
Implementing class:
namespace B;

class MyImplementing : IMyInterface
{
    public void MyMethod()
    {
        Console.WriteLine("hello world");
    }
}
NOT ACCEPTABLE:
namespace A;

interface IMyInterface
{
    void MyMethod();
}

namespace A;

class MyImplementing : IMyInterface
{
    public void MyMethod()
    {
        Console.WriteLine("hello world");
    }
}
And please DON'T CREATE a garbage project for your interfaces! Example: ShittyProject.Interfaces. You've missed the point!
Imagine you created a DLL reserved for your interfaces (200 MB). If you had to add a single interface with two lines of code, your users would have to update 200 MB just for two dumb signatures!
I know that ABAP Objects is kinda old, but as far as I know you still have to use at least two "sections" to create a complete class.
ABAP:
CLASS CL_MYCLASS DEFINITION.
PUBLIC SECTION.
...
PROTECTED SECTION.
...
PRIVATE SECTION.
...
ENDCLASS.
CLASS CL_MYCLASS IMPLEMENTATION.
...
ENDCLASS.
Java:
public class MyClass {
<visibility> <definition> {
<implementation>
}
}
Wouldn't it make development easier/faster by having a combination of both like most modern languages have?
What are the reasons for this separation?
Easier/faster for the human (maybe), but costly for the compiler: it has to sift through the entire code to determine the structure of the class and its members, whereas in the current form it only needs to compile the definition to determine whether a reference is valid. ABAP is not the only language that separates definition from implementation: Pascal did so for units, and Object Pascal for classes. One might argue that C++ allows for the same construct, without specifying an implementation section, when you're not using inline member function declarations.
Maybe another reason:
Most (?) classes are not defined with manually written code, but via SE24. There you define the interface in one dynpro and write the code in another.
Internally, the interfaces are stored in one source and the code in another source. So it is reasonable to separate the interface and the implementation.
Well, I'm creating a library and that library needs to take all other libraries and make them work "alike".
For example: imagine that I have 5 libraries, and all of those libraries have the same idea and work on the same kind of problem, but each has its own way of working and its own API; what I need is to make them work through a single API.
What I have in mind is to create a "factory" with a "trust list" inside it that lets the user choose which library to create; the factory looks at the trust list and, if the library really exists, creates and returns it.
But it can also be done using interfaces, where I accept only classes that implement a specified interface, so I have the guarantee that the methods I want are implemented. What does this mean? All the libraries need to implement that interface and its methods, with a kind of wrapper around each library, and that way they will all work with the same API. The user can create a library using the factory and use the same API for all of them.
I don't know if you understand what I'm trying to explain, but I want to know, based on what I said, which is best in my situation: the "bridge" or the "adapter" pattern?
And also, is my idea correct or am I crazy? (The interface and factory thing, and also the bridge and adapter; tell me what you have in mind.)
Thank you all in advance.
The Bridge pattern is designed to separate a class's interface from its implementation so you can vary or replace the implementation without changing the client code.
I think you can specify a public non-virtual interface and then, using the Template Method pattern, have each of these public functions invoke an implementation method.
// The class that actually does the work (stands in for e.g. a wrapped library).
class Implementation {
public:
    void doA() {}
    void doB() {}
};

class Basic {
public:
    // Stable, non-virtual interface.
    void A() { doA(); }
    void B() { doB(); }
    // ...
private:
    // Customization is an implementation detail that may
    // or may not directly correspond to the interface.
    // Each of these functions might optionally be
    // pure virtual instead.
    virtual void doA() { impl_->doA(); }
    virtual void doB() { impl_->doB(); }

    Implementation* impl_ = nullptr;  // supplied elsewhere, e.g. via the constructor
};
These readings might be useful:
Bridge pattern
Template method
Sounds like the Adapter pattern to me. You have multiple adapter implementations, which is basic polymorphism, and each adapter knows how to adapt to its specific library.
I don't see how the bridge pattern would make any sense here. You would typically use that in places where you use these libraries, but you don't know the specific library implementation yet.
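Concretely, an adapter per library might look roughly like this in C# (IImageProcessor and LibraryAlpha are placeholders, not real libraries):

// The single API the application programs against.
public interface IImageProcessor
{
    byte[] Resize(byte[] image, int width, int height);
}

// One adapter per third-party library; each wraps that library's own API.
public class LibraryAlphaAdapter : IImageProcessor
{
    private readonly LibraryAlpha inner = new LibraryAlpha();

    public byte[] Resize(byte[] image, int width, int height)
    {
        // Translate the common call into LibraryAlpha's own vocabulary.
        return inner.Scale(image, width, height);
    }
}

// Placeholder standing in for the third-party API being adapted.
public class LibraryAlpha
{
    public byte[] Scale(byte[] image, int w, int h) => image; // stub
}

Your factory can then return whichever adapter matches the entry the user picked from the "trust list".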
I'm not so sure the title is a good match for this question I want to put on the table.
I'm planning to create a web MVC framework as my graduation dissertation, and in a previous conversation with my advisor, trying to define some achievements, he convinced me that I should choose a modular design for this project.
I had already developed some things by then, and I stopped for a while to analyze how modular it would be, but I couldn't really do it because I don't know the real meaning of "modular".
Some things are not very clear to me. For example, does just referencing another module blow up the modularity of my system?
Let's say I have a Database Access module and it can OPTIONALLY use a Cache module for storing the results of complex queries. As anyone can see, I will at least have a naming dependency on the Cache module.
In my conception of "modular design", I can distribute each component separately and make it interact with others developed by other people. In the case I showed, if someone wants to use my Database Access module, they will have to take the Cache as well, even if they will not use it, just for referencing/naming purposes.
And so, I was wondering if this is really a modular design yet.
I came up with an alternative that is something like creating each component on its own, without it even knowing about the existence of other components that are not absolutely required for its functioning. To extend functionality, I could create some structure based on Decorators and Adapters.
To clarify things a little bit, here is an example (in PHP):
Before
interface Cache {
public function isValid();
public function setValue();
public function getValue();
}
interface CacheManager {
public function get($name);
public function put($name, $value);
}
// Some concrete implementations...
interface DbAccessInterface {
public function doComplexOperation();
}
class DbAccess implements DbAccessInterface {
private $cacheManager;
public function __construct(..., CacheManager $cacheManager = null) {
// ...
$this->cacheManager = $cacheManager;
}
public function doComplexOperation() {
if ($this->cacheManager !== null) {
// return from cache if valid
}
// complex operation
}
}
After
interface Cache {
public function isValid();
public function setValue();
public function getValue();
}
interface CacheManager {
public function get($name);
public function put($name, $value);
}
// Some concrete implementations...
interface DbAccessInterface {
public function doComplexOperation();
}
class DbAccess implements DbAccessInterface {
public function __construct(...) {
// ...
}
public function doComplexOperation() {
// complex operation
}
}
// And now the integration module
class CachedDbAccess implements DbAccessInterface {
private $dbAccess;
private $cacheManager;
public function __construct(DbAccessInterface $dbAccess, CacheManager $cacheManager) {
$this->dbAccess = $dbAccess;
$this->cacheManager = $cacheManager;
}
public function doComplexOperation() {
$cache = $this->cacheManager->get("Foo");
if($cache->isValid()) {
return $cache->getValue();
}
// Do complex operation...
}
}
Now my question is:
Is this the best solution? Should I do this for all modules that are not required to work together but can be more efficient when they do?
Would anyone do it in a different way?
I have some further questions about this, but I don't know if this is an acceptable question for Stack Overflow.
P.S.: English is not my first language, so some parts may be a little confusing.
Some resources (not theoretical):
Nuclex Plugin Architecture
Python Plugin Application
C++ Plugin Architecture (use NoScript on that site, they have some weird login policies)
Other SO threads (design pattern for plugins in php)
Django Middleware concept
Does just referencing another module blow up the modularity of my system?
Not necessarily. It's a dependency. Having dependencies is perfectly normal. Without dependencies, modules can't interact with each other (unless you make them interact indirectly, which in general is a bad practice since it hides dependencies and complicates the code). Modular design implies managing dependencies, not removing them.
One tool is using interfaces. Referencing a module via an interface creates a so-called soft dependency. Such a module can accept any implementation of the interface as a dependency, so it is more independent and, as a result, more maintainable.
The other tool is designing modules (and their interfaces) so that each has only a single responsibility. This also makes them more granular, independent and maintainable.
But there is a line you should not cross: blindly applying these tools may lead to an overly modular and overly generic design. Making things too granular makes the whole system more complex. You should not try to solve universal problems by making generic modules that all developers can use (unless that is your goal). First of all, your system should solve your domain tasks; make things generic enough, but not more than that.
I came up with an alternative that is something like creating each component on its own, without it even knowing about the existence of other components that are not absolutely required for its functioning
It is great that you came up with this idea by yourself. The statement itself is a key to modular programming.
A plugin architecture is the best in terms of extensibility, but IMHO it is hard to maintain, especially within a single application. And depending on the complexity of the plugin architecture, it can make your code more complex by adding plugin logic, etc.
Thus, for intra-application modular design, I choose an N-tier, interface-based architecture. Basically, the architecture relies on these tiers:
Domain / Entity
Interface [Depend on 1]
Services [Depend on 1 and 2]
Repository / DAL [Depend on 1 and 2]
Presentation Layer [Depend on 1,2,3,4]
Unfortunately, I don't think this is achievable neatly in PHP projects, as it needs separate project/DLL references for each tier. However, following the architecture can help to modularize the application.
For each module, we need to do interface-based design. It can help to enhance the modularity of your code, because you can change the implementation later but still keep the consumer the same.
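A tiny C# sketch of that interface-based design (names are illustrative only): the consumer depends solely on the interface, so the implementation can be swapped without touching it.

public interface ICustomerRepository
{
    string GetName(int id);
}

// The consumer depends only on the interface.
public class CustomerService
{
    private readonly ICustomerRepository repository;

    public CustomerService(ICustomerRepository repository)
    {
        this.repository = repository;
    }

    public string Greet(int id) => "Hello, " + repository.GetName(id);
}

// Swappable implementations: a database-backed one later,
// an in-memory one for now or for tests.
public class InMemoryCustomerRepository : ICustomerRepository
{
    public string GetName(int id) => "Customer " + id;
}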
I have provided an answer similar to this interface-based design at this Stack Overflow question.
Last but not least, if you want to make your application modular up to the UI, you can use a Service-Oriented Architecture. This simply means making your application a bunch of services and having the UI consume those services. This design helps to separate your UI from your logic. You can later use a different UI, such as a desktop app, but still use the same logic. Unfortunately, I don't have any reliable source for SOA.
EDIT:
I misunderstood the question. This is my point of view about a modular framework. Unfortunately, I don't know much about Zend, so I will give examples in C#:
It consists of modules, from the smallest up to larger modules. An example in C#: you can use Windows Forms (larger) in your application, and also the Graphics class (smaller) to draw custom shapes on the screen.
It is extensible or replaceable without changes to the base class. In C# you can attach a Load event handler (extensible) to the Form class, inherit from the Form or List class (extensible), or override the form's paint method to create custom window graphics (replaceable).
(Optional) It is easy to use. In normal DI interface design, we usually inject smaller modules into a larger (high-level) module. This will require an IoC container. Refer to my question for details.
It is easy to configure and does not involve any magical logic such as the Service Locator pattern. Search for "Service Locator is an anti-pattern" on Google.
I don't know much about Zend; however, I guess that modularity in Zend means it can be extended without changing the core code inside the framework.
If you said that:
if someone wants to use my Database Access module, they will have to take the Cache as well, even if they will not use it, just for referencing/naming purposes.
Then it is not modular. It is integrated, meaning that your Database Access module will not work without the Cache. For reference, the C# class library chose to provide List<T> and BindingList<T> for different functionality. In your case, IMHO it is better to provide CachedDataAccess and DataAccess.
Say FrameworkA consumes a FrameworkA.StandardLogger class for logging. I want to replace the logging library with another one (the SuperLogger class).
To make that possible, there are interfaces: FrameworkA will provide a FrameworkA.Logger interface that other libraries have to implement.
But what if other libraries don't implement that interface? FrameworkA might not be a popular enough framework for SuperLogger to care about its interface.
Possible solutions are:
have a standardized interface (defined by standards like JSR, PSR, ...)
write adapters
What if there is no standardized interface, and you want to avoid the pain of writing useless adapters if classes are compatible?
Couldn't there be another solution to ensure a class meets a contract, but at runtime?
Imagine (very simple implementation in pseudo-code):
namespace FrameworkA;
interface Logger {
void log(message);
}
namespace SuperLoggingLibrary;
class SuperLogger {
void log(message) {
// ...
}
}
SuperLogger would be compatible with Logger if only it implemented the Logger interface. But instead of having a hard dependency on FrameworkA.Logger, its public "interface" (or signature) could be verified at runtime:
// Something verifies that SuperLogger implements Logger at run-time
Logger logger = new SuperLogger();
// setLogger() expect Logger, all works
myFrameworkAConfiguration.setLogger(logger);
In the fake scenario, I expect the Logger logger = new SuperLogger() line to fail at run-time if the class is not compatible with the interface, but to succeed if it is.
Would that be a valid thing in OOP? If yes, does it exist in any language? If no, why is it not valid?
My question stands for statically-typed languages (Java, ...) or dynamically typed languages (PHP, ...).
For PHP et al.: I know that when there is no type check you can use any object you want even if it doesn't implement the interface, but I'd be interested in something that actually checks that the object complies with the interface.
This is called duck typing, a concept that you will find in Ruby ("it walks like a duck, it quacks like a duck, it must be a duck").
In other dynamically typed languages you can simulate it, for example in PHP with method_exists. In statically typed languages there might be workarounds with reflection; a search for "duck typing + language" will help you find them.
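As a sketch of the reflection workaround in a statically typed language (C# here, with a hypothetical log method to look for), you could check at runtime whether an arbitrary object is "shaped like" the expected interface before using it:

using System;
using System.Reflection;

public static class DuckChecker
{
    // Returns true if 'candidate' has a public void log(string) method,
    // i.e. it is shaped like the Logger interface without implementing it.
    public static bool LooksLikeLogger(object candidate)
    {
        MethodInfo method = candidate.GetType()
            .GetMethod("log", new[] { typeof(string) });
        return method != null && method.ReturnType == typeof(void);
    }
}

C# also has the dynamic keyword, which defers member resolution to runtime in a similar spirit.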
This is more of a static typing issue than an OOP one. Both Java and Ruby are OO languages, but Java wouldn't allow what you want (as it's statically typed) whereas Ruby would (as it's dynamically typed).
From a statically typed language's point of view, one of the major (if not the major) advantages is knowing at compile time whether an assignment is safe and valid. What you're looking for is provided by dynamically typed languages (such as Ruby), but isn't possible in a statically typed language, and this is by design (compile-time safety).
You can, though it is ugly, do something like this (in Java):
Object objLogger = new SupperLogger();
Logger logger = (Logger)objLogger;
This would pass at compile time but would fail at runtime if the assignment was invalid.
That said, the above is pretty ugly and isn't something I would do; it doesn't give you much and risks an unpleasant (and possibly surprising) exception at runtime.
I guess the best you could hope for in a language like Java would be to abstract the creation away from where you want to use it:
Logger logger = getLogger();
With the internals of getLogger deciding what to return. This, however, just defers the actual creation further down; you'll still have to do it in a statically type-safe way.
My question is rather simple and the title states it perfectly: How do you name your "reference" or "basic" implementations of an interface? I saw some naming conventions:
FooBarImpl
DefaultFooBar
BasicFooBar
What do you use? What are the pros and cons? And where do you put those "reference" implementations? Currently I create an .impl package where the implementations go. More complex implementations which may contain multiple classes go into an .impl.complex package, where "complex" is a short name describing the implementation.
Thank you,
Malax
I wonder if your question reflects the customs of a particular language. I write in C#, and I typically don't have a "default" implementation. I have an interface, say IDistance, and each implementation has a name that describes its actual purpose / how it is specific, say EuclideanDistance, ManhattanDistance... In my opinion, "default" is not a property of the implementation itself, but of its context: the application could have a service/method called GetDefaultDistance, which would be configured to return one of the distance implementations.
In Java, whenever suitable, I typically use a nested class called RefImpl. This way, for a given interface InterfaceXYZ, the reference implementation is always InterfaceXYZ.RefImpl, and there is no need to fumble around making up effectively redundant names.
public interface InterfaceXYZ {
// interface methods ...
public static class RefImpl implements InterfaceXYZ {
// interface method impls.
}
}
And then have a uniform usage pattern:
// some where else
public void foo () {
InterfaceXYZ anXYZ = new InterfaceXYZ.RefImpl();
...
}
I asked a previous question about a "null" implementation and it was identified as the null object pattern - an implementation of an interface that does nothing meaningful. Like Mathias, I'm not too sure what would be considered a "default" implementation that didn't have some kind of name specific to its implementation.
If the interface is a RazmaFrazzer, I'd call the implementation a DefaultRazmaFrazzer.
If you've already got several implementations and you've marked one of them out as the "default", look at all the implementations, look at the differences between them, and come up with an adjective that describes the distinguishing feature of the default implementation, e.g. SimpleRazmaFrazzer, or, if it's a converter, you might have PassThroughDefaultRazmaFrazzer; you're looking for whatever makes the implementation distinctive.
The exact convention doesn't matter, be it IService + Service or Service + ServiceImpl. The point is to be consistent throughout the whole project.