Dependency Injection and Circular Reference - OOP

I am just starting out with DI and unit testing and have hit a snag which I am sure is a no-brainer for more experienced devs:
I have a class called MessageManager which receives data and saves it to a db. Within the same assembly (project in Visual Studio) I have created a repository interface with all the methods needed to access the db.
The concrete implementation of this interface is in a separate assembly called DataAccess.
So DataAccess needs a project reference to MessageManager to know about the repository interface.
And MessageManager needs a project reference to DataAccess so that the client of MessageManager can inject a concrete implementation of the repository interface.
This is of course not allowed.
I could move the interface into the data access assembly, but I believe the repository interface is meant to reside in the same assembly as the client that uses it.
So what have I done wrong?

You should move your interface out of both assemblies. Putting the interface alongside either the consumer or the implementor defeats the purpose of having the interface.
The purpose of the interface is to allow you to inject any object that implements it, whether or not it lives in the same assembly as your DataAccess object. At the same time, you need to allow MessageManager to consume that interface without consuming any concrete implementation.
Put your interface in another project, and the problem is solved.
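As a minimal sketch (the project and type names here are just illustrative, not from the question), the layout could look like this, with both MessageManager and DataAccess referencing only a small contracts project:

namespace MyApp.Contracts // separate project referenced by both sides
{
    public interface IMessageRepository
    {
        void Save(string message);
    }
}

namespace MyApp.MessageManagement // references MyApp.Contracts only
{
    using MyApp.Contracts;

    public class MessageManager
    {
        private readonly IMessageRepository repository;

        public MessageManager(IMessageRepository repository)
        {
            this.repository = repository;
        }

        public void Process(string message)
        {
            // ...validation, business rules...
            repository.Save(message);
        }
    }
}

namespace MyApp.DataAccess // also references MyApp.Contracts only
{
    using MyApp.Contracts;

    public class SqlMessageRepository : IMessageRepository
    {
        public void Save(string message)
        {
            // write the message to the database
        }
    }
}

Only the application that composes everything (the composition root) references both projects and passes a SqlMessageRepository into MessageManager.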

You only have two choices: add an assembly to hold the interface, or move the interface into the DataAccess assembly. Even if you're developing an architecture where the DataAccess class may someday be replaced by another implementor of the repository interface (even one in another assembly), there's no reason to exclude the interface from the DataAccess assembly.

Are you using an Inversion of Control Container? If so, the answer is simple.
Assembly A contains:
MessageManager
IRepository
ContainerA (add MessageManager)
Assembly B (which references Assembly A) contains:
Repository implements IRepository
ContainerB extends ContainerA (add Repository)
Assembly C (or B) would start the app and ask the container for MessageManager; the container knows how to resolve both MessageManager and the IRepository.
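For example, a minimal sketch using Ninject modules (the answer doesn't name a specific container, so the module and kernel types here are just one way to do it):

using Ninject;
using Ninject.Modules;

// Assembly A: registers what it owns.
public class ContainerA : NinjectModule
{
    public override void Load()
    {
        Bind<MessageManager>().ToSelf();
    }
}

// Assembly B (references Assembly A): adds the concrete repository.
public class ContainerB : NinjectModule
{
    public override void Load()
    {
        Bind<IRepository>().To<Repository>();
    }
}

// Assembly C (the composition root): loads both modules and resolves the graph.
public static class Program
{
    public static void Main()
    {
        var kernel = new StandardKernel(new ContainerA(), new ContainerB());
        var manager = kernel.Get<MessageManager>(); // IRepository is injected automatically
    }
}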

I think you should move the repository interface over to the DataAccess assembly. Then DataAccess has no need to reference MessageManager anymore.
However, it remains hard to say since I know next to nothing about your architecture...

Frequently you can solve circular reference issues by using setter injection instead of constructor injection.
In pseudo-code:
Foo f = new Foo();
Bar b = new Bar();
f.setBar(b);
b.setFoo(f);
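A minimal C# sketch of the same idea (Foo and Bar are just placeholders): both objects are constructed first and then wired together through setters, which pure constructor injection cannot do when the dependency is circular.

public class Foo
{
    public Bar Bar { get; set; } // setter-injected
}

public class Bar
{
    public Foo Foo { get; set; } // setter-injected
}

public static class Program
{
    public static void Main()
    {
        // Create both objects first, then wire up the circular references.
        var foo = new Foo();
        var bar = new Bar();
        foo.Bar = bar;
        bar.Foo = foo;
    }
}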

Dependency inversion is in play:
High level modules should not depend upon low level modules. Both should depend upon abstractions. Abstractions should not depend upon details. Details should depend upon abstractions.
The abstraction that the classes in the DataAccess assembly depend upon needs to be in a separate assembly, apart from both the DataAccess classes and the class that consumes that abstraction (MessageManager).
Yes, that means more assemblies. Personally that's not a big deal for me; I don't see a big downside in extra assemblies.

You could leave the structure as you currently have it (without the dependency from MessageManager to DataAccess that causes the problem) and then have MessageManager dynamically load the concrete implementation required at runtime using the System.Reflection.Assembly class.
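A hedged sketch of that approach (the file name and type name are placeholders; in practice you would read them from configuration):

using System;
using System.Reflection;

public static class RepositoryLoader
{
    public static IRepository Load()
    {
        // Load the data-access assembly by file name at runtime, so the
        // MessageManager project needs no compile-time reference to it.
        Assembly assembly = Assembly.LoadFrom("DataAccess.dll");

        // Find the concrete type and create it via its parameterless constructor.
        Type repositoryType = assembly.GetType("DataAccess.MessageRepository");
        return (IRepository)Activator.CreateInstance(repositoryType);
    }
}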


Where should my objects/models live if they contain domain functionality?

I've designed my classes using CRC cards and I have a lovely set of objects that contain domain/business logic AND data (properties). Some of the classes require saving to and reading from a database.
My repository should exist in a separate project to my domain objects, but needs to reference them in order to create them.
However, the domain objects/entities need to be able to reference the repository.
I could put the objects in the repository, but as they contain domain functionality, that doesn't feel right at all.
I could put the objects that require persistence in a common shared project, but again it feels wrong to single them out.
Where should I put them? I can't help feeling I'm missing something obvious.
Domain objects/entities should not use repositories; your domain/application services should. And that is done very simply: define repository interfaces in your Domain Model assembly and use them in the domain/application services.
The Domain library should contain:
Domain Model
Repository Interfaces
Domain Services (which use only the repository interfaces)
This library does not reference other libraries - it sits at the core of your system.
The Persistence library should contain the repository implementations specific to your data provider, e.g. using Entity Framework. This library references your Domain library, so it knows about the interfaces it should implement and the entities it works with.
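A small sketch of that layout (the type names are illustrative, not from the question):

// Domain assembly: entities, repository interfaces, and domain services.
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public interface IOrderRepository
{
    Order GetById(int id);
    void Add(Order order);
}

public class OrderService
{
    private readonly IOrderRepository orders;

    public OrderService(IOrderRepository orders)
    {
        this.orders = orders;
    }

    public void PlaceOrder(Order order)
    {
        // ...domain rules...
        orders.Add(order);
    }
}

// Persistence assembly (references the Domain assembly): the data-provider-specific implementation.
public class EfOrderRepository : IOrderRepository
{
    // DbContext plumbing omitted; this class is the only place that knows about Entity Framework.
    public Order GetById(int id) { /* query the database */ return null; }
    public void Add(Order order) { /* insert into the database */ }
}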
However, the domain objects/entities need to be able to reference the repository.
Do they? Or do they need to reference the interface of the repository? Then the repository itself is just an implementation of that interface, a low-level detail not needed by the domain logic code.
The way I normally structure a repository pattern in my projects is:
Domain Core Project (business models, core business logic, interfaces for dependencies)
Dependency Projects (references Domain Core Project, implements interfaces)
Application Projects (references Domain Core Project, references Dependency Projects either directly, or through configuration, or through an intermediary project which handles dependency injection)
As an example, suppose I'm using a Service Locator for my dependency injection (which I very often do). Then the business models only need to reference the Service Locator object (which itself is supplied by a factory and can be injected). So internal to a business model I might have something like this:
public class SomeBusinessModel
{
    private ISomeDependency SomeProperty
    {
        get
        {
            return DIFactory.Current.Resolve<ISomeDependency>();
        }
    }
}
The DIFactory has a static property called Current which is basically a factory method returning a dependency injection resolver, and its interface has a method called Resolve which takes a type and returns an instance.
So in this case...
SomeBusinessModel is in the Domain Core Project
ISomeDependency is in the Domain Core Project
IDIContainer (the return type for Current) is in the Domain Core Project
DIFactory is in the Domain Core Project, and is initialized (it has an Initialize method that sets the current injection container) by the Application Project for a specific dependency injection container
SomeDependency (the actual instance type being returned by the resolver) is in a Dependency Project
In this setup, the business models know that there needs to be a repository, and require that one be supplied, but they don't have a hard dependency on them. The application supplies the actual implementations for those repositories, either directly by providing an instance or indirectly by configuring a dependency injection container to provide an instance.
All actual dependencies point inward from the implementation details (applications and dependencies) to the core business logic. Never outward.

@Local and @Remote interfaces - Pass by reference vs deep copy

We have an app with three interfaces per bean: a *BI (business interface), which contains all the methods; a *LI, which extends BI and is annotated as @Local; and a *RI, which also extends BI but is annotated as @Remote.
I want to remove all *LI and *RI interfaces in favor of *BI, leaving them annotated as @Remote, but there is a problem.
A local lookup passes arguments by reference, while a remote lookup uses a deep copy. The app is full of pass-by-reference expectations (things that only work if pass-by-reference semantics apply).
If I have only @Remote interfaces, will the container know when a lookup is local and use pass-by-reference in that case?
The container is JBoss AS 7.1.1.Final and we use EJB 3.1.
Thanks in advance.
Since the local and remote interfaces behave differently, even a local client might want to use the remote interface, so it would be a problem if the container just guessed "this client is running in the same VM, let's give him a local interface".
Details on the differences were already discussed here.
If you want to keep all your interface definitions in one place, simply write a single interface and let it be extended by two (empty) interfaces annotated with @Local and @Remote.

IoC/DI - Implementation in internal class having only internal methods

We are implementing IoC/DI in our application using the Ninject framework. We have internal classes with internal methods. To implement IoC/DI we have to extract interfaces, but if an internal class has only internal methods, we can't extract an interface for that class.
So is there a way to implement IoC/DI in such cases (an internal class having only internal methods), or should we change our internal methods to public? Kindly suggest. Thanks.
If your class is already internal then there is absolutely no difference between internal and public methods: public methods of internal classes are only internally visible.
If you stick with injecting concrete classes, though, you lose all the advantages of DI. So yes, you should extract (internal) interfaces and inject the interfaces. This requires that the configuration code has access to the classes, either by being in the same assembly or by declaring the assembly a friend assembly. Furthermore, you have to configure Ninject to allow non-public classes; see NinjectSettings.
The only thing that you really need to make public is the interface (not the concrete implementation).
You can use an abstract factory or (easier) Ninject to map the public interface to the internal concrete class; thus your client code just has to request an instance of "a thing" that implements the interface, and your factory/container will return the implementation.
You should read up on Dependency Inversion Principle as well as it goes hand-in-hand with this.
You could use the InternalsVisibleTo attribute in the AssemblyInfo.cs file, like this:
[assembly: InternalsVisibleTo("Assembly_That_Should_Access_The_Internal_Class")]
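Putting those pieces together, a minimal sketch (the type names are mine): the interface and implementation stay internal, a public Ninject module in the same assembly does the binding, and InternalsVisibleTo lets the composition root (and tests) see the internal interface.

using Ninject.Modules;

// In AssemblyInfo.cs of the internal assembly:
// [assembly: InternalsVisibleTo("MyApp.CompositionRoot")]

internal interface IPriceCalculator
{
    decimal Calculate(decimal basePrice);
}

internal class PriceCalculator : IPriceCalculator
{
    public decimal Calculate(decimal basePrice)
    {
        return basePrice * 1.2m; // example logic only
    }
}

// Public module: external code loads this without ever naming the internal types.
public class ServicesModule : NinjectModule
{
    public override void Load()
    {
        // Depending on the Ninject version you may also need the
        // NinjectSettings tweak mentioned above for non-public types.
        Bind<IPriceCalculator>().To<PriceCalculator>();
    }
}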

Composition of Objects

I have a class that acts as a manager and does some work.
A servlet that starts up when the application server starts up instantiates this manager.
I need to add another class that will do other work, and needs to coordinate with the manager.
I was thinking about adding the class to the manager as an instance variable.
Should I have the manager instantiate the new class (like in the constructor), or have the servlet instantiate the new class and call manager.setNewClass() after the manager has been instantiated?
Well, as a gross over-generalization, you should instantiate it in the servlet and pass it into the manager (either via a constructor parameter, or via setNewClass())... Inject the dependencies rather than hard-coding them.
However, depending on your exact use-case, even that might not be the right answer. You might be better off with a Builder for constructing the manager class. That way, the builder manages the construction of the entire manager (including any dependencies) rather than hard-coding it into the servlet. This would move the dependency out of the servlet and into the builder (which you can better deal with in tests and other code).
The short answer is that there's no silver bullet. Without knowing the hard relationships between all of the classes, and the roles and responsibilities, it's hard to say the best method. But instantiating in a constructor is almost never a good idea and you should inject the dependency in some form or another (but from where is up for debate)...
This reminds me of the FFF pattern.
It does not matter where you create the instance. Just create it wherever fits you best, and if you need it somewhere else, apply some basic refactoring.
If you really need decoupling try using some tool like Guice, but only if you really need it.
You should do the latter -- it decouples the manager from its delegate. To do the decoupling correctly, you should create an interface that defines the behavior the manager expects, and then provide an implementation via inversion of control/dependency injection. This will allow you to test the manager and its worker class (I called it a delegate, but it might not be) in isolation.
EDIT -- this answer assumes Java because you mentioned servlets.
You have your manager class; in it you expect an interface:
class Manager {
    Worker worker;

    Manager(Worker worker) {
        this.worker = worker;
    }
}
Worker is an interface. It defines behaviour but not implementation:
interface Worker {
    public void doesSomething(); // method definition but no implementation
}
You now need to create an implementation:
class WorkerImpl implements Worker {
    // must provide a doesSomething() implementation
    public void doesSomething() {
        // actual work goes here
    }
}
The manager just knows it gets some Worker. You can provide any class that implements the interface. This is decoupling -- the Manager is not bound to any particular implementation; it is bound only to the behaviour of a worker.

In what namespace should you put interfaces relative to their implementors?

Specifically, when you create an interface/implementor pair and there is no overriding organizational concern (such as the interface needing to go in a different assembly, e.g. as recommended by the s# architecture), do you have a default way of organizing them in your namespace/naming scheme?
This is obviously a more opinion based question but I think some people have thought about this more and we can all benefit from their conclusions.
The answer depends on your intentions.
If you intend the consumer of your namespaces to use the interfaces over the concrete implementations, I would recommend having your interfaces in the top-level namespace with the implementations in a child namespace.
If the consumer is to use both, have them in the same namespace.
If the interface is for predominantly specialized use, like creating new implementations, consider having them in a child namespace such as Design or ComponentModel.
I'm sure there are other options as well, but as with most namespace issues, it comes down to the use-cases of the project, and the classes and interfaces it contains.
I usually keep the interface in the same namespace as the concrete types.
But, that's just my opinion, and namespace layout is highly subjective.
Animals
|
| - IAnimal
| - Dog
| - Cat
Plants
|
| - IPlant
| - Cactus
You don't really gain anything by moving one or two types out of the main namespace, but you do add the requirement for one extra using statement.
What I generally do is create an Interfaces namespace at a high level in my hierarchy and put all interfaces in there (I do not bother to nest other namespaces in there, as I would then end up with many namespaces containing only one interface).
Interfaces
|--IAnimal
|--IVegetable
|--IMineral
MineralImplementor
Organisms
|--AnimalImplementor
|--VegetableImplementor
This is just the way that I have done it in the past and I have not had many problems with it, though admittedly it might be confusing to others sitting down with my projects. I am very curious to see what other people do.
I prefer to keep my interfaces and implementation classes in the same namespace. When possible, I give the implementation classes internal visibility and provide a factory (usually in the form of a static factory method that delegates to a worker class, with an internal method that allows unit tests in a friend assembly to substitute a different worker that produces stubs). Of course, if the concrete class needs to be public -- for instance, if it's an abstract base class -- then that's fine; I don't see any reason to put an ABC in its own namespace.
On a side note, I strongly dislike the .NET convention of prefacing interface names with the letter 'I.' The thing the (I)Foo interface models is not an ifoo, it's simply a foo. So why can't I just call it Foo? I then name the implementation classes specifically, for example, AbstractFoo, MemoryOptimizedFoo, SimpleFoo, StubFoo etc.
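Combining both points, a rough sketch (the names are mine, not the answerer's): the interface keeps the plain name, the implementation stays internal and lives in the same namespace, and a static factory hands out instances while unit tests in a friend assembly can substitute a stub.

using System;

public interface Parser
{
    int Parse(string text);
}

// Implementation stays internal to the assembly.
internal class SimpleParser : Parser
{
    public int Parse(string text)
    {
        return int.Parse(text);
    }
}

// Public static factory; tests in a friend assembly (via InternalsVisibleTo)
// can swap in a stub through the internal setter.
public static class ParserFactory
{
    private static Func<Parser> create = () => new SimpleParser();

    public static Parser Create()
    {
        return create();
    }

    internal static void SetFactory(Func<Parser> factory)
    {
        create = factory;
    }
}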
(.Net) I tend to keep interfaces in a separate "common" assembly so I can use that interface in several applications and, more often, in the server components of my apps.
Regarding namespaces, I keep them in BusinessCommon.Interfaces.
I do this to ensure that neither I nor my developers are tempted to reference the implementations directly.
Separate the interfaces in some way (projects in Eclipse, etc) so that it's easy to deploy only the interfaces. This allows you to provide your external API without providing implementations. This allows dependent projects to build with a bare minimum of externals. Obviously this applies more to larger projects, but the concept is good in all cases.
I usually separate them into two assemblies. One of the usual reasons for an interface is to have a series of objects look the same to some subsystem of your software. For example, I have all my reports implementing the IReport interface. IReport is used not only for printing but also for previewing and for selecting individual options for each report. Finally, I have a collection of IReport to use in a dialog where the user selects which reports (and configures their options) they want to print.
The reports reside in a separate assembly, while IReport, the preview engine, the print engine, and the report-selection code reside in their respective core and/or UI assemblies.
If you use a factory class to return a list of available reports from the report assembly, then updating the software with a new report becomes merely a matter of copying the new report assembly over the original. You can even use the Reflection API to scan a list of assemblies for any report factories and build your list of reports that way.
You can apply this technique to files as well. My own software runs a metal-cutting machine, so we use this idea for the shape and fitting libraries we sell alongside our software.
Again, the classes implementing a core interface should reside in a separate assembly so you can update it separately from the rest of the software.
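A hedged sketch of that reflection-based discovery (IReport, IReportFactory, and the plug-in folder are assumptions for illustration, not details from the answer):

using System;
using System.Collections.Generic;
using System.IO;
using System.Reflection;

public interface IReport
{
    string Name { get; }
}

public interface IReportFactory
{
    IEnumerable<IReport> CreateReports();
}

public static class ReportLoader
{
    // Scan every assembly in a plug-in folder for report factories and collect their reports.
    public static List<IReport> LoadReports(string pluginFolder)
    {
        var reports = new List<IReport>();
        foreach (string file in Directory.GetFiles(pluginFolder, "*.dll"))
        {
            Assembly assembly = Assembly.LoadFrom(file);
            foreach (Type type in assembly.GetTypes())
            {
                if (typeof(IReportFactory).IsAssignableFrom(type) && !type.IsAbstract && !type.IsInterface)
                {
                    var factory = (IReportFactory)Activator.CreateInstance(type);
                    reports.AddRange(factory.CreateReports());
                }
            }
        }
        return reports;
    }
}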
My own experience goes against the other answers.
I tend to put all my interfaces in the package they belong to. This guarantees that, if I move a package to another project, I have everything the package needs to work without any changes.
For me, any helper functions and operator functions that are part of the functionality of a class should go into the same namespace as that of the class, because they form part of the public API of that namespace.
If you have common implementations that share the same interface in different packages you probably need to refactor your project.
Sometimes I see that there are plenty of interfaces in a project that could be converted into an abstract base class rather than an interface.
So, ask yourself if you are really modeling a type or a structure.
A good example might be looking at what Microsoft does.
Assembly: System.Runtime.dll
System.Collections.Generic.IEnumerable<T>
Where are the concrete types?
Assembly: System.Collections.dll
System.Collections.Generic.List<T>
System.Collections.Generic.Queue<T>
System.Collections.Generic.Stack<T>
// etc
Assembly: EntityFramework.dll
System.Data.Entity.IDbSet<T>
Concrete Type?
Assembly: EntityFramework.dll
System.Data.Entity.DbSet<T>
Further examples
Microsoft.Extensions.Logging.ILogger<T>
- Microsoft.Extensions.Logging.Logger<T>
Microsoft.Extensions.Options.IOptions<T>
- Microsoft.Extensions.Options.OptionsManager<T>
- Microsoft.Extensions.Options.OptionsWrapper<T>
- Microsoft.Extensions.Caching.Memory.MemoryCacheOptions
- Microsoft.Extensions.Caching.SqlServer.SqlServerCacheOptions
- Microsoft.Extensions.Caching.Redis.RedisCacheOptions
Some very interesting tells here. When the namespace changes to reflect the implementation, that namespace segment (e.g. Redis in Caching.Redis) is also prefixed to the name of the derived type (RedisCacheOptions). Additionally, the derived types live in an additional namespace specific to the implementation.
Memory -> MemoryCacheOptions
SqlServer -> SqlServerCacheOptions
Redis -> RedisCacheOptions
This seems like a fairly easy pattern to follow most of the time. As an example (since none was given in the question), the following pattern might emerge:
CarDealership.Entities.Dll
CarDealership.Entities.IPerson
CarDealership.Entities.IVehicle
CarDealership.Entities.Person
CarDealership.Entities.Vehicle
Maybe a technology like Entity Framework prevents you from using the predefined classes. Thus we make our own.
CarDealership.Entities.EntityFramework.Dll
CarDealership.Entities.EntityFramework.Person
CarDealership.Entities.EntityFramework.Vehicle
CarDealership.Entities.EntityFramework.SalesPerson
CarDealership.Entities.EntityFramework.FinancePerson
CarDealership.Entities.EntityFramework.LotVehicle
CarDealership.Entities.EntityFramework.ShuttleVehicle
CarDealership.Entities.EntityFramework.BorrowVehicle
Not that it happens often but may there's a decision to switch technologies for whatever reason and now we have...
CarDealership.Entities.Dapper.Dll
CarDealership.Entities.Dapper.Person
CarDealership.Entities.Dapper.Vehicle
//etc
As long as we're programming to the interfaces we've defined in the root Entities namespace (following the Liskov Substitution Principle), downstream code doesn't care where or how the interface was implemented.
More importantly, in my opinion, creating derived types also means you don't have to keep including a different namespace, because the parent namespace contains the interfaces. I'm not sure I've ever seen a Microsoft example of interfaces stored in child namespaces that are then implemented in the parent namespace (almost an anti-pattern if you ask me).
I definitely don't recommend segregating your code by type, eg:
MyNamespace.Interfaces
MyNamespace.Enums
MyNameSpace.Classes
MyNamespace.Structs
This adds no descriptive value, and it's akin to using Systems Hungarian notation, which is mostly, if not now universally, frowned upon.
I HATE it when I find interfaces and implementations in the same namespace/assembly. Please don't do that; if the project evolves, it's a pain in the ass to refactor.
When I reference an interface, I want to implement it, not pull in all its implementations.
What might be admissible is to put the interface with its dependent class (the class that references the interface).
EDIT: @Josh, I just read my last sentence; it's confusing! Of course, both the dependent class and the one that implements the interface reference it. To make myself clear, I'll give examples:
Acceptable :
Interface + dependent class:
namespace A;

interface IMyInterface
{
    void MyMethod();
}

namespace A;

class MyDependentClass
{
    private IMyInterface inject;

    public MyDependentClass(IMyInterface inject)
    {
        this.inject = inject;
    }

    public void DoJob()
    {
        // Bla bla
        inject.MyMethod();
    }
}
Implementing class:
using System;

namespace B;

class MyImplementing : IMyInterface
{
    public void MyMethod()
    {
        Console.WriteLine("hello world");
    }
}
NOT ACCEPTABLE:
namespace A;

interface IMyInterface
{
    void MyMethod();
}

using System;

namespace A;

class MyImplementing : IMyInterface
{
    public void MyMethod()
    {
        Console.WriteLine("hello world");
    }
}
And please DON'T CREATE a dumping-ground project for your interfaces! Example: ShittyProject.Interfaces. You've missed the point!
Imagine you created a DLL reserved for your interfaces (200 MB). If you had to add a single interface with two lines of code, your users would have to update 200 MB just for two dumb signatures!