When should we use @InjectMocks? - testing

I have read through a lot of discussion about @Mock & @InjectMocks and still could not find which cases make @InjectMocks suitable or necessary. In fact, I am not sure what actually happens when we use @InjectMocks.
Consider the following example,
public class User {
    private String user_name;
    private int user_id;

    public User(int user_id) {
        this.user_id = user_id;
    }

    public String getUser_name() {
        return user_name;
    }

    public void setUser_name(String user_name) {
        this.user_name = user_name;
    }

    public int getUser_id() {
        return user_id;
    }

    public void setUser_id(int user_id) {
        this.user_id = user_id;
    }
}

public interface UserDao {
    public List<User> getUserList();
}

public class UserService {
    private UserDao userDao;

    public UserDao getUserDao() {
        return userDao;
    }

    public void setUserDao(UserDao userDao) {
        this.userDao = userDao;
    }

    public List<User> getUserList() {
        return userDao.getUserList();
    }
}
This is my test class
@RunWith(MockitoJUnitRunner.class)
public class UserServiceTest {

    private UserService service = new UserService();

    @Mock
    private UserDao dao;

    @Test
    public void testGetUserList() {
        service.setUserDao(dao);
        // mock the return value of the mock object
        List<User> mockResult = Arrays.asList(new User(101), new User(102), new User(103));
        when(dao.getUserList()).thenReturn(mockResult);
        // check return value is same as mocked value
        assertEquals(service.getUserList(), mockResult);
        // verify the getUserList() function is called
        verify(dao).getUserList();
    }
}
The test runs successfully with no errors.
Consider another approach using the @InjectMocks annotation.
@RunWith(MockitoJUnitRunner.class)
public class UserServiceTest {

    @InjectMocks
    private UserService service;

    @Mock
    private UserDao dao;

    @Test
    public void testGetUserList() {
        service.setUserDao(dao);
        // mock the return value of the mock object
        List<User> mockResult = Arrays.asList(new User(101), new User(102), new User(103));
        when(dao.getUserList()).thenReturn(mockResult);
        // check return value is same as mocked value
        assertEquals(service.getUserList(), mockResult);
        // verify the getUserList() function is called
        verify(dao).getUserList();
    }
}
This works equally well. So, which way is better? Any best practice?
Btw, I am using JUnit 4.8.1 and Mockito 1.9.5.

Your actual question can't be answered without you providing further code (the code you are showing does not explain the results you claim to observe).
Regarding the underlying question: you probably should not use @InjectMocks; one of its core problems is that if injection fails, Mockito does not report an error to you.
In other words: you have passing unit tests, some internal detail about a field changes ... and your unit tests break; but you have no idea why, because the mocking framework doesn't tell you that the "initial" step of pushing a mock into the class under test is suddenly failing. See here for further reading.
But to be clear: in the end, this is almost a pure style question. People used to @InjectMocks will love it; other people object to it. Meaning: there is no clear evidence on whether you should use this concept. Instead: you study the concept, you understand the concept, and then you (and the team working with you) make a conscious decision whether you want to use this annotation or not.
Edit: I think I get your problem now. The idea of @InjectMocks is to inject a mocked object into some object under test.
But: you are doing that manually in both cases:
service.setUserDao(dao);
Meaning: if injection works correctly (and there isn't a problem that Mockito fails to report), then your example that uses the annotation should also work when you remove that one line, whereas the test case that doesn't have @InjectMocks should fail without that line!
In other words: both of your test cases pass because your code does a "manual inject"!

When should we use @InjectMocks?
IMHO: we should not use it.
It favors so many design issues that I would like to add my point of view on the question.
Here is the relevant information from the javadoc (the emphasis is not mine):
Mark a field on which injection should be performed.
Allows shorthand mock and spy injection.
Minimizes repetitive mock and spy injection.
Mockito will try to inject mocks only either by constructor injection,
setter injection, or property injection in order and as described
below. If any of the following strategy fail, then Mockito won't
report failure; i.e. you will have to provide dependencies yourself.
...
Mockito is not an dependency injection framework,
In short:
Advantage: you write less code to set up your mock dependencies.
Drawback: no failure if some mocks were not set in the object under test.
IMHO the advantage also hides another, even more serious drawback.
About the advantage:
Mockito uses constructor, setter, or reflection (property) injection to set the mocks in the object under test. Which way your fields get set depends on your class design: Mockito simply tries one strategy after the other.
The setter approach is often very verbose and makes the dependencies mutable, so developers don't use it frequently. That makes sense.
The constructor approach requires keeping a sound design with a decent number of dependencies: not more than 4 or 5 in general.
It demands strong design discipline.
Finally, the reflection approach is the easiest: little code to write and no mutability of the dependencies.
Reflection being the simplest approach from the average developer's point of view, I have seen many teams unaware of the consequences of specifying dependencies via reflection, abusing it not only in their Mockito tests but also in their designs, since @InjectMocks ultimately lets them get away with it.
You may consider it fine not to end up with a class that declares 30 setters, or a constructor with 30 arguments, but this also has a strong drawback: it favors bad smells such as hiding dependencies and declaring many dependencies as fields of the class. As a result your class becomes complex, bloated, hard to read, hard to test and error-prone to maintain.
The real point is that in the vast majority of cases, the right solution is constructor injection, which promotes sensible class responsibilities and thus good maintainability and testability.
About the drawback:
Mock injection doesn't work like the dependency injection of IoC containers.
The javadoc states it itself:
Mockito is not an dependency injection framework,
IoC and DI frameworks by default consider dependencies as required, so any missing dependency provokes a fast-fail error at startup.
That prevents many headaches and saves you precious time.
Mockito injection, on the other hand, silently ignores any mock that could not be injected, leaving it up to you to guess the cause of the error.
Finding the cause is sometimes complex, because you get no clue from Mockito and everything it does is done by reflection.

Using @InjectMocks injects the mocked objects as dependencies into the created object (the object marked with @InjectMocks). Creating the object with a constructor call defeats that purpose, since the mocked objects are not injected and all external calls from the SUT then hit concrete objects. Try to avoid creating the SUT object via a constructor call.

When to instantiate the repository, and what is its lifespan?

In DDD, it is the application layer that uses the repository to get the data from the database, calls the methods of the domain, and then calls the repository to persist the data. Something like this:
public void MyApplicationService()
{
    Order myOrder = _orderRepository.Get(1);
    myOrder.Update(data);
    _orderRepository.Commit();
}
In this example the repository is a class variable that is instantiated in the constructor of the service, so its lifetime is the lifetime of the class.
But I am wondering whether it wouldn't be better to instantiate a repository for each action I want to perform, to give it a shorter life; otherwise, if I use the class for many actions, the repository will hold many entities that it may never need again.
So I was thinking of a solution like this:
public void MyApplicationService()
{
    OrderRepository myOrderRepository = new OrderRepository(_options);
    Order myOrder = myOrderRepository.GetOrder(1);
    myOrder.Update(data);
    myOrderRepository.Commit();
    myOrderRepository.Dispose();
}
So, a new instance each time I need to perform the action.
In summary, I would like to know about the different solutions and their advantages and disadvantages, in order to decide the lifespan of the repository.
Thanks.
The recommended lifespan of the repository is one business transaction.
Your second piece of code is correct in that respect, but it has one drawback: you have created a strong dependency between the ApplicationService and OrderRepository classes. With your code, you are not able to isolate the two classes in order to unit test them separately. Also, you need to update the ApplicationService class whenever you change the constructor of OrderRepository. If OrderRepository requires parameters to construct, you have to construct them (which implies referencing their types and base types), even though this is an implementation detail of OrderRepository (needed for access to the persistence store) and of no concern to your application service layer.
For these reasons, most modern software development relies on a pattern called Dependency Injection (DI). With DI, you specify that your ApplicationService class depends on an instance of the OrderRepository class, or better, on an interface IOrderRepository which the OrderRepository class implements. The dependency is declared by adding a parameter to the ApplicationService constructor:
public interface IOrderRepository : IDisposable
{
    Order GetOrder(int id);
    void Commit();
}

public class ApplicationService
{
    private readonly IOrderRepository orderRepository;

    public ApplicationService(IOrderRepository orderRepository)
    {
        this.orderRepository = orderRepository ?? throw new ArgumentNullException(nameof(orderRepository));
    }

    public void Update(int id, string data)
    {
        Order myOrder = orderRepository.GetOrder(id);
        myOrder.Update(data);
        orderRepository.Commit();
    }
}
Now the DI library is responsible for constructing OrderRepository and injecting the instance into the ApplicationService class. If OrderRepository has its own dependencies, the library will resolve them first and construct the whole object graph, so you don't have to do that yourself. You simply tell your DI library which specific implementation you want for each referenced interface. For example in C#:
public IServiceCollection AddServices(IServiceCollection services)
{
    return services.AddScoped<IOrderRepository, OrderRepository>();
}
When unit testing your code, you can replace the actual implementation of OrderRepository with a mock object, such as Mock<IOrderRepository> or your own MockOrderRepository implementation. The code under test is then exactly the code in production, all wiring being done by the DI framework.
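For instance, a unit test with a hand-wired mock might look like this (a minimal sketch using Moq and MSTest; the parameterless Order constructor is an assumption):

[TestMethod]
public void Update_CommitsChangesToRepository()
{
    // Arrange: stub the repository so no database is involved
    var repository = new Mock<IOrderRepository>();
    repository.Setup(r => r.GetOrder(1)).Returns(new Order());

    var service = new ApplicationService(repository.Object);

    // Act
    service.Update(1, "new data");

    // Assert: the business transaction was committed exactly once
    repository.Verify(r => r.Commit(), Times.Once);
}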
Most modern DI libraries support object lifetime management, including transient (always resolve a new object), singleton (always reuse the same object), and scoped (each scope has a single instance). The latter is what is used to isolate object instances per business transaction, using a singleton ScopeFactory to create a scope whenever you start a business transaction:
public class UpdateOrderUseCase : UseCase
{
    private readonly IScopeFactory scopeFactory;

    public UpdateOrderUseCase(IScopeFactory scopeFactory) // redacted

    public void UpdateOrder(int id, string data)
    {
        using var scope = scopeFactory.CreateScope();
        var orderRepository = scope.GetService<IOrderRepository>();
        var order = orderRepository.GetOrder(id);
        order.Update(data);
        orderRepository.Commit();
        // disposing the scope will also dispose the object graph
    }
}
When you implement a REST service, that transaction usually corresponds to one HTTP request. Modern frameworks, such as ASP.NET Core, automatically create a scope per HTTP request and use it to resolve your dependency graph within the framework internals, which means you don't even have to handle the ScopeFactory yourself.
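In ASP.NET Core, for example, registering the repository as scoped is all that is needed for the per-request lifetime (a sketch using the minimal hosting model):

var builder = WebApplication.CreateBuilder(args);

// One repository instance per HTTP request, i.e. per business transaction
builder.Services.AddScoped<IOrderRepository, OrderRepository>();
builder.Services.AddScoped<ApplicationService>();

var app = builder.Build();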

How do I mock Func<T> factory dependency to return different objects using AutoMock?

I'm trying to write a test for a class that has a constructor dependency on Func<T>. In order to complete successfully, the function under test needs to create a number of separate objects of type T.
When running in production, Autofac generates a new T every time factory() is called; however, when writing a test using AutoMock, it returns the same object when it is called again.
The test cases below show the difference in behaviour between Autofac and AutoMock. I'd expect both of these to pass, but the AutoMock one fails.
public class TestClass
{
    private readonly Func<TestDep> factory;

    public TestClass(Func<TestDep> factory)
    {
        this.factory = factory;
    }

    public TestDep Get()
    {
        return factory();
    }
}

public class TestDep
{}

[TestMethod()]
public void TestIt()
{
    using var autoMock = AutoMock.GetStrict();
    var testClass = autoMock.Create<TestClass>();
    var obj1 = testClass.Get();
    var obj2 = testClass.Get();
    Assert.AreNotEqual(obj1, obj2);
}

[TestMethod()]
public void TestIt2()
{
    var builder = new ContainerBuilder();
    builder.RegisterSource(new AnyConcreteTypeNotAlreadyRegisteredSource());
    var container = builder.Build();
    var testClass = container.Resolve<TestClass>();
    var obj1 = testClass.Get();
    var obj2 = testClass.Get();
    Assert.AreNotEqual(obj1, obj2);
}
AutoMock (from the Autofac.Extras.Moq package) is primarily useful for setting up complex mocks; that is, when you have a single object with a lot of dependencies and it's really hard to set that object up because it doesn't have a parameterless constructor. Moq doesn't let you set up objects with constructor parameters by default, so having something that fills the gap is useful.
However, the mocks you get from it are treated like any other mock you might get from Moq. When you set up a mock instance with Moq, you're not getting a new one every time unless you also implement the factory logic yourself.
AutoMock is not for mocking Autofac behavior. The Func<T> support where Autofac calls a resolve operation on every call to the Func<T> - that's Autofac, not Moq.
It makes sense for AutoMock to use InstancePerLifetimeScope because, just like setting up mocks with plain Moq, you need to be able to get the mock instance back to configure it and validate against it. It would be much harder if it was new every time.
Obviously there are ways to work around that, and with a non-trivial amount of breaking changes you could probably implement InstancePerDependency semantics in there, but there's really not much value in doing that at this point since that's not really what this is for... and you could always create two different AutoMock instances to get two different mocks.
A much better way to go, in general, is to provide useful abstractions and use Autofac with mocks in the container.
For example, say you have something like...
public class ThingToTest
{
    public ThingToTest(PackageSender sender) { /* ... */ }
}

public class PackageSender
{
    public PackageSender(AddressChecker checker, DataContext context) { /* ... */ }
}

public class AddressChecker { }

public class DataContext { }
If you're trying to set up ThingToTest, you can see how also setting up a PackageSender is going to be complex, and you'd likely want something like AutoMock to handle that.
However, you can make your life easier by introducing an interface there.
public class ThingToTest
{
    public ThingToTest(IPackageSender sender) { /* ... */ }
}

public interface IPackageSender { }

public class PackageSender : IPackageSender { }
By hiding all the complexity behind the interface, you can now mock just IPackageSender using plain Moq (or whatever other mocking framework you like, or even a manual stub implementation). You wouldn't even need to include Autofac in the mix, because you could mock the dependency directly and pass it in.
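For illustration, a test could then be as simple as this (a sketch; what you verify depends on what ThingToTest actually exposes):

[TestMethod]
public void ThingToTest_CanBeBuiltWithAMockedSender()
{
    // No container needed: mock the single interface dependency directly
    var sender = new Mock<IPackageSender>();
    var thing = new ThingToTest(sender.Object);

    // ... exercise 'thing' and verify interactions on 'sender' as needed
}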
Point being, you can design your way into making testing and setup easier, which is why, in the comments on your question, I asked why you were doing things that way (which, at the time of this writing, never did get answered). I would strongly recommend designing things to be easier to test if possible.

How to deal with hard to express requirements for dependencies?

When doing IoC, I (think that I) understand its use for getting the desired application level functionality by composing the right parts, and the benefits for testability. But at the microlevel, I don't quite understand how to make sure that an object gets dependencies injected that it can actually work with. My example for this is a BackupMaker for a database.
To make a backup, the database needs to be exported in a specific format, compressed using a specific compression algorithm, and then packed together with some metadata to form the final binary. Doing all of these tasks seems to be far from a single responsibility, so I ended up with two collaborators: a DatabaseExporter and a Compressor.
The BackupMaker doesn't really care how the database is exported (e.g. using IPC to a utility that comes with the database software, or by doing the right API calls) but it does care a lot about the result, i.e. it needs to be a this-kind-of-database backup in the first place, in the transportable (version agnostic) format, either of which I don't really know how to wrap in a contract. Neither does it care if the compressor does the compression in memory or on disk, but it has to be BZip2.
If I give the BackupMaker the wrong kinds of exporter or compressor, it will still produce a result, but it will be corrupt - it'll look like a backup, but it won't have the format that it should have. It feels like no other part of the system can be trusted to give it those collaborators, because the BackupMaker won't be able to guarantee to do the right thing itself; its job (from my perspective) is to produce a valid backup and it won't if the circumstances ain't right, and worse, it won't know about it. At the same time, even when writing this, it seems to me that I'm saying something stupid now, because the whole point of single responsibilities is that every piece should do its job and not worry about the jobs of others. If it were that simple though, there would be no need for contracts - J.B. Rainsberger just taught me there is. (FYI, I sent him this question directly, but I haven't got a reply yet and more opinions on the matter would be great.)
Intuitively, my favorite option would be to make it impossible to combine classes/objects in an invalid way, but I don't see how to do that. Should I write horrendously specific interface names, like IDatabaseExportInSuchAndSuchFormatProducer and ICompressorUsingAlgorithmXAndParametersY and assume that no classes implement these if they don't behave as such, and then call it a day since nothing can be done about outright lying code? Should I go as far as the mundane task of dissecting the binary format of my database's exports and compression algorithms to have contract tests to verify not only syntax but behavior as well, and then be sure (but how?) to use only tested classes? Or can I somehow redistribute the responsibilities to make this issue go away? Should there be another class whose responsibility it is to compose the right lower level elements? Or am I even decomposing too much?
Rewording
I notice that much attention is given to this very particular example. My question is more general than that, however. Therefore, for the final day of the bounty, I will try to summarize it as follows.
When using dependency injection, by definition, an object depends on other objects for what it needs. In many book examples, the way to indicate compatibility - the capability to provide that need - is by using the type system (e.g. implementing an interface). Beyond that, and especially in dynamic languages, contract tests are used. The compiler (if present) checks the syntax, and the contract tests (that the programmer needs to remember about) verify the semantics. So far, so good. However, sometimes the semantics are still too simple to ensure that some class/object is usable as a dependency to another, or too complicated to be described properly in a contract.
In my example, my class with a dependency on a database exporter considers anything that implements IDatabaseExportInSuchAndSuchFormatProducer and returns bytes as valid (since I don't know how to verify the format). Is very specific naming and such a very rough contract the way to go or can I do better than that? Should I turn the contract test into an integration test? Perhaps (integration) test the composition of all three? I'm not really trying to be generic but am trying to keep responsibilities separate and maintain testability.
What you have discovered in your question is that you have 2 classes that have an implicit dependency on one another. So, the most practical solution is to make the dependency explicit.
There are a number of ways you could do this.
Option 1
The simplest option is to make one service depend on the other, and make the dependent service explicit in its abstraction.
Pros
Few types to implement and maintain.
The compression service could be skipped for a particular implementation just by leaving it out of the constructor.
The DI container is in charge of lifetime management.
Cons
May force an unnatural dependency into a type where it is not really needed.
public class MySqlExporter : IExporter
{
    private readonly IBZip2Compressor compressor;

    public MySqlExporter(IBZip2Compressor compressor)
    {
        this.compressor = compressor;
    }

    public void Export(byte[] data)
    {
        byte[] compressedData = this.compressor.Compress(data);
        // Export implementation
    }
}
Option 2
Since you want to make an extensible design that doesn't directly depend on a specific compression algorithm or database, you can use an Aggregate Service (which implements the Facade Pattern) to abstract the more specific configuration away from your BackupMaker.
As pointed out in the article, you have an implicit domain concept (coordination of dependencies) that needs to be realized as an explicit service, IBackupCoordinator.
Pros
The DI container is in charge of lifetime management.
Leaving compression out of a particular implementation is as easy as passing the data through the method.
Explicitly implements a domain concept that you are missing, namely coordination of dependencies.
Cons
Many types to build and maintain.
BackupMaker must have 3 dependencies instead of 2 registered with the DI container.
Generic Interfaces
public interface IBackupCoordinator
{
    void Export(byte[] data);
    byte[] Compress(byte[] data);
}

public interface IBackupMaker
{
    void Backup();
}

public interface IDatabaseExporter
{
    void Export(byte[] data);
}

public interface ICompressor
{
    byte[] Compress(byte[] data);
}
Specialized Interfaces
Now, to make sure the pieces only plug together one way, you need to make interfaces that are specific to the algorithm and database used. You can use interface inheritance to achieve this (as shown) or you can just hide the interface differences behind the facade (IBackupCoordinator).
public interface IBZip2Compressor : ICompressor
{}
public interface IGZipCompressor : ICompressor
{}
public interface IMySqlDatabaseExporter : IDatabaseExporter
{}
public interface ISqlServerDatabaseExporter : IDatabaseExporter
{}
Coordinator Implementation
The coordinators are what do the job for you. The subtle difference between implementations is that the interface dependencies are explicitly called out so you cannot inject the wrong type with your DI configuration.
public class BZip2ToMySqlBackupCoordinator : IBackupCoordinator
{
    private readonly IMySqlDatabaseExporter exporter;
    private readonly IBZip2Compressor compressor;

    public BZip2ToMySqlBackupCoordinator(
        IMySqlDatabaseExporter exporter,
        IBZip2Compressor compressor)
    {
        this.exporter = exporter;
        this.compressor = compressor;
    }

    public void Export(byte[] data)
    {
        this.exporter.Export(data);
    }

    public byte[] Compress(byte[] data)
    {
        return this.compressor.Compress(data);
    }
}
public class GZipToSqlServerBackupCoordinator : IBackupCoordinator
{
    private readonly ISqlServerDatabaseExporter exporter;
    private readonly IGZipCompressor compressor;

    public GZipToSqlServerBackupCoordinator(
        ISqlServerDatabaseExporter exporter,
        IGZipCompressor compressor)
    {
        this.exporter = exporter;
        this.compressor = compressor;
    }

    public void Export(byte[] data)
    {
        this.exporter.Export(data);
    }

    public byte[] Compress(byte[] data)
    {
        return this.compressor.Compress(data);
    }
}
BackupMaker Implementation
The BackupMaker can now be generic as it accepts any type of IBackupCoordinator to do the heavy lifting.
public class BackupMaker : IBackupMaker
{
    private readonly IBackupCoordinator backupCoordinator;

    public BackupMaker(IBackupCoordinator backupCoordinator)
    {
        this.backupCoordinator = backupCoordinator;
    }

    public void Backup()
    {
        // Get the data from somewhere
        byte[] data = new byte[0];

        // Compress the data
        byte[] compressedData = this.backupCoordinator.Compress(data);

        // Backup the data
        this.backupCoordinator.Export(compressedData);
    }
}
Note that even if your services are used in places other than BackupMaker, this neatly wraps them into one package that can be passed to other services. You don't necessarily need to use both operations just because you inject the IBackupCoordinator service. The only place where you might run into trouble is if you use named instances in the DI configuration across different services.
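As a sketch, the Option 2 wiring could look like this with Microsoft.Extensions.DependencyInjection (the container choice is an assumption, and BZip2Compressor and MySqlDatabaseExporter stand in for concrete classes implementing the specialized interfaces):

public IServiceCollection AddBackupServices(IServiceCollection services)
{
    // The specialized interfaces can only be satisfied by matching implementations,
    // so a mismatched compressor/exporter pair cannot be wired up by accident.
    services.AddScoped<IBZip2Compressor, BZip2Compressor>();
    services.AddScoped<IMySqlDatabaseExporter, MySqlDatabaseExporter>();
    services.AddScoped<IBackupCoordinator, BZip2ToMySqlBackupCoordinator>();
    services.AddScoped<IBackupMaker, BackupMaker>();
    return services;
}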
Option 3
Much like Option 2, you could use a specialized form of Abstract Factory to coordinate the relationship between concrete IDatabaseExporter and IBackupMaker, which will fill the role of the dependency coordinator.
Pros
Few types to maintain.
Only 1 dependency to register in the DI container, making it simpler to deal with.
Moves lifetime management into the BackupMaker service, which makes it impossible to misconfigure DI in a way that will cause a memory leak.
Explicitly implements a domain concept that you are missing, namely coordination of dependencies.
Cons
Leaving compression out of a particular implementation requires you implement the Null object pattern.
The DI container is not in charge of lifetime management and each dependency instance is per request, which may not be ideal.
If your services have many dependencies, it may become unwieldy to inject them through the constructor of the CoordinationFactory implementations.
Interfaces
I am showing the factory implementation with a Release method for each type. This is to follow the Register, Resolve, and Release pattern which makes it effective for cleaning up dependencies. This becomes especially important if 3rd parties could implement the ICompressor or IDatabaseExporter types because it is unknown what kinds of dependencies they may have to clean up.
Do note however, that the use of the Release methods is totally optional with this pattern and excluding them will simplify the design quite a bit.
public interface IBackupCoordinationFactory
{
    ICompressor CreateCompressor();
    void ReleaseCompressor(ICompressor compressor);
    IDatabaseExporter CreateDatabaseExporter();
    void ReleaseDatabaseExporter(IDatabaseExporter databaseExporter);
}

public interface IBackupMaker
{
    void Backup();
}

public interface IDatabaseExporter
{
    void Export(byte[] data);
}

public interface ICompressor
{
    byte[] Compress(byte[] data);
}
BackupCoordinationFactory Implementation
public class BZip2ToMySqlBackupCoordinationFactory : IBackupCoordinationFactory
{
    public ICompressor CreateCompressor()
    {
        return new BZip2Compressor();
    }

    public void ReleaseCompressor(ICompressor compressor)
    {
        IDisposable disposable = compressor as IDisposable;
        if (disposable != null)
        {
            disposable.Dispose();
        }
    }

    public IDatabaseExporter CreateDatabaseExporter()
    {
        return new MySqlDatabaseExporter();
    }

    public void ReleaseDatabaseExporter(IDatabaseExporter databaseExporter)
    {
        IDisposable disposable = databaseExporter as IDisposable;
        if (disposable != null)
        {
            disposable.Dispose();
        }
    }
}
public class GZipToSqlServerBackupCoordinationFactory : IBackupCoordinationFactory
{
    public ICompressor CreateCompressor()
    {
        return new GZipCompressor();
    }

    public void ReleaseCompressor(ICompressor compressor)
    {
        IDisposable disposable = compressor as IDisposable;
        if (disposable != null)
        {
            disposable.Dispose();
        }
    }

    public IDatabaseExporter CreateDatabaseExporter()
    {
        return new SqlServerDatabaseExporter();
    }

    public void ReleaseDatabaseExporter(IDatabaseExporter databaseExporter)
    {
        IDisposable disposable = databaseExporter as IDisposable;
        if (disposable != null)
        {
            disposable.Dispose();
        }
    }
}
BackupMaker Implementation
public class BackupMaker : IBackupMaker
{
    private readonly IBackupCoordinationFactory backupCoordinationFactory;

    public BackupMaker(IBackupCoordinationFactory backupCoordinationFactory)
    {
        this.backupCoordinationFactory = backupCoordinationFactory;
    }

    public void Backup()
    {
        // Get the data from somewhere
        byte[] data = new byte[0];

        // Compress the data
        byte[] compressedData;
        ICompressor compressor = this.backupCoordinationFactory.CreateCompressor();
        try
        {
            compressedData = compressor.Compress(data);
        }
        finally
        {
            this.backupCoordinationFactory.ReleaseCompressor(compressor);
        }

        // Backup the data
        IDatabaseExporter exporter = this.backupCoordinationFactory.CreateDatabaseExporter();
        try
        {
            exporter.Export(compressedData);
        }
        finally
        {
            this.backupCoordinationFactory.ReleaseDatabaseExporter(exporter);
        }
    }
}
Option 4
Create a guard clause in your BackupMaker class that rejects non-matching types, throwing an exception when they do not match.
In C#, you can do this with attributes (which apply custom metadata to the class). Support for this option may or may not exist in other platforms.
Pros
Seamless - no extra types to configure in DI.
The logic for comparing whether types match could be expanded to include multiple attributes per type, if needed. So a single compressor could be used for multiple databases, for example.
100% of invalid DI configurations will cause an error (although you may wish to make the exception specify how to make the DI configuration work).
Cons
Leaving compression out of a particular backup configuration requires you implement the Null object pattern.
The business logic for comparing types is implemented in a static extension method, which makes it testable but impossible to swap with another implementation.
If the design is refactored so that ICompressor or IDatabaseExporter are not dependencies of the same service, this will no longer work.
Custom Attribute
In .NET, an attribute can be used to attach metadata to a type. We make a custom DatabaseTypeAttribute whose database type name we can compare across two different types to ensure they are compatible.
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public class DatabaseTypeAttribute : Attribute
{
    public DatabaseTypeAttribute(string databaseType)
    {
        this.DatabaseType = databaseType;
    }

    public string DatabaseType { get; set; }
}
Concrete ICompressor and IDatabaseExporter Implementations
[DatabaseType("MySql")]
public class MySqlDatabaseExporter : IDatabaseExporter
{
    public void Export(byte[] data)
    {
        // implementation
    }
}

[DatabaseType("SqlServer")]
public class SqlServerDatabaseExporter : IDatabaseExporter
{
    public void Export(byte[] data)
    {
        // implementation
    }
}

[DatabaseType("MySql")]
public class BZip2Compressor : ICompressor
{
    public byte[] Compress(byte[] data)
    {
        // implementation
    }
}

[DatabaseType("SqlServer")]
public class GZipCompressor : ICompressor
{
    public byte[] Compress(byte[] data)
    {
        // implementation
    }
}
Extension Method
We roll the comparison logic into an extension method so every implementation of IBackupMaker automatically includes it.
public static class BackupMakerExtensions
{
    public static bool DatabaseTypeAttributesMatch(
        this IBackupMaker backupMaker,
        Type compressorType,
        Type databaseExporterType)
    {
        // Use .NET Reflection to get the metadata
        DatabaseTypeAttribute compressorAttribute = (DatabaseTypeAttribute)compressorType
            .GetCustomAttributes(attributeType: typeof(DatabaseTypeAttribute), inherit: true)
            .SingleOrDefault();
        DatabaseTypeAttribute databaseExporterAttribute = (DatabaseTypeAttribute)databaseExporterType
            .GetCustomAttributes(attributeType: typeof(DatabaseTypeAttribute), inherit: true)
            .SingleOrDefault();

        // Types with no attribute are considered invalid even if they implement
        // the corresponding interface
        if (compressorAttribute == null) return false;
        if (databaseExporterAttribute == null) return false;

        return compressorAttribute.DatabaseType.Equals(databaseExporterAttribute.DatabaseType);
    }
}
BackupMaker Implementation
A guard clause ensures that 2 classes with non-matching metadata are rejected before the type instance is created.
public class BackupMaker : IBackupMaker
{
    private readonly ICompressor compressor;
    private readonly IDatabaseExporter databaseExporter;

    public BackupMaker(ICompressor compressor, IDatabaseExporter databaseExporter)
    {
        // Guard to prevent against nulls
        if (compressor == null)
            throw new ArgumentNullException("compressor");
        if (databaseExporter == null)
            throw new ArgumentNullException("databaseExporter");

        // Guard to prevent against non-matching attributes
        if (!this.DatabaseTypeAttributesMatch(compressor.GetType(), databaseExporter.GetType()))
        {
            throw new ArgumentException(compressor.GetType().FullName +
                " cannot be used in conjunction with " +
                databaseExporter.GetType().FullName);
        }

        this.compressor = compressor;
        this.databaseExporter = databaseExporter;
    }

    public void Backup()
    {
        // Get the data from somewhere
        byte[] data = new byte[0];

        // Compress the data
        byte[] compressedData = this.compressor.Compress(data);

        // Backup the data
        this.databaseExporter.Export(compressedData);
    }
}
If you decide on one of these options, I would appreciate if you left a comment as to which one you go with. I have a similar situation in one of my projects, and I am leaning toward Option 2.
Response to your Update
Is very specific naming and such a very rough contract the way to go or can I do better than that? Should I turn the contract test into an integration test? Perhaps (integration) test the composition of all three? I'm not really trying to be generic but am trying to keep responsibilities separate and maintain testability.
Creating an integration test is a good idea, but only if you are certain that you are testing the production DI configuration. Although it also makes sense to test it all as a unit to verify it works, it doesn't do you much good for this use case if the code that ships is configured differently than the test.
Should you be specific? I believe I have already given you a choice in that matter. If you go with the guard clause, you don't have to be specific at all. If you go with one of the other options, you have a good compromise between specific and generic.
I know you stated that you are not intentionally trying to be generic, and it is good to draw the line somewhere to ensure a solution is not over-engineered. On the other hand, if the solution has to be redesigned because an interface was not generic enough that is not a good thing either. Extensibility is always a requirement whether it is specified up front or not because you never really know how business requirements will change in the future. So, having a generic BackupMaker is definitely the best way to go. The other classes can be more specific - you just need one seam to swap implementations if future requirements change.
My first suggestion would be to think critically about whether you really need to be that generic: you have a concrete problem to solve; you want to back up a very specific database into a specific format. Is there any benefit to solving the problem for arbitrary databases and arbitrary formats? What you surely get from a generic solution is boilerplate code and increased complexity (people understand concrete problems, not generic ones).
If this applies to you, then my suggestion would be to not let your DatabaseExporter accept interfaces, but only concrete implementations. There are enough modern tools out there that will let you mock concrete classes as well, so testability is not an argument for using interfaces here either.
On the other hand, if you do have to back up several databases with different strategies, then I would probably introduce something like a
class BackupPlan {
    public DatabaseExporter exporter() { /* ... */ }
    public Compressor compressor() { /* ... */ }
}
Then your BackupMaker gets passed one BackupPlan, specifying which database is to be backed up with which compression algorithm.
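In C# terms, the consuming side might look like this (a sketch; the Func-based plan members and byte[] signatures are assumptions standing in for the exporter()/compressor() accessors above):

public class BackupPlan
{
    public Func<byte[]> Exporter { get; init; }
    public Func<byte[], byte[]> Compressor { get; init; }
}

public class BackupMaker
{
    private readonly BackupPlan plan;

    public BackupMaker(BackupPlan plan)
    {
        this.plan = plan;
    }

    public void MakeBackup()
    {
        // The plan fixes one valid exporter/compressor pairing in a single place
        byte[] exported = this.plan.Exporter();
        byte[] compressed = this.plan.Compressor(exported);
        // ... write 'compressed' plus metadata to the backup destination
    }
}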
Your question emphasizes the fact that object composition is very important, and that the entity responsible for such composition (the wiring) has a big responsibility.
Since you already have a generic BackupMaker, I would suggest that you keep it this way, and push the big responsibility of making sure that the right composition of objects (to solve the specific problem) is done in the composition root.
Readers of your application source code (you and your team members), would have a single place (the composition root) to understand how you compose your objects to solve your specific problem by using the generic classes (e.g. BackupMaker).
Put in other words, the composition root is where you decide on the specifics. It's where you use the generic to create the specific.
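As a sketch, reusing the illustrative class names from the other answers:

// The composition root: the only place that knows the concrete types
public static class CompositionRoot
{
    public static BackupMaker CreateBackupMaker()
    {
        // The specific, valid combination is decided here and nowhere else
        ICompressor compressor = new BZip2Compressor();
        IDatabaseExporter exporter = new MySqlDatabaseExporter();
        return new BackupMaker(compressor, exporter);
    }
}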
To reply to the comment:
which should know what about those dependencies?
The composition root needs to know about everything (all the dependencies) since it is creating all the objects in the application and wiring them together. The composition root knows what each piece of the puzzle does and it connects them together to create a meaningful application.
For the BackupMaker, it should only care about just enough to be able to do its single responsibility. In your example, its single (simple) responsibility (as it seems to me) is to orchestrate the consumption of other objects to create a backup.
As long as you are using DI, a class will never be sure that its collaborator will behave correctly, only the composition root will. Consider this simple and extreme example of an IDatabaseExporter implementation (assume that the developer actually gave this class this name, and that he intentionally implemented it this way):
public class StupidDisastrousDatabaseExporter : IDatabaseExporter
{
    public ExportedData Export()
    {
        DoSomethingStupidThatWillDeleteSystemDataAndMakeTheEnterpriseBroke();
        ...
    }

    private void DoSomethingStupidThatWillDeleteSystemDataAndMakeTheEnterpriseBroke()
    {
        //do it
        ...
    }
}
Now, the BackupMaker will never know that it is consuming a stupid and disastrous database exporter; only the composition root does. We can never blame the programmer who wrote the BackupMaker class for this disastrous mistake (or the programmer who designed the IDatabaseExporter contract). But the programmers composing the application in the composition root are to blame if they inject a StupidDisastrousDatabaseExporter instance into the constructor of BackupMaker.
Of course, no one should have written the StupidDisastrousDatabaseExporter class in the first place, but I gave you an extreme example to show you that a contract (interface) can never (and should never) guarantee every aspect about its implementors. It should just say enough.
Is there a way to express IDatabaseExporter in such a way that guarantees that implementors of such interface will not make stupid or disastrous actions? No.
Please note that while the BackupMaker is dealing with contracts (no 100% guarantees), the composition root is actually dealing with concrete implementation classes. This gives it the great power (and thus the great responsibility) to guarantee the composition of the correct object graph.
how do I make sure that I'm composing in a sensible way?
You should create automated end-to-end tests for the object graph created by the composition root. Here you are making sure that the composition root has done its big responsibility of composing the objects in a correct way. Here you can test the exact details that you wanted (like that the backup result was in some exact format/details).
Take a look at this article for an approach to automated testing using the Composition Root.
I believe this may be a problem that occurs when focusing too much on object models, to the exclusion of function composition. Consider the first step in a naive function decomposition (function as in f : a -> b):
exporter: data -> (format, memory), or exception
compressor: memory -> memory, or exception
writer: memory -> side-effect, or exception
backup-maker: (data, exporter, compressor, writer) -> backup-result
So backup-maker, the last function, can be parameterized with those three functions, assuming I've understood your use-case correctly, and provided the three parameters have matching input and output types, e.g. format and memory, regardless of their implementation.
Now, "the guts", or a possible decomposition (read right to left) of backup-maker, with all functions bound, taking data as the argument, and using the composition operator ".":
backup-maker: intermediate-computation . writer . intermediate-computation . compressor . intermediate-computation . exporter
I especially want to note that this architectural model can later be expressed either as object interfaces or as first-class functions, e.g. C++ std::function.
Edit: It can also be refined to terms of generics, where memory is a generic type argument, to provide type safety where wanted. E.g.
backup-maker<type M>: (data, exporter<M>, compressor<M>, writer<M>) -> ..
More information about the technique and benefits of Function Decomposition can be found here:
http://jfeltz.com/posts/2015-08-30-cost-decreasing-software-architecture.html
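To make the idea concrete, here is a rough C# rendering of that decomposition (the string/byte[] types are illustrative; failures are assumed to surface as exceptions):

public static class Backup
{
    // backup-maker: (data, exporter, compressor, writer) -> backup-result
    public static void MakeBackup(
        string data,                      // e.g. a connection string
        Func<string, byte[]> exporter,    // data -> memory, or exception
        Func<byte[], byte[]> compressor,  // memory -> memory, or exception
        Action<byte[]> writer)            // memory -> side-effect, or exception
    {
        // The composition, read right to left: writer . compressor . exporter
        writer(compressor(exporter(data)));
    }
}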
Your requirements seem contradictory:
You want to be specific (allowing only a subset, or only one, of the combinations),
but you also want to be generic by using interfaces, DI, etc.
My advice is to keep things simple (in your case this means: don't try to be generic) until your code evolves.
Only when your code evolves should you refactor in a more generic way. The code below shows a compromise between generic and specific:
public interface ICompressor {
    public byte[] compress(byte[] source); // Note: the return type and parameter type may not be relevant, just for demonstration purposes
}

public interface IExporter {
    public File export(String connectionString); // Note: the return type and parameter type may not be relevant, just for demonstration purposes
}

public final class Bzip2 implements ICompressor {
    @Override
    public final byte[] compress(byte[] source) {
        //TODO
    }
}

public final class MySQL implements IExporter {
    @Override
    public final File export(String connectionString) {
        //TODO
    }
}

public abstract class ABackupStrategy {
    private final ICompressor compressor;
    private final IExporter exporter;

    public ABackupStrategy(final ICompressor compressor, final IExporter exporter) {
        this.compressor = compressor;
        this.exporter = exporter;
    }

    public final void makeBackup() {
        //TODO: compose with exporter and compressor to make your backup
    }
}

public final class MyFirstBackupStrategy extends ABackupStrategy {
    public MyFirstBackupStrategy(final Bzip2 compressor, final MySQL exporter) {
        super(compressor, exporter);
    }
}
With ICompressor and IExporter, you can easily add other compression algorithms and other databases from which to export.
With ABackupStrategy, you can easily define a new allowed combination of concrete compressor/exporter by inheriting from it.
Drawback: I had to make ABackupStrategy abstract without declaring any abstract method, which is in contradiction with OOP principles.

How to Solve Circular Dependency

Hi, I have a problem with the structure of my code: it somehow ends up with a circular dependency. Here is an explanation of what my code looks like:
I have ProjectA, which contains BaseProcessor, and BaseProcessor holds a reference to a class called Structure in ProjectB. Inside BaseProcessor, there is an instance of Structure as a variable.
In ProjectB there are some other classes such as Pricing, Transaction, etc.
Every class in ProjectB has a base class called BaseStructure, i.e. the Structure, Pricing and Transaction classes all inherit from BaseStructure.
Now, in the Pricing and Transaction classes, I want to call a method of the BaseProcessor class from BaseStructure, which causes the circular dependency.
What I have tried: using Unity, but I couldn't figure out how to make it work, because when I try to use a call like
unityContainer.ReferenceType(IBaseProcessor, BaseProcessor)
in BaseStructure, it needs a reference to BaseProcessor, which also causes a circular dependency.
I've also tried creating an IBaseProcessor interface with a declaration of the function I want to call, and letting both BaseProcessor and BaseStructure inherit from it. But how can I call the function in the Pricing and Transaction classes without creating an instance of BaseProcessor?
Can anyone please tell me how to resolve this problem, other than by using reflection?
Any help will be much appreciated. Thanks :)
You could use lazy resolution:
public class Pricing {
    private Lazy<BaseProcessor> proc;

    public Pricing(Lazy<BaseProcessor> proc) {
        this.proc = proc;
    }

    void Foo() {
        this.proc.Value.DoSomething();
    }
}
Note that you don't have to register the Lazy, because Unity will resolve it from the BaseProcessor registration.
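For example (a sketch, assuming a Unity version with built-in Lazy<T> support):

var container = new UnityContainer();

// Register only the concrete type; Unity resolves Lazy<BaseProcessor>
// from this registration without any extra setup.
container.RegisterType<BaseProcessor>();

var pricing = container.Resolve<Pricing>();
// The BaseProcessor instance is created only when proc.Value is first accessed.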
Your DI container can't help solving the circular reference, since it is the dependency structure of the application that prevents objects from being created. Even without a DI container, you can't construct your object graphs without some special 'tricks'.
Do note that in most cases cyclic dependency graphs are a sign of a design flaw in your application, so you might want to consider taking a very close look at your design and see if this can't be solved by extracting logic into separate classes.
But if this is not an option, there are basically two ways of resolving this cyclic dependency graph: either you fall back to property injection, or you postpone resolving the component with a factory, a Func<T>, or, like @onof proposed, a Lazy<T>.
Within these two flavors, there are a lot of possible ways to do this, for instance by falling back to property injection (excuse my C#):
public class BaseStructure {
    public BaseStructure(IDependency d1) { ... }

    // Break the dependency cycle using a property
    public IBaseProcessor Processor { get; set; }
}
This moves the IBaseProcessor dependency from the constructor to a property and allows you to set it after the graph is constructed. Here's an example of an object graph that is built manually:
var structure = new Structure(new SomeDependency());
var processor = new BaseProcessor(structure);

// Set the property after the graph has been constructed.
structure.Processor = processor;
A better option is to hide the property inside your Composition Root. This makes your application design cleaner, since you can keep using constructor injection. Example:
public class BaseStructure {
    // vanilla constructor injection here
    public BaseStructure(IDependency d1, IBaseProcessor processor) { ... }
}

// Defined inside your Composition Root.
private class CyclicDependencyBreakingProcessor : IBaseProcessor {
    public IBaseProcessor WrappedProcessor { get; set; }

    void IBaseProcessor.TheMethod() {
        // forward the call to the real processor.
        this.WrappedProcessor.TheMethod();
    }
}
Now instead of injecting the BaseProcessor into your Structure, you inject the CyclicDependencyBreakingProcessor, which will be further initialized after the construction of the graph:
var cyclicBreaker = new CyclicDependencyBreakingProcessor();
var processor = new BaseProcessor(new Structure(new SomeDependency(), cyclicBreaker));

// Set the property after the graph has been constructed.
cyclicBreaker.WrappedProcessor = processor;
This is basically the same as before, but now the application stays oblivious to the fact that there was a cyclic dependency that needed to be broken.
Instead of property injection, you can also use Lazy<T>, but just as with property injection, it is best to hide this implementation detail inside your Composition Root and not let Lazy<T> values leak into your application, since they just add noise, making your code more complex and harder to test. Besides, the application shouldn't care that the dependency injection is delayed. Just as with Func<T> (and IEnumerable<T>), when injecting a Lazy<T> the dependency is defined with a particular implementation in mind, and we're leaking implementation details. So it's better to do the following:
public class BaseStructure {
    // vanilla constructor injection here
    public BaseStructure(IDependency d1, IBaseProcessor processor) { ... }
}

// Defined inside your Composition Root.
public class CyclicDependencyBreakingProcessor : IBaseProcessor {
    private readonly Lazy<IBaseProcessor> processor;

    public CyclicDependencyBreakingProcessor(Lazy<IBaseProcessor> processor) {
        this.processor = processor;
    }

    void IBaseProcessor.TheMethod() {
        this.processor.Value.TheMethod();
    }
}
With the following wiring:
IBaseProcessor value = null;
var cyclicBreaker = new CyclicDependencyBreakingProcessor(
    new Lazy<IBaseProcessor>(() => value));
var processor = new BaseProcessor(new Structure(new SomeDependency(), cyclicBreaker));

// Set the value after the graph has been constructed.
value = processor;
Up until now I have only shown how to build up the object graph manually. When doing this with a DI container, you usually want to let the container build up the complete graph for you, since this yields a more maintainable Composition Root. But that can make it a bit more tricky to break the cyclic dependencies. In most cases the trick is to register the component that you want to break with a caching lifestyle (basically anything other than transient), for instance Per Web Request. This allows you to get the same instance in a lazy fashion.
Using the last CyclicDependencyBreakingProcessor example, we can create the following Unity registration:
container.RegisterType<BaseProcessor>(new PerRequestLifetimeManager());
container.RegisterType<IStructure, Structure>();
container.RegisterType<IDependency, SomeDependency>();
container.RegisterType<IBaseProcessor>(new InjectionFactory(c =>
    new CyclicDependencyBreakingProcessor(
        new Lazy<IBaseProcessor>(() => c.Resolve<BaseProcessor>()))));

Rhino Mocks Record Playback syntax

Can anyone help out and explain the purpose of the Rhino Mocks 'Record' scope?
I assumed that only the expectations set within that scope would be verified, but it seems that as soon as you create the mock object, Rhino Mocks is in 'record mode', so I'm now unsure of the purpose of the Record scope.
Here's an example I have:
private static void SomeTest()
{
    MockRepository mockRepository = new MockRepository();
    ISomeInterface test = mockRepository.StrictMock<ISomeInterface>();

    test.Bar();

    using (mockRepository.Record())
    {
        Expect.Call<string>(test.GetFoo()).Return("Hello");
    }

    using (mockRepository.Playback())
    {
        test.GetFoo();
    }
}

public interface ISomeInterface
{
    string GetFoo();
    void Bar();
}
This test would fail because there is an expectation that Bar should be called. Is it because I've created a StrictMock and not a DynamicMock?
This test will fail because there is no expectation that Bar() would be called, but it was called.
To answer your question, yes, it is because you have a strict mock. If you change to a DynamicMock, it will ignore everything except the expectations that you actually set. I would highly recommend using DynamicMocks wherever possible, as StrictMocks are actually quite brittle and tend to end up being a lot of hassle.
As for Record/Replay, it is not automatically in Record mode if you're using a concrete MockRepository. It's just the nature of the StrictMock that looks for anything being called that was outside of expectations, no matter when.