Can someone help me set up Ninject 2 with Log4net?

I've been (happily) using Ninject for a while now with some basic scenarios, and would like to give it control of my logging. I noted the existence of the Ninject.Extensions.Logging namespace, and would like to use it, but I'm running into two issues:
I want the logger to be initialized with the type of the class that uses it (as if I had called LogManager.GetLogger with GetCurrentMethod().DeclaringType).
I want to be able to easily mock or "nullify" the logger for unit testing (i.e., I don't want the logger to actually do anything), without running into NullReferenceExceptions for an uninitialized logger.
Now, I know there are some questions (and even answers) around here, but I couldn't seem to find any that pointed me in the right direction.
I'd appreciate any help (even a "you bonehead, it's here!" link to something I should have noticed).

The first point is the default behavior of the extension: the injected logger is created for the type of the class it is injected into.
For the second point, don't use Ninject to create the object under test in your unit tests. Create an instance manually and pass whatever you want for the logger.
It's best to have a look at the unit tests: https://github.com/ninject/ninject.extensions.logging/blob/master/src/Ninject.Extensions.Logging.Tests/Infrastructure/CommonTests.cs
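For illustration, a minimal sketch covering both points, assuming the Log4net flavor of the extension (Ninject.Extensions.Logging.Log4net) is referenced; OrderService is a made-up consumer:

using Ninject;
using Ninject.Extensions.Logging;

public class OrderService
{
    private readonly ILogger logger;

    // The extension injects a logger scoped to the declaring type
    // (here OrderService), much like calling
    // LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType).
    public OrderService(ILogger logger)
    {
        this.logger = logger;
    }

    public void PlaceOrder()
    {
        logger.Info("Placing an order");
    }
}

public static class Program
{
    public static void Main()
    {
        // In production, the kernel picks up the Log4net logging module
        // from the extension assembly sitting in the application directory.
        using (var kernel = new StandardKernel())
        {
            var service = kernel.Get<OrderService>();
            service.PlaceOrder();
        }
    }
}

For tests, skip the kernel entirely: construct new OrderService(stubLogger) with a hand-rolled or mocked ILogger, so nothing is null and nothing actually logs.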

Related

Generate a Mock object with a Method which raises an event

I am working on a VB.NET project which requires extensive use of unit tests, but I am having problems mocking one of the classes.
Here is a breakdown of the issue:
Using NUnit and Rhino Mocks 3.6
VS2010 & VB.NET
I have an interface which contains a number of methods and an Event.
The class which implements that Interface raises the event when one of the methods is called.
When I mock the object in my tests I can stub methods and create/assert expectations on the methods with no problems.
How do I configure the mock object so that when a method is called the event is raised, so that I can assert that it was raised?
I have found numerous posts using C# which suggest code like this
mockObject.MyEvent += null...
When I try this, 'MyEvent' does not appear in IntelliSense.
I'm obviously not configuring my test/mock correctly but with so few VB.NET examples out there I'm drawing a blank.
Sorry for my lack of VB syntax; I'm a C# guy. Also, I think you should be congratulated for writing tests at all, regardless of test first or test last.
I think your code needs refactoring. It sounds like you have an interface that requires implementations to contain an event, and then another class (which you're testing) depends on this interface. The code under test then executes the event when certain things happen.
The question in my mind is, "Why is it a publicly exposed event?" Why not just a method that implementations can define? I suppose the event could have multiple delegates being added to it dynamically somewhere, but if that's something you really need, then the implementation should figure out how that works. You could replace the event with a pair of methods: HandleEvent([event parameters]) and AddEventListener(TheDelegateType listener). I think the meaning and usage of those should be obvious enough. If the implementation wants to use events internally, it can, but I feel that's an implementation detail that users of the interface should not care about. All they should care about is adding their listener and that all the listeners get called. Then you can just assert that HandleEvent or AddEventListener was called. This is probably the simplest way to make the code more testable.
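To make that concrete, here's a rough C# sketch of that refactoring; INotifier, ClassUnderTest, and Start are made-up names, and the VB.NET translation should be mechanical:

using System;
using NUnit.Framework;
using Rhino.Mocks;

// Hypothetical interface after the refactoring: the event is replaced
// by an explicit registration method that implementations control.
public interface INotifier
{
    void AddEventListener(EventHandler listener);
}

public class ClassUnderTest
{
    private readonly INotifier notifier;

    public ClassUnderTest(INotifier notifier)
    {
        this.notifier = notifier;
    }

    public void Start()
    {
        // Register interest instead of wiring up an event directly.
        notifier.AddEventListener((sender, args) => { /* react */ });
    }
}

[TestFixture]
public class ClassUnderTestTests
{
    [Test]
    public void StartRegistersAListener()
    {
        var notifier = MockRepository.GenerateMock<INotifier>();
        var sut = new ClassUnderTest(notifier);

        sut.Start();

        // Assert the interaction rather than the event itself.
        notifier.AssertWasCalled(
            n => n.AddEventListener(Arg<EventHandler>.Is.Anything));
    }
}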
If you really need to keep the event, then see here for information on mocking delegates. My advice would be to mock a delegate, add it to the event during set up, and then assert it was called. This might also be useful if you need to test that things are added to the event.
Also, I wouldn't rely on IntelliSense too much here. Mocking is done via some crafty IL code, I believe, and I wouldn't count on IntelliSense to keep up with the members of mocked objects, especially once you get beyond normal methods.

Instantiation of System.ServiceModel.Description.WsdlContractConversionContext class

Because of a project requirement, I need to instantiate WsdlContractConversionContext, which does not have a public constructor.
Is there any work around to achieve this?
WsdlContractConversionContext is a member of the System.ServiceModel.Description namespace.
Note:
The exact requirement is that I am implementing IWsdlExportExtension.ExportContract and IWsdlImportExtension.ImportContract, and to unit test this implementation I need an instance of WsdlContractConversionContext.
There are basically two ways to do that: you can either use reflection to call the non-public constructor of the class (making sure you're passing appropriate parameters to it), or you can let WCF create it for you and use it wherever you need. The WsdlContractConversionContext is passed as one of the parameters to both IWsdlExportExtension.ExportContract and IWsdlImportExtension.ImportContract, so you'd need to implement one of the two interfaces (exporting is usually easier, since you won't need to fiddle with WSDL-consuming tools) and force the interface to be called (you may need to hit the service metadata endpoint for that).
The post at http://blogs.msdn.com/b/carlosfigueira/archive/2011/10/06/wcf-extensibility-wsdl-export-extension.aspx has an example of an implementation of a WSDL export extension.
Update following the edit in the question: many parts of WCF are notoriously hard to unit test. If you can't use WCF itself to create the instance, the only alternative you have is reflection. To create an instance of the conversion context class you need an instance of ContractDescription (which you can create for your contract, but it isn't easy) and a PortType, which is even harder. I'm afraid that unit testing your implementation of the WSDL export/import extension may not be worth the effort.
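If you do decide to go the reflection route, a sketch might look like the following. The non-public constructor signature (ContractDescription, PortType) is an assumption based on the members the context exposes; verify it with a decompiler before relying on it. IMyService is a stand-in for your real contract:

using System;
using System.Reflection;
using System.ServiceModel;
using System.ServiceModel.Description;
using System.Web.Services.Description; // the WSDL PortType

[ServiceContract]
public interface IMyService // stand-in for your real service contract
{
    [OperationContract]
    void Ping();
}

public static class ConversionContextFactory
{
    public static WsdlContractConversionContext Create()
    {
        ContractDescription contract =
            ContractDescription.GetContract(typeof(IMyService));
        PortType portType = new PortType { Name = contract.Name };

        // Assumed internal constructor: (ContractDescription, PortType).
        return (WsdlContractConversionContext)Activator.CreateInstance(
            typeof(WsdlContractConversionContext),
            BindingFlags.Instance | BindingFlags.NonPublic,
            null,
            new object[] { contract, portType },
            null);
    }
}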

How to JMock a Singleton

My application has this structure: there's a RepositoryFacade (a Singleton) that uses many other [Object]Repository singletons (UserRepository, etc.).
Now I'd like to test it, mocking the [Object]Repositories. To do that I made each [Object]Repository implement an interface, and then I tried:
final IUserRepository mockIUserRepository = context.mock(IUserRepository.class);
RepositoryFacade.getInstance().setUserRepository(mockIUserRepository);
final User testUser = new User("username");

// expectations
context.checking(new Expectations() {{
    oneOf(mockIUserRepository).save(testUser);
}});

// execute
RepositoryFacade.getInstance().save(testUser);
And in RepositoryFacade I added:
public IUserRepository userRepository = UserRepository.getInstance();
But if I try to run the test, I obtain:
java.lang.SecurityException: class "org.hamcrest.TypeSafeMatcher"'s signer
information does not match signer information of other classes in the same
package
P.S. Originally my RepositoryFacade did not have an IUserRepository field; I always called UserRepository.getInstance().what_i_want() directly. I introduced the field to try to use JMock, so if it isn't needed I'll be glad to remove that bad use of the Singleton pattern.
Thanks,
Andrea
The error you're getting suggests that you have a classloading issue with the org.hamcrest package rather than any issue with your singletons. See this question for more on this exception and this one for the particular problem with hamcrest and potential solutions.
Check your classpath to make sure you're not including conflicting hamcrest code from multiple jars. If you find hamcrest in multiple jars, this may be corrected by something as simple as changing their order in your classpath.
JUnit itself comes in two versions; one may include an old version of hamcrest. Switching to the one that does not include hamcrest may also fix your problem.
If you can find a way to do it, it would be better in the long run to get rid of the singletons altogether and instead do dependency injection using something like Spring or Guice.
But what you're doing should work, once you deal with the classloading, and it's a reasonable approach to dealing with singletons in a testing context.

BDD and outside-in approach, how to start with testing

All,
I'm trying to grasp all the outside-in TDD and BDD stuff and would like you to help me to get it.
Let's say I need to implement Config Parameters functionality working as follows:
there are parameters in file and in database
both groups have to be merged into one parameters set
parameters from database should override those from files
Now I'd like to implement this with outside-in approach, and I stuck just at the beginning. Hope you can help me to get going.
My questions are:
What test should I start with? I just have something like this:
class ConfigurationAssemblerTest {
    @Test
    public void itShouldResultWithEmptyConfigurationWhenBothSourcesAreEmpty() {
        ConfigurationAssembler assembler = new ConfigurationAssembler();
        // what to put here ?
        Configuration config = assembler.getConfiguration();
        assertTrue(config.isEmpty());
    }
}
I don't know yet what dependencies I'll end up with. I don't know how I'm going to write all that stuff yet, and so on.
What should I put in this test to make it valid? Should I mock something? If so how to define those dependencies?
If you could please show me the path to go with this, write some plan, some tests skeletons, what to do and in what order it'd be super-cool. I know it's a lot of writing, so maybe you can point me to any resources? All the resources about outside-in approach I've found were about simple cases with no dependencies etc.
And two questions to mocking approach.
if mocking is about interactions and their verification, does it mean that there should not be state assertions in such tests (only mock verifications) ?
if we replace something that doesn't exist yet with mock just for test, do we replace it later with real version?
Thanks in advance.
Ok, that's indeed a lot of stuff. Let's start from the end:
Mocking is not only about 'interactions and their verification', this would be only one half of the story. In fact, you're using it in two different ways:
Checking, if a certain call was made, and eventually also checking the arguments of the call (this is the 'interactions and verification' part).
Using mocks to replace dependencies of the class-under-test (CUT), eventually setting up return values on the mock objects as required. Here, you use mock objects to isolate the CUT from the rest of the system (so that you can handle the CUT as an isolated 'unit', which sort of runs in a sandbox).
I'd call the first form dynamic or 'interaction-based' unit testing; it uses the mocking framework's call-verification methods. The second one is more traditional, 'static' unit testing, which asserts a fact.
You shouldn't ever have the need to 'replace something that doesn't exist yet' (apart from the fact that this is - logically seen - completely impossible). If you feel like you need to do this, then this is a clear indication that you're trying to make the second step before the first.
Regarding your notion of 'outside-in approach': To be honest, I've never heard of this before, so it doesn't seem to be a very prominent concept - and obviously not a very helpful one, because it seems to confuse things more than clarifying them (at least for the moment).
Now onto your first question: (What test should I start with?):
First things first - you need some mechanism to read the configuration values from the file and the database, and this functionality should be encapsulated in separate helper classes (you need, among other things, a clean separation of concerns to do TDD effectively - this is usually totally underemphasized when introducing TDD/BDD). I'd suggest an interface (e.g. IConfigurationReader) which has two implementations (one for the file stuff and one for the database, e.g. FileConfigurationReader and DatabaseConfigurationReader). In TDD (not necessarily with a BDD approach) you would also have corresponding test fixtures. These fixtures would cover test cases like 'What happens if the underlying data store contains no/invalid/valid/other special values?'. This is what I'd advise you to start with.
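A sketch of that abstraction, in C# to match the rest of this answer (the type names come from the paragraph above; the ReadValues signature is my assumption):

using System.Collections.Generic;
using System.IO;

public interface IConfigurationReader
{
    // Returns raw key/value pairs from one configuration source.
    IDictionary<string, string> ReadValues();
}

// One of the two implementations; DatabaseConfigurationReader would
// implement the same interface on top of a database query.
public class FileConfigurationReader : IConfigurationReader
{
    private readonly string path;

    public FileConfigurationReader(string path)
    {
        this.path = path;
    }

    public IDictionary<string, string> ReadValues()
    {
        var values = new Dictionary<string, string>();
        foreach (var line in File.ReadAllLines(path))
        {
            // Assumes a simple "key=value" line format.
            var parts = line.Split(new[] { '=' }, 2);
            if (parts.Length == 2)
                values[parts[0].Trim()] = parts[1].Trim();
        }
        return values;
    }
}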
Only then - with the reading mechanism in operation and your ConfigurationAssembler class having the necessary dependencies - would you start to write tests for (and implement) the ConfigurationAssembler class. Your test could then look like this (because I'm a C#/.NET guy, I don't know the appropriate Java tools, so I'm using pseudo-code here):
class ConfigurationAssemblerTest {
    @Test
    public void itShouldResultWithEmptyConfigurationWhenBothSourcesAreEmpty() {
        IConfigurationReader fileConfigMock = new [Mock of FileConfigurationReader];
        fileConfigMock.[WhenAskedForConfigValues].[ReturnEmpty];
        IConfigurationReader dbConfigMock = new [Mock of DatabaseConfigurationReader];
        dbConfigMock.[WhenAskedForConfigValues].[ReturnEmpty];
        ConfigurationAssembler assembler = new ConfigurationAssembler(fileConfigMock, dbConfigMock);
        Configuration config = assembler.getConfiguration();
        assertTrue(config.isEmpty());
    }
}
Two things are important here:
The two reader objects are injected into the ConfigurationAssembler from outside via its constructor - this technique is called Dependency Injection. It is a very helpful and important architectural principle, which generally leads to a better and cleaner architecture (and greatly helps in unit testing, especially when using mock objects).
The test now asserts exactly what it states: the ConfigurationAssembler returns ('assembles') an empty config when the underlying reading mechanisms return an empty result set. And because we're using mock objects to provide the config values, the test runs in complete isolation. We can be sure that we're testing only the correct functioning of the ConfigurationAssembler class (namely, its handling of empty values) and nothing else.
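For concreteness, here's how the pseudo-code above might look in C# with a mocking library such as Moq, using the IConfigurationReader sketched earlier; ConfigurationAssembler and Configuration are the classes you'd be driving out with this test, with GetConfiguration and IsEmpty adapted from the question's names:

using System.Collections.Generic;
using Moq;
using NUnit.Framework;

[TestFixture]
public class ConfigurationAssemblerTest
{
    [Test]
    public void ItShouldResultWithEmptyConfigurationWhenBothSourcesAreEmpty()
    {
        // Both readers report an empty source.
        var fileConfigMock = new Mock<IConfigurationReader>();
        fileConfigMock.Setup(r => r.ReadValues())
                      .Returns(new Dictionary<string, string>());

        var dbConfigMock = new Mock<IConfigurationReader>();
        dbConfigMock.Setup(r => r.ReadValues())
                    .Returns(new Dictionary<string, string>());

        var assembler = new ConfigurationAssembler(
            fileConfigMock.Object, dbConfigMock.Object);

        Configuration config = assembler.GetConfiguration();

        Assert.IsTrue(config.IsEmpty);
    }
}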
Oh, and maybe it's easier for you to start with TDD instead of BDD, because BDD is only a subset of TDD and builds on top of the concepts of TDD. So you can only do (and understand) BDD effectively when you know TDD.
HTH!

DLL Reflection?

Is something like this possible? If so, could you point me in the right direction for learning how?
applicationx tries to run the method start() in dll_one.dll
dll_one.dll runs the command
applicationx tries to run the method run() in dll_one.dll
dll_one.dll doesn't have a method run() and hasn't prepared for such an occurrence.
dll_one.dll asks dll_two.dll if it has a run()
dll_two runs run()
Basically, I want it so that if dllA doesn't have a method the application is looking for, it asks dllB. This assumes that ApplicationX and dllB don't know anything about dllA, and that dllA kind of just appeared out of nowhere (I want to add DLLs dynamically, like a patch to my application, without having to rewrite ALL of the methods, properties, etc. in the DLL and have everything else just routed to the old DLL).
Any ideas? Keep in mind, I'm using vb.net so a .net reference is appreciated.
It seems like you're asking for a plug-in architecture for your app (except that "patch" part is bothering me). If so, you can try MEF, which solves this exact problem.
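For example, a minimal MEF sketch (C# shown for brevity; ICommand, RunCommand, and the "plugins" folder are illustrative choices, not part of MEF itself):

using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

// The contract compiled into the host application.
public interface ICommand
{
    string Name { get; }
    void Run();
}

// Lives in a plug-in assembly (your dll_two.dll); the host never
// references it at compile time.
[Export(typeof(ICommand))]
public class RunCommand : ICommand
{
    public string Name { get { return "run"; } }
    public void Run() { Console.WriteLine("run() from a plug-in"); }
}

public static class PluginHost
{
    // Scans a folder for assemblies exporting ICommand and picks one by name.
    public static ICommand Find(string name)
    {
        var catalog = new DirectoryCatalog("plugins");
        var container = new CompositionContainer(catalog);
        foreach (var command in container.GetExportedValues<ICommand>())
        {
            if (command.Name == name)
                return command;
        }
        return null; // no plug-in provides this "method"
    }
}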
The specific thing you ask for isn't possible. You can't have a non-existent method call automatically re-routed to a different dll. You can't "run the method run() in dll_one.dll" unless you've compiled that code, and it won't compile if the method doesn't exist. You also can't compile code against dllB and then drop dllA in and have it intercept method calls. Reflection could conceivably solve part of your problem, but you'd not want to base your code around calling all methods by reflection - it'd be horrendously unperformant and not very maintainable.
As Anton suggests, a plugin approach might work. However, this would rely on you being able to specify up-front the interface for your plugin, which sounds like it would contradict your original requirement.
Another problem: if you'd not deployed dllA until later, how would your ApplicationX know to call method start() in dll_one.dll anyway? You'd surely need to re-deploy at least the base application for that part to work.
These kinds of problems are often best solved by working to a more specific set of requirements: what functionality are you likely to want to extend or change in the future? Could you support a common set of interfaces that allow extensibility via plugins, or could you redeploy encapsulated chunks of your application with new functionality? Is there UI involved, or is this just to change back-end logic? Questions like these could help to suggest more viable solutions.