Rhino Mocks: how to verify that a stub/mock was never called at all?

Using Rhino Mocks, how can I verify that a mock/stub was never called at all, meaning that no methods were called on it?
I am aware of the AssertWasNotCalled method, but it requires me to name a specific method, and I might have a class with 10 different methods that could be called:
Log.AssertWasNotCalled(x => x.LogAndReportException(null, null), x => x.IgnoreArguments());

You can use a strict mock, although this is a feature that may go away in the future:
var mocks = new MockRepository();
var cm = mocks.StrictMock<ICallMonitor>();
cm.Replay();
cm.HangUp(); // this will cause VerifyAllExpectations to throw
cm.VerifyAllExpectations();
With this syntax, a strict mock only allows explicitly expected calls.

You can use the StrictMock method to create a strict mock - this will fail if any unexpected method call is made. According to Ayende's site this is discouraged, but it sounds like exactly the scenario where it would be useful.

When you are using mocks, you should not assert whether every single call was or was not made. That couples your tests to a particular implementation, making them fragile and a refactoring nightmare.
If I ever ran into this situation I would rethink why I wanted to assert that a dependency was never used.
Obviously, if the dependency is not used anywhere, just remove it. If it is needed for some operations, but all the operations in the dependency are destructive and you want to make sure a particular operation does no harm with them, assert explicitly that the destructive operations were not called and allow the implementation to do whatever it wants with the non-destructive ones (if there are any). This makes your tests more explicit and less fragile.
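As a rough illustration of that last point, here is a minimal Rhino Mocks sketch; the IAccountStore and AccountService names and members are hypothetical, not taken from the question:

using NUnit.Framework;
using Rhino.Mocks;

public interface IAccountStore
{
    bool Exists(int id);     // non-destructive, may be used freely
    void Delete(int id);     // destructive, must never be called here
}

public class AccountService
{
    private readonly IAccountStore store;
    public AccountService(IAccountStore store) { this.store = store; }
    public void ArchiveInactiveAccounts() { /* reads accounts, never deletes them */ }
}

[TestFixture]
public class AccountServiceTests
{
    [Test]
    public void Archiving_never_deletes_anything()
    {
        var store = MockRepository.GenerateStub<IAccountStore>();
        var service = new AccountService(store);

        service.ArchiveInactiveAccounts();

        // Pin down only the destructive operation; the stub ignores everything else.
        store.AssertWasNotCalled(x => x.Delete(Arg<int>.Is.Anything));
    }
}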

Related

Test assertions inside test doubles?

Is it good practice to write an EXPECT(something) inside a test double's (e.g. spy or mock) method, to ensure the test double is used in a specific way during the test?
If not, what would be a preferred solution?
If you wrote a true mock (as per the definition from xUnit Test Patterns), this is exactly what defines that kind of test double. It is set up with expectations about how it will be called and therefore also includes the assertions. That's also how mocking frameworks produce mock objects under the hood. See the definition from xUnit Test Patterns:
How do we implement Behavior Verification for indirect outputs of the SUT?
How can we verify logic independently when it depends on indirect inputs from other software components?
Replace an object the system under test (SUT) depends on with a test-specific object that verifies it is being used correctly by the SUT.
Here, indirect outputs means that you don't want to verify that the method under test returns some value, but that something happens inside the tested method that is behaviour relevant to its callers. For instance, while executing the method, the correct behaviour led to an expected important action, like sending an email or a message somewhere. The mock is the doubled dependency that also verifies itself that this really happened, i.e. that the method under test really called the dependency's method with the expected parameter(s).
A spy, on the other hand, just records things of interest that happened to the doubled dependency. Interrogating the spy about what happened (and sometimes also how often) and then judging whether that was correct by asserting on the expected events is the responsibility of the test itself. So a mock is always also a spy, with the addition of the assertion (expectation) logic. See also Uncle Bob's blog post The Little Mocker for a great explanation of the different types of test doubles.
TL;DR
Yes, the mock includes the expectations (assertions) itself; the spy just records what happened and lets the test itself ask the spy and assert on the expected events.
Mocking frameworks also implement mocks as explained above, since they follow the xUnit Test Patterns definitions.
mock.Verify(p => p.Send(It.IsAny<string>()));
If you look at the above Moq example (C#), you see that the mock object itself is configured to perform the expected verification in the end. The framework makes sure that the mock's verification methods are executed. A hand-written mock would be set up the same way, and then you would call the verification method on the mock object yourself.
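To make the distinction concrete, here is a rough hand-written C# sketch (the IMailSender name and the NUnit-style asserts are my own assumptions): the spy only records and the test does the asserting, while the mock carries the expectation and the verification itself:

using System.Collections.Generic;
using NUnit.Framework;

public interface IMailSender { void Send(string message); }

// Spy: records what happened; the test interrogates it and asserts afterwards.
public class MailSenderSpy : IMailSender
{
    public readonly List<string> Sent = new List<string>();
    public void Send(string message) { Sent.Add(message); }
}

// Mock: set up with the expectation, asserts inside the double, verified at the end.
public class MailSenderMock : IMailSender
{
    private readonly string expected;
    private bool called;

    public MailSenderMock(string expected) { this.expected = expected; }

    public void Send(string message)
    {
        called = true;
        Assert.AreEqual(expected, message);   // the expectation lives inside the mock
    }

    public void Verify()
    {
        Assert.IsTrue(called, "Send was never called");
    }
}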
Generally, you want to put all EXPECT statements inside individual tests to make your code readable.
If you want to enforce certain things on your test stub/spy, it is probably better to use exceptions or static asserts, because your test usually uses them as a black box. If it uses them in an unintended way, your code will either not compile, or it will throw and give you the full stack trace, which also makes your test fail (so you can catch the misuse).
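For example (a C# sketch with made-up IRepository/Customer types), a stub can sabotage any member the test never intends to exercise, so misuse fails loudly:

using System;

public class Customer { public int Id { get; set; } }

public interface IRepository
{
    Customer Load(int id);
    void Save(Customer customer);
}

// Read-only stub: the write path throws immediately, giving a clear failure and stack trace.
public class ReadOnlyRepositoryStub : IRepository
{
    public Customer Load(int id)
    {
        return new Customer { Id = id };
    }

    public void Save(Customer customer)
    {
        throw new NotSupportedException("Unexpected write: this test only reads from the repository.");
    }
}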
For mocks, however, you have full control over the use and you can be very specific about how they are called and used inside each test. For example in Google test, using GMock matchers, you can say something like:
EXPECT_CALL(turtle, Forward(Ge(100)));
which means: expect Forward to be called on the mock object turtle with a parameter greater than or equal to 100; any other value will cause the test to fail.
See this video for more examples on GMock matchers.
It is also very common to check general things in a test fixture (e.g. in SetUp or TearDown). For example, this sample from Google Test enforces that each test finishes within a certain amount of time, and the EXPECT statement is in the teardown rather than in each individual test.
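An NUnit analogue of that idea might look like this (a sketch; the five-second budget is an arbitrary example): the check lives in TearDown, so every test in the fixture gets it for free:

using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class TimedTests
{
    private Stopwatch stopwatch;

    [SetUp]
    public void StartTimer()
    {
        stopwatch = Stopwatch.StartNew();
    }

    [TearDown]
    public void EnforceTimeBudget()
    {
        stopwatch.Stop();
        // A general expectation shared by every test in this fixture.
        Assert.Less(stopwatch.ElapsedMilliseconds, 5000, "test exceeded the 5 second budget");
    }
}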

Ninject MockingKernel with Saboteurs

Is it possible to use MockingKernel so that it automatically generates mock objects that, if interacted with, will throw an exception (a.k.a. saboteurs)?
This is useful when you want to get an object with various dependencies, but you know your code should only be interacting with some of them. If you don't explicitly Bind a dependency (via ToMock, etc.), it should return an object that throws an exception the first time it is interacted with.
This is much better than waiting until the code finishes executing, then writing a bunch of checks to make sure you didn't call into a mock.
Does this already exist?
The answer provided above did not indicate how to set up the Ninject MockingKernel using Moq so that the default behavior is Strict. For the benefit of others, here is what I found.
The Ninject.MockingKernel.Moq namespace provides the class NinjectSettingsExtensions with the methods SetMockBehavior() and GetMockBehavior() that allow you to specify which mocking behavior to use as the global default. I have NOT been able to find any way to override the default for an individual GetMock() request.
using Moq;
using Ninject;
using Ninject.MockingKernel.Moq;

var kernelSettings = new NinjectSettings();
kernelSettings.SetMockBehavior(MockBehavior.Strict);

using (var kernel = new MoqMockingKernel(kernelSettings))
{
    var mockFoo = kernel.GetMock<IFoo>(); // mockFoo.Behavior == MockBehavior.Strict
}
I had been using NSubstitute's implementation of MockingKernel. NSubstitute doesn't really support a "strict" mode and you can't configure it through the NSubstituteMockingKernel class.
However, you can configure Moq to do strict mode. Best of all, the MoqMockingKernel class allows you to change the mock behavior globally. This way, any calls that aren't configured result in an exception being thrown.
This is exactly what I was looking for. The only pain was switching from NSubstitute to Moq.
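For reference, outside of Ninject the same strict behavior can be requested per mock in plain Moq; a small sketch with a made-up IPinger interface:

using Moq;

public interface IPinger { void Ping(string host); }

public class StrictMockExample
{
    public static void Run()
    {
        // Strict mock: any call that has not been explicitly set up throws a MockException.
        var pinger = new Mock<IPinger>(MockBehavior.Strict);
        pinger.Setup(p => p.Ping("localhost"));

        pinger.Object.Ping("localhost");    // allowed, it was set up
        // pinger.Object.Ping("elsewhere"); // would throw immediately

        pinger.Verify(p => p.Ping("localhost"), Times.Once());
    }
}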

Generate a Mock object with a Method which raises an event

I am working on a VB.NET project which requires extensive use of unit tests, but I am having problems mocking one of the classes.
Here is a breakdown of the issue:
Using NUnit and Rhino Mock 3.6
VS2010 & VB.NET
I have an interface which contains a number of methods and an Event.
The class which implements that Interface raises the event when one of the methods is called.
When I mock the object in my tests I can stub methods and create/assert expectations on the methods with no problems.
How do I configure the mock object so that the event is raised when a method is called, so that I can assert that it was raised?
I have found numerous posts using C# which suggest code like this
mockObject.MyEvent += null...
When I try this, 'MyEvent' does not appear in IntelliSense.
I'm obviously not configuring my test/mock correctly but with so few VB.NET examples out there I'm drawing a blank.
Sorry for my lack of VB syntax; I'm a C# guy. Also, I think you should be congratulated for writing tests at all, regardless of test first or test last.
I think your code needs refactoring. It sounds like you have an interface that requires implementations to contain an event, and then another class (which you're testing) depends on this interface. The code under test then executes the event when certain things happen.
The question in my mind is, "Why is it a publicly exposed event?" Why not just a method that implementations can define? I suppose the event could have multiple delegates being added to it dynamically somewhere, but if that's something you really need, then the implementation should figure out how that works. You could replace the event with a pair of methods: HandleEvent([event parameters]) and AddEventListener(TheDelegateType listener). I think the meaning and usage of those should be obvious enough. If the implementation wants to use events internally, it can, but that's an implementation detail that users of the interface should not care about. All they should care about is adding their listener and that all the listeners get called. Then you can just assert that HandleEvent or AddEventListener was called. This is probably the simplest way to make this more testable.
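A rough C# sketch of that refactoring (the IProgressNotifier name and members are invented for illustration); the event becomes an implementation detail behind two easily mockable methods:

using System;

public interface IProgressNotifier
{
    void AddEventListener(Action<int> listener);   // instead of exposing the event itself
    void HandleEvent(int percent);                 // instead of raising the event directly
}

public class ProgressNotifier : IProgressNotifier
{
    private event Action<int> ProgressChanged;     // the event stays private to the implementation

    public void AddEventListener(Action<int> listener)
    {
        ProgressChanged += listener;
    }

    public void HandleEvent(int percent)
    {
        var handler = ProgressChanged;
        if (handler != null) handler(percent);
    }
}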
If you really need to keep the event, then see here for information on mocking delegates. My advice would be to mock a delegate, add it to the event during set up, and then assert it was called. This might also be useful if you need to test that things are added to the event.
Also, I wouldn't rely on Intellisense too much. Mocking is done via some crafty IL code, I believe. I wouldn't count on Intellisense to keep up with members of its objects, especially when you start getting beyond normal methods.

BDD and outside-in approach, how to start with testing

All,
I'm trying to grasp all the outside-in TDD and BDD stuff and would like you to help me to get it.
Let's say I need to implement Config Parameters functionality working as follows:
there are parameters in file and in database
both groups have to be merged into one parameters set
parameters from database should override those from files
Now I'd like to implement this with outside-in approach, and I stuck just at the beginning. Hope you can help me to get going.
My questions are:
What test should I start with? So far I just have something like this:
class ConfigurationAssemblerTest {
    @Test
    public void itShouldResultWithEmptyConfigurationWhenBothSourcesAreEmpty() {
        ConfigurationAssembler assembler = new ConfigurationAssembler();
        // what to put here ?
        Configuration config = assembler.getConfiguration();
        assertTrue(config.isEmpty());
    }
}
I don't know yet what dependencies I'll end up with, how I'm going to write all that stuff, and so on.
What should I put in this test to make it valid? Should I mock something? If so how to define those dependencies?
If you could show me the path to take with this - some plan, some test skeletons, what to do and in what order - that would be super-cool. I know it's a lot of writing, so maybe you can point me to some resources? All the resources about the outside-in approach I've found deal with simple cases with no dependencies, etc.
And two questions about the mocking approach:
If mocking is about interactions and their verification, does that mean there should not be any state assertions in such tests (only mock verifications)?
If we replace something that doesn't exist yet with a mock just for the test, do we replace it later with the real version?
Thanks in advance.
Ok, that's indeed a lot of stuff. Let's start from the end:
Mocking is not only about 'interactions and their verification'; that would be only half of the story. In fact, you use it in two different ways:
Checking whether a certain call was made, and possibly also checking the arguments of the call (this is the 'interactions and verification' part).
Using mocks to replace dependencies of the class-under-test (CUT), possibly setting up return values on the mock objects as required. Here, you use mock objects to isolate the CUT from the rest of the system (so that you can handle the CUT as an isolated 'unit', which effectively runs in a sandbox).
I'd call the first form dynamic or 'interaction-based' unit testing; it uses the mocking framework's call-verification methods. The second is more traditional, 'static' unit testing, which asserts a fact.
You shouldn't ever have the need to 'replace something that doesn't exist yet' (apart from the fact that this is, logically speaking, impossible). If you feel like you need to do this, it's a clear indication that you're trying to take the second step before the first.
Regarding your notion of an 'outside-in approach': to be honest, I've never heard of it before, so it doesn't seem to be a very prominent concept - and obviously not a very helpful one, because it seems to confuse things more than it clarifies them (at least for the moment).
Now onto your first question: (What test should I start with?):
First things first - you need some mechanism to read the configuration values from the file and the database, and this functionality should be encapsulated in separate helper classes (you need, among other things, a clean separation of concerns to do TDD effectively - this is usually totally underemphasized when introducing TDD/BDD). I'd suggest an interface (e.g. IConfigurationReader) with two implementations (one for the file stuff and one for the database, e.g. FileConfigurationReader and DatabaseConfigurationReader). In TDD (not necessarily with a BDD approach) you would also have corresponding test fixtures. These fixtures would cover test cases like 'What happens if the underlying data store contains no/invalid/valid/other special values?'. This is what I'd advise you to start with.
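For example, the reader abstraction could be as small as this (a C#-flavoured sketch, like the pseudo-code further down; the names are only suggestions):

using System.Collections.Generic;

public interface IConfigurationReader
{
    IDictionary<string, string> ReadValues();
}

// One implementation per source; each gets its own focused test fixture.
public class FileConfigurationReader : IConfigurationReader
{
    public IDictionary<string, string> ReadValues()
    {
        // parse the configuration file here
        return new Dictionary<string, string>();
    }
}

public class DatabaseConfigurationReader : IConfigurationReader
{
    public IDictionary<string, string> ReadValues()
    {
        // query the configuration table here
        return new Dictionary<string, string>();
    }
}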
Only then - with the reading mechanism in operation and your ConfigurationAssembler class having the necessary dependencies - would you start to write tests for (and implement) the ConfigurationAssembler class. Your test could then look like this (because I'm a C#/.NET guy and don't know the appropriate Java tools, I'm using pseudo-code here):
class ConfigurationAssemblerTest {
    @Test
    public void itShouldResultWithEmptyConfigurationWhenBothSourcesAreEmpty() {
        IConfigurationReader fileConfigMock = new [Mock of FileConfigurationReader];
        fileConfigMock.[WhenAskedForConfigValues].[ReturnEmpty];
        IConfigurationReader dbConfigMock = new [Mock of DatabaseConfigurationReader];
        dbConfigMock.[WhenAskedForConfigValues].[ReturnEmpty];

        ConfigurationAssembler assembler = new ConfigurationAssembler(fileConfigMock, dbConfigMock);

        Configuration config = assembler.getConfiguration();

        assertTrue(config.isEmpty());
    }
}
Two things are important here:
The two reader objects are injected into the ConfigurationAssembler from outside via its constructor - this technique is called Dependency Injection. It is a very helpful and important architectural principle, which generally leads to a better and cleaner architecture (and greatly helps with unit testing, especially when using mock objects).
The test now asserts exactly what it states: the ConfigurationAssembler returns ('assembles') an empty config when the underlying reading mechanisms each return an empty result set. And because we're using mock objects to provide the config values, the test runs in complete isolation. We can be sure that we're testing only the correct functioning of the ConfigurationAssembler class (namely, its handling of empty values) and nothing else.
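To round this off, here is a sketch of how the ConfigurationAssembler itself could satisfy the requirements (again C#-flavoured, building on the IConfigurationReader sketch above; a plain dictionary stands in for the Configuration type): the readers are injected through the constructor, and database values override file values when merging:

using System.Collections.Generic;

public class ConfigurationAssembler
{
    private readonly IConfigurationReader fileReader;
    private readonly IConfigurationReader databaseReader;

    public ConfigurationAssembler(IConfigurationReader fileReader, IConfigurationReader databaseReader)
    {
        this.fileReader = fileReader;
        this.databaseReader = databaseReader;
    }

    public IDictionary<string, string> GetConfiguration()
    {
        var merged = new Dictionary<string, string>(fileReader.ReadValues());
        foreach (var pair in databaseReader.ReadValues())
        {
            merged[pair.Key] = pair.Value;   // database values win over file values
        }
        return merged;
    }
}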
Oh, and maybe it's easier for you to start with TDD instead of BDD, because BDD is only a subset of TDD and builds on top of the concepts of TDD. So you can only do (and understand) BDD effectively when you know TDD.
HTH!

"Fluent interfaces" that maintain order in the invokation chain

Is there an elegant/convenient way (without creating many "empty" classes, or at least without them being annoying) to have fluent interfaces that enforce call order at the compilation level?
Fluent interfaces:
http://en.wikipedia.org/wiki/Fluent_interface
The idea is to permit this to compile:
var fluentConfig = new ConfigurationFluent().SetColor("blue")
.SetHeight(1)
.SetLength(2)
.SetDepth(3);
and to reject this:
var fluentConfig = new ConfigurationFluent().SetLength(2)
.SetColor("blue")
.SetHeight(1)
.SetDepth(3);
Each step in the chain needs to return an interface or class that only includes the methods that are valid to use after the current step. In other words, if SetColor must come first, ConfigurationFluent should only have a SetColor method. SetColor would then return an object that only has a SetHeight method, and so forth.
In reality, the return values could all be the same instance of ConfigurationFluent but cast to different interfaces explicitly implemented by that class.
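Here is a rough C# sketch of that idea (the INeedHeight/INeedLength/INeedDepth names are invented): each step is only reachable through the interface returned by the previous one, and the class implements the step interfaces explicitly:

public interface INeedHeight { INeedLength SetHeight(int height); }
public interface INeedLength { INeedDepth SetLength(int length); }
public interface INeedDepth  { ConfigurationFluent SetDepth(int depth); }

public class ConfigurationFluent : INeedHeight, INeedLength, INeedDepth
{
    private string color;
    private int height, length, depth;

    // Only SetColor is publicly visible on a fresh instance, so it has to come first.
    public INeedHeight SetColor(string value) { color = value; return this; }

    INeedLength INeedHeight.SetHeight(int value) { height = value; return this; }
    INeedDepth INeedLength.SetLength(int value) { length = value; return this; }
    ConfigurationFluent INeedDepth.SetDepth(int value) { depth = value; return this; }
}

// Compiles only in the intended order:
// var config = new ConfigurationFluent().SetColor("blue").SetHeight(1).SetLength(2).SetDepth(3);
// Starting with SetLength(2) does not compile, because SetLength is not reachable yet.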
I've got a set of three ways of doing this in C++ using essentially a compile time FSM to validate the actions. You can find the code on github.
The short answer is no, there is no elegant or convenient way to enforce an order of construction for a class that properly implements the "Fluent Interface" pattern as you've linked it.
The longer answer starts with playing devil's advocate. If I had dependent properties (i.e. properties that required other properties to be set first), then I could implement them something like this:
method SetLength(int millimeters)
if color is null throw new ValidationException
length = millimeters
return this
end
(NOTE: the above does not map to any real language, it is just pseudocode)
So now I have exceptions to worry about. If I don't obey the rules, the fluent object will throw an exception. Now let's say I have a declaration like yours:
var config = new Fluent().SetLength(2).SetHeight(1).SetDepth(3).SetColor("blue");
When I catch the ValidationException because length depends on the color being set first, how am I, as the user, supposed to know what the correct order is? Even if I had each SetX method on a different line, in most languages the stack trace will just give me the line where the config variable was declared. Furthermore, how am I supposed to keep the rules of this object straight in my head compared to other objects? It is a cacophony of conflicting ideals.
Such precedence checks violate the spirit of the "Fluent Interface" approach, which was designed for conveniently configuring complex objects. You take the convenience away when you attempt to enforce order.
To implement the fluent interface properly and elegantly, there are a couple of guidelines best observed to make consumers of your class thank you (a sketch follows this list):
Provide meaningful default values: this minimizes the need to change values and the chance of creating an invalid object.
Do not perform configuration validation until explicitly asked to do so. That event can be when we use the configuration to create a new fully configured object, or when the consumer explicitly calls a Validate() method.
In any exceptions thrown, make sure the error message is clear and points out any inconsistencies.
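A sketch of what these guidelines can look like in C# (the Configuration type and the defaults are made up): nothing is validated until Build() is called, and the exception names every problem at once:

using System;
using System.Collections.Generic;

public class Configuration
{
    public Configuration(string color, int height, int length, int depth) { /* store the values */ }
}

public class ConfigurationFluent
{
    private string color = "black";                 // meaningful default (guideline 1)
    private int height = 1, length = 1, depth = 1;  // meaningful defaults (guideline 1)

    public ConfigurationFluent SetColor(string value) { color = value; return this; }
    public ConfigurationFluent SetHeight(int value)   { height = value; return this; }
    public ConfigurationFluent SetLength(int value)   { length = value; return this; }
    public ConfigurationFluent SetDepth(int value)    { depth = value; return this; }

    // Validation happens only here, when explicitly asked for (guideline 2),
    // and the message points out every inconsistency found (guideline 3).
    public Configuration Build()
    {
        var errors = new List<string>();
        if (string.IsNullOrEmpty(color)) errors.Add("color must be set");
        if (height <= 0 || length <= 0 || depth <= 0) errors.Add("dimensions must be positive");
        if (errors.Count > 0) throw new InvalidOperationException(string.Join("; ", errors));

        return new Configuration(color, height, length, depth);
    }
}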
Maybe the compiler could check that methods are called in the same order as they are defined; this could be a new feature for compilers.
Or maybe by means of annotations, something like:
class ConfigurationFluent {
    @Called-Before SetHeight
    SetColor(..) {}

    @Called-After SetColor
    SetHeight(..) {}

    @Called-After SetHeight
    SetLength(..) {}

    @Called-After SetLength
    SetDepth(..) {}
}
You can implement a state machine of valid operation sequences: on each method call, consult the state machine and verify that the operation is allowed in the current state, or throw an exception if it is not.
I would not suggest this approach for configurations, though; it can get very messy and unreadable.
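For completeness, a minimal C# sketch of that state-machine idea (names invented; this is a runtime check, unlike the compile-time approach shown earlier): an enum tracks the last completed step, and each setter verifies the transition before proceeding:

using System;

public class OrderedFluentConfig
{
    private enum Step { Start, Color, Height, Length, Depth }
    private Step current = Step.Start;

    private void Transition(Step expected, Step next)
    {
        if (current != expected)
            throw new InvalidOperationException(
                "Call order violated: expected to be at step " + expected + " but was at " + current);
        current = next;
    }

    public OrderedFluentConfig SetColor(string value) { Transition(Step.Start, Step.Color);   return this; }
    public OrderedFluentConfig SetHeight(int value)   { Transition(Step.Color, Step.Height);  return this; }
    public OrderedFluentConfig SetLength(int value)   { Transition(Step.Height, Step.Length); return this; }
    public OrderedFluentConfig SetDepth(int value)    { Transition(Step.Length, Step.Depth);  return this; }
}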