It seems like all of the Mockito examples I have looked at, "fake" the behavior of the object they are testing.
If I have an object that has the method:
public int add(int a, int b) { return a + b; }
I would simply use JUnit to assert whether two integers passed in would result in the correct output.
With all of the examples I've seen with Mockito, people are doing things like when(object.add(2, 3)).thenReturn(5). What's the point of using this testing framework if all you're doing is telling the object how to act on the test side, rather than on the object side?
Mocking frameworks are good for testing a system by mocking that system's dependencies; you wouldn't use a mocking framework to mock or stub add if you are testing add. Let's break this out a bit further:
Testing add
A mocking framework is not good for testing your add method above. There are no dependencies other than the very stable and extremely-well-tested JVM and JRE.
public int add(int a, int b) { return a + b; }
However, it might be good for testing your add method if it were to interact with another object like this:
public int add(int a, int b, AdditionLogger additionLogger) {
int total = a + b;
additionLogger.log(a, b, total);
return total;
}
If AdditionLogger isn't written yet, or if it's written to communicate with a real server or other external process, then a mocking framework would absolutely be useful: it would help you come up with a fake implementation of AdditionLogger so you could test your real method's interactions with it.
@Test public void yourTest() {
assertEquals(5, yourObject.add(2, 3, mockAdditionLogger));
verify(mockAdditionLogger).log(2, 3, 5);
}
Testing add's consumers
Coincidentally, a mocking framework is also unlikely to be good for testing consumers of your method above. After all, there is nothing particularly dangerous about a call to add, so assuming it exists you can probably call the real one in an external test. 2 + 3 will always equal 5, and there are no side effects from your calculation, so there's very little to be gained by mocking or verifying.
However, let's give your object another method that adds two numbers with a little bit of random noise:
public int addWithNoise(int a, int b) {
int offset = new Random().nextInt(11) - 5; // range: [-5, 5]
int total = a + b + offset;
return total;
}
With this, it may be very hard for you to write a robust assert-style test against a consumer of this method; after all, the result it gets back is going to be somewhat random! Instead, to make an assert-style test easier, we can stub out addWithNoise to make some of this more predictable.
@Test public void yourTest() {
when(yourObjectMock.addWithNoise(2, 3)).thenReturn(6);
// You're not asserting/verifying the action you stub, you're making the dependency
// *fast and reliable* so you can check the logic of *the real method you're testing*.
assertEquals(600, systemUnderTestThatConsumesYourObject.doThing(yourObjectMock));
}
In summary
It can be easier to explain mocking and mock syntax when interacting with well-known operations like add or well-known interfaces like List, but those examples are not usually realistic cases where mocks are needed. Remember that mocking is only really useful for simulating the dependencies around your system-under-test when you can't use real ones.
The goal of unit testing is to test functionality without connecting to any external systems. If you are connecting to an external system, that is considered integration testing.
While unit testing, a system may need data that would otherwise be retrieved from external systems such as a database, web/REST services, or APIs during system/integration testing. In such scenarios, we supply mock/fake data to test business rules or any other form of logic.
With this in place, unit tests ensure that a particular unit of code works with a given set of fake/mocked data, and should behave in a similar manner in the integrated environment.
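For instance, here is a minimal sketch with JUnit 4 and Mockito; the repository, service, and business rule below are invented purely for illustration. The external dependency is mocked so the rule can be tested without touching any real system.
import static org.mockito.Mockito.*;
import static org.junit.Assert.assertFalse;
import org.junit.Test;

public class OrderServiceTest {

    // Hypothetical dependency that would normally hit a real database or REST service.
    interface CustomerRepository {
        int countUnpaidInvoices(long customerId);
    }

    // Hypothetical business rule under test: a customer with unpaid invoices may not order.
    static class OrderService {
        private final CustomerRepository repository;
        OrderService(CustomerRepository repository) { this.repository = repository; }
        boolean mayPlaceOrder(long customerId) {
            return repository.countUnpaidInvoices(customerId) == 0;
        }
    }

    @Test
    public void customerWithUnpaidInvoicesMayNotOrder() {
        CustomerRepository fakeRepository = mock(CustomerRepository.class);
        when(fakeRepository.countUnpaidInvoices(42L)).thenReturn(3); // fake external data

        OrderService service = new OrderService(fakeRepository);

        assertFalse(service.mayPlaceOrder(42L)); // business rule verified with no external system
    }
}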
First of all, I'm a beginner in unit tests. For my tests I want to use NSubstitute, so I read the tutorial on the website and also the mock comparison from Richard Banks. Both of them test against interfaces, not against classes. The statement is "Generally this [substituted] type will be an interface, but you can also substitute classes in cases of emergency."
Now I'm wondering about the purpose of testing against interfaces. Here is the example interface from the NSubstitute website (please note that I have converted the C# code to VB.NET):
Public Interface ICalculator
Function Add(a As Double, b As Double) As Double
Property Mode As String
Event PoweringUp As EventHandler
End Interface
And here is the unit test from the website (using the NUnit framework):
<Test>
Sub ReturnValue_For_Methods()
Dim calculator = Substitute.For(Of ICalculator)()
calculator.Add(1, 2).Returns(3)
Assert.AreEqual(calculator.Add(1, 2), 3)
End Sub
OK, that works and the unit test passes. But what sense does this make? It does not test any code. The Add method could contain any errors, which will not be detected when testing against the interface - like this:
Public Class Calculator
Implements ICalculator
Public Function Add(a As Double, b As Double) As Double Implements ICalculator.Add
Return 1 / 0
End Function
...
End Class
The Add method performs a division by zero, so the unit test should fail - but because the test runs against the interface ICalculator, it passes.
Could you please help me understand this? What sense does it make to test the interface rather than the code?
Thanks in advance
Michael
The idea behind mocking is to isolate a class we are testing from its dependencies. So we don't mock the class we are testing, in this case Calculator, we mock an ICalculator when testing a class that uses an ICalculator.
A small example is when we want to test how something interacts with a database, but we don't want to use a real database for some quick tests. (Please excuse the C#.)
[Test]
public void SaveTodoItemToDatabase() {
var substituteDb = Substitute.For<IDatabase>();
var todoScreen = new TodoViewModel(substituteDb);
todoScreen.Item = "Read StackOverflow";
todoScreen.CurrentUser = "Anna";
todoScreen.Save();
substituteDb.Received().SaveTodo("Read StackOverflow", "Anna");
}
The idea here is that we've separated the TodoViewModel from the details of saving to the database. We don't want to worry about configuring a database, or getting a connection string, or having data from previous test runs interfering with future test runs. Testing with a real database can be very valuable, but in some cases we just want to test a smaller unit of functionality. Mocking is one way of doing this.
For the real app, we'll create a TodoViewModel with a real implementation of IDatabase, and provided that implementation follows the expected contract of the interface, we can have a reasonable expectation that it will work.
Hope this helps.
Update in response to comment
The test for TodoViewModel assumes the implementation of IDatabase works, so we can focus on that class's logic. This means we'll probably want a separate set of tests for implementations of IDatabase. Say we have a SqlServerDb implementation; then we can have some tests (probably against a real database) that check it does what it promises. In those tests we'll no longer be mocking the database interface, because that's what we're testing.
Another thing we can do is have "contract tests" which we can apply to any IDatabase implementation. For example, we could have a test that says for any implementation, saving an item then loading it up again should return the same item. We can then run those tests against all implementations, SqlDb, InMemoryDb, FileDb etc. In this way we can state our assumptions about the dependencies we're mocking, then check that the actual implementations meet our assumptions.
I have a sample (incomplete) class like:
class ABC {
public:
    void login();
    void query_users();
    // other methods
private:
    // member data
};
This class should be used in a way that login needs to be called first, and only then can other methods like query_users, etc. be called. Login sets some private member data for the other methods to use. Is there any simpler way to achieve this other than calling a function that checks whether the member data is set at the start of every other method in the class?
There are two general approaches I know of, and they differ a good bit. You'll have to pick the appropriate mechanism for the task - in standard class-based OO languages (e.g. Java/C++/C#/Python), these are the only two approaches I know of. (There may be other approaches in different paradigms that I am unfamiliar with.)
1. Check the state.
This is done in many classes already that have to track the state of the system/backing resource. Two common examples are (file) stream and database connections.
A "template" might look like:
void Logon(credentials) { ...; loggedOn = true; }
void DieUnlessLoggedIn() { if (!loggedOn) { throw ...; } }
void DoStuff() { DieUnlessLoggedIn(); ... }
While the above approach is pretty generic, some languages may support invariants (Eiffel), decorators (Python), annotations, AOP, or other assertion mechanisms.
This approach is useful for dynamic state in a mutable world: e.g. what happens after "Logout"? The state for DoStuff is invalid again until a re-logon (if it's allowed). However, this approach cannot be used for compile-time checks in general in mainstream OOP languages because the run-time state simply is not available at compile-time.
2. Use multiple types to represent state.
Create two separate types, such that type ServiceLogon (method Logon) creates ServiceAccess (method DoStuff). Thus DoStuff can only be called (on type ServiceAccess) after created from Logon (on ServiceLogon). This works well to enforce calling order semantics in static languages with member hiding - because programs won't compile if it's wrong.
login = new ServiceLogon(credentials)
access = login.Logon();
access.DoStuff(); // can't be called before obtained via Logon
Using the type to encode additional state can be overly complex, as it can fracture a class-based type system, but it is useful in "builder" and "repository" patterns and such; basically, ask whether the type warrants being split to maintain SRP, then consider this approach.
This approach cannot handle things like "logout" entirely on its own without incorporating state checking, because ServiceAccess would (in the clean sense) always represent the same state, that state being encoded in the type.
1. & 2. Use state checking and state/role-specific types.
A hybrid is totally acceptable, of course, and the above two approaches are not mutually exclusive. It may make sense to separate the roles making one type (and thus methods invoked upon it) dependent upon another method while still checking runtime state as appropriate. As per above, #1 is really good for runtime guards (which can be highly dynamic) while #2 can enforce certain rules at compile-time.
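As a rough illustration of such a hybrid (sketched in Java here, although the question is C++; every name below is invented for the example), the logon type hands out an access object, and that object still guards its own runtime state so that a later logout can invalidate it:
// ServiceLogon encodes "not yet logged in" in the type system (approach 2),
// while ServiceAccess still tracks runtime state (approach 1) so logout can be handled.
class ServiceLogon {
    ServiceAccess logon(String credentials) {
        // ... authenticate with the real service here ...
        return new ServiceAccess();
    }
}

class ServiceAccess {
    private boolean loggedOn = true; // runtime guard, needed because logout is dynamic

    void doStuff() {
        if (!loggedOn) {
            throw new IllegalStateException("already logged out");
        }
        // ... do the actual work ...
    }

    void logout() {
        loggedOn = false; // the type alone cannot express this transition
    }
}
Code that never obtained a ServiceAccess cannot even compile a call to doStuff(), while calling it after logout() is caught by the runtime check.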
What you can do is to create instances of ABC from a static factory method that returns the instance you can use. In pseudo-code:
abc = ABC.login(); //sets all the state
users = abc.query_users();
I am not sure this is the best way, but you can make login() private and call it as part of the constructor. This ensures that login() is called at object-creation time, and only after that can any other functions be called (unless you have static functions):
class ABC {
public:
    // Credentials stands in for whatever type carries the login data
    explicit ABC(const Credentials& credentials) {
        login(credentials);
    }
    void query_users();
    // other methods
private:
    void login(const Credentials& credentials);
    // member data
};
Calling the methods in order from the top down will already work. If you want to make sure that login is successful, then call the other methods from inside the login() method.
like:
public void login() {
    // do login code
    if (loginSucceeded()) { // hypothetical check that the login code worked
        // run other methods
    } else {
        login(); // re-run the login
    }
}
If you really want to follow good patterns, you might try making as many of your classes immutable as possible.
This would imply that your constructor sets up the entire state (does the whole login), and then the order of the method calls is totally irrelevant.
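A minimal sketch of that idea (in Java, with invented names; the login and query bodies are only placeholders):
import java.util.Collections;
import java.util.List;

// Immutable variant: all login work happens in the constructor, so any successfully
// constructed instance can have its methods called in any order.
final class Abc {
    private final String sessionToken; // set once in the constructor, never changed

    Abc(String user, String password) {
        this.sessionToken = authenticate(user, password); // the "login" happens here
    }

    List<String> queryUsers() {
        // every method can rely on sessionToken being valid
        return fetchUsers(sessionToken);
    }

    private static String authenticate(String user, String password) {
        // placeholder: call the real service and throw if the credentials are rejected
        return "token-for-" + user;
    }

    private static List<String> fetchUsers(String token) {
        // placeholder for the real query
        return Collections.emptyList();
    }
}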
A method can be tested either with mock objects or without. I prefer the solution without mocks when they are not necessary, because:
They make tests more difficult to understand.
After refactoring, it is a pain to fix JUnit tests if they have been implemented with mocks.
But I would like to ask your opinion. Here is the method under test:
public class OndemandBuilder {
....
private LinksBuilder linksBuilder;
....
public OndemandBuilder buildLink(String pid) {
broadcastOfBuilder = new LinksBuilder(pipsBeanFactory);
broadcastOfBuilder.type(XXX).pid(pid);
return this;
}
Test with mocks:
@Test
public void testbuildLink() throws Exception {
String type = "XXX";
String pid = "test_pid";
LinksBuilder linkBuilder = mock(LinksBuilder.class);
given(linkBuilder.type(type)).willReturn(linkBuilder);
//builderFactory replaces the new call in order to mock it
given(builderFactory.createLinksBuilder(pipsBeanFactory)).willReturn(linkBuilder);
OndemandBuilder returnedBuilder = builder.buildLink(pid);
assertEquals(builder, returnedBuilder); //they point to the same obj
verify(linkBuilder, times(1)).type(type);
verify(linkBuilder, times(1)).pid(pid);
verifyNoMoreInteractions(linkBuilder);
}
The returnedBuilder object within the method buildLink is 'this', which means builder and returnedBuilder can't be different: they point to the same object in memory. So the assertEquals is not really testing that the builder contains the expected field set by buildLink (which is the pid).
I have changed that test as below, without using mocks. The test below asserts what we actually want to check: that the builder contains a non-null LinkBuilder and that the LinkBuilder's pid is the expected one.
@Test
public void testbuildLink() throws Exception {
String pid = "test_pid";
OndemandBuilder returnedBuilder = builder.buildLink(pid);
assertNotNull(returnedBuilder.getLinkBuilder());
assertEquals(pid, returnedBuilder.getLinkBuilder().getPid());
}
I wouldn't use mocks unless they are necessary, but I wonder whether this makes sense or whether I have misunderstood the mock way of testing.
Mocking is a very powerful tool when writing unit tests. In a nutshell, where you have dependencies between classes and you want to test one class that depends on another, you can use mock objects to limit the scope of your tests so that you are only testing the code in the class that you want to test, and not the classes it depends on. There is no point in me explaining further; I would highly recommend you read Martin Fowler's brilliant piece Mocks Aren't Stubs for a full introduction to the topic.
In your example, the test without mocks is definitely cleaner, but you will notice that your test exercises code in both the OndemandBuilder and LinksBuilder classes. It may be that this is what you want to do, but the 'problem' here is that should the test fail, it could be due to issues in either of those two classes. In your case, because the code in OndemandBuilder.buildLink is minimal, I would say your approach is OK. However, if the logic in this function was more complex, then I would suggest that you would want to unit test this method in a way that didn't depend on the behavior of the LinksBuilder.type method. This is where mock objects can help you.
Let's say we do want to test OndemandBuilder.buildLink independent of the LinksBuilder implementation. To do this, we want to be able to replace the linksBuilder object in OndemandBuilder with a mock object - by doing this we can precisely control what is returned by calls to this mock object, breaking the dependency on the implementation of LinksBuilder. This is where the technique Dependency Injection can help you - the example below shows how we could modify OndemandBuilder to allow linksBuilder to be replaced with a mock object (by injecting the dependency in the constructor):
public class OndemandBuilder {
....
private LinksBuilder linksBuilder;
....
public OndemandBuilder(LinksBuilder linksBuilder) {
this.linksBuilder = linksBuilder;
}
public OndemandBuilder buildLink(String pid) {
// use the injected builder instead of creating a new one
linksBuilder.type(XXX).pid(pid);
return this;
}
}
Now, in your test, when you create your OndemandBuilder object, you can create a mock version of LinksBuilder, pass it into the constructor, and control how this behaves for the purpose of your test. By using mock objects and dependency injection, you can now properly unit test OndemandBuilder independent of the LinksBuilder implementation.
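For example, such a test might look like the sketch below. It assumes the usual static imports of org.mockito.Mockito.* and org.junit.Assert.*, and that the XXX constant used in buildLink holds the "XXX" string from your original test; the type(...) stub is there so the chained .pid(...) call has a mock to land on.
@Test
public void buildLink_uses_the_injected_linksBuilder() throws Exception {
    LinksBuilder mockLinksBuilder = mock(LinksBuilder.class);
    // let the chained .pid(...) call work: type(...) returns the builder itself
    when(mockLinksBuilder.type("XXX")).thenReturn(mockLinksBuilder);

    OndemandBuilder builder = new OndemandBuilder(mockLinksBuilder);
    OndemandBuilder returnedBuilder = builder.buildLink("test_pid");

    assertSame(builder, returnedBuilder); // the fluent API returns this
    verify(mockLinksBuilder).type("XXX");
    verify(mockLinksBuilder).pid("test_pid");
}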
Hope this helps.
It all depends on what you understand by UNIT testing.
When you are unit testing a class, you are not worried about the underlying system/interface: you assume it works correctly, hence you just mock it. And when I say you are ASSUMING, that means you unit test the underlying interface separately.
So when you write your JUnit tests without mocks, you are essentially doing a system or an integration test.
But to answer your question, both ways have their advantages and disadvantages, and ideally a system should have both.
I have been exploring the CQRS/DDD-principles and patterns for a while now and have started implementing a sample project where I have split my storage-model into a WriteModel and a ReadModel. The WriteModel will use a simple NoSQL-like database where aggregates are stored in a key-value style, with value being just a serialized version of the aggregate.
I am now looking at ProtoBuf-Net for serializing and deserializing my domain model aggregates in and out of storage. Other than this post I haven't found any guidance or tips for using ProtoBuf-Net in this area. The point is that the (ideal) requirements for serialization and deserialization of aggregates are that the domain model should have as little knowledge as possible about this infrastructural concern, which implies the following:
No attributes on the classes
No constructors, getters, setters or any other piece of code just for the sake of serialization.
Ability to use any (custom) type possible and have it serialized/deserialized.
Thus far I have implemented just the serialization of the first versions of my aggregates which works perfectly fine. I use the RuntimeTypeModel.Default-instance to configure the MetaModel at runtime and have UseConstructor = false everywhere, which enables me to completely separate the serialization mechanics from my domain-assembly. I have even implemented a custom post-deserialization mechanism that enables me to just-in-time initialize fields after ProtoBuf-Net has deserialized it into a valid instance. So suppose I have class AggregateA like so:
[Version(1)]
public sealed class AggregateA
{
private readonly int _x;
private readonly string _y;
...
}
Then in my serialization-library I have code something along the following lines:
var metaType = RuntimeTypeModel.Default.Add(typeof(AggregateA), false);
metaType.UseConstructor = false;
metaType.AddField(1, "_x");
metaType.AddField(2, "_y");
...
However, I realize that up to this point I have only implemented the basic scenario, and I am now starting to think about how to approach versioning of my model. I am particularly interested in larger refactoring scenarios, where type A has been split into types A1 and A2, for example:
[Version(2)]
public sealed class AggregateA1
{
private readonly int _x;
...
}
[Version(2)]
public sealed class AggregateA2
{
private readonly string _y;
...
}
Suppose I have a serialized bunch of instances of AggregateA, but now my domain model knows only AggregateA1 and AggregateA2, how would you handle this scenario with ProtoBuf-Net?
A second question deals with point 3: is ProtoBuf-Net capable of handling arbitrary types if you're willing to put in some extra configuration effort? I've read about exceptions raised when using the DateTimeOffset type, which makes me think not all types can be serialized by the framework out of the box. Can I serialize these types by registering them in the RuntimeTypeModel? Should I even want to go there? Or is it better to forget about serializing common .NET types other than the simple ones?
protobuf-net is intended to work with predictable known models. It is true that everything can be configured at runtime, but I have not given any thought to how to handle your A1/A2 scenario, precisely because that is not a supported scenario (in my defense, I can't see that working nicely with most serializers). Thinking off the top of my head, if you have the configuration/mapping data somewhere, then you could simply deserialize twice; i.e. as long as we still tell it that AggregateA1._x maps to 1 and AggregateA2._y maps to 2, you can do:
object a1 = model.Deserialize(source, null, typeof(AggregateA1));
source.Position = 0; // rewind
object a2 = model.Deserialize(source, null, typeof(AggregateA2));
However, more complex tweaks would require additional thought.
Re "arbitrary types"... define "arbitrary" ;p In particular, there is support for "surrogate" types which can be useful for some transformations - but without a very specific "problem statement" it is hard to answer completely.
Summary:
protobuf-net has an intended usage, which includes both serialization-aware (attributed, etc) and non-aware scenarios (runtime configuration, etc) - but it also works for a range of more bespoke scenarios (letting you drop to the raw reader/writer API if you want to). It does not and cannot guarantee to be a direct fit for every serialization scenario imaginable, and how well it behaves will depend on how far from that scenario you are.
OK, I know there has been a lot of confusion over the new AAA syntax in Rhino Mocks, but I have to be honest: from what I have seen so far, I like it. It reads better, and saves on some keystrokes.
Basically, I am testing a ListController which is going to basically be in charge of some lists of things :) I have created an interface which will eventually become the DAL, and this is of course being stubbed for now.
I had the following code:
(manager is the system under test, data is the stubbed data interface)
[Fact]
public void list_count_queries_data()
{
data.Expect(x => x.ListCount(1));
manager.ListCount();
data.VerifyAllExpectations();
}
The main aim of this test is just to ensure that the manager is actually querying the DAL. Note that the DAL is not actually even there, so there is no "real" value coming back.
However, this is failing since I need to change the expectation to have a return value, like:
data.Expect(x => x.ListCount(1)).Return(1);
This will then run fine and the test will pass; however, what is confusing me is that at this point in time the return value means nothing. I can change it to 100, 50, 42, whatever, and the test will always pass?
This makes me nervous, because a test should be explicit and should totally fail if the expected conditions are not met right?
If I change the test to (the "1" is the expected ID the count is linked to):
[Fact]
public void list_count_queries_data()
{
manager.ListCount();
data.AssertWasCalled(x => x.ListCount(1));
}
It all passes fine, and if I switch the test on its head to AssertWasNotCalled, it fails as expected. I also think it reads a lot better, is clearer about what is being tested and, most importantly, PASSES and FAILS as expected!
So, am I missing something in the first code example? What are your thoughts on making assertions on stubs? (There was some interesting discussion here; I personally liked this response.)
What is your test trying to achieve?
What behaviour or state are you verifying? Specifically, are you verifying that the collaborator (data) is having its ListCount method called (interaction based testing), or do you just want to make ListCount return a canned value to drive the class under test while verifying the result elsewhere (traditional state based testing)?
If you want to set an expectation, use a mock and an expectation:
Use MockRepository.CreateMock<IMyInterface>() and myMock.Expect(x => x.ListCount())
If you want to stub a method, use MockRepository.CreateStub<IMyInterface>() and myStub.Stub(x => x.ListCount()).
(aside: I know you can use stub.AssertWasCalled() to achieve much the same thing as mock.Expect and with arguably better reading syntax, but I'm just drilling into the difference between mocks & stubs).
Roy Osherove has a very nice explanation of mocks and stubs.
Please post more code!
We need a complete picture of how you're creating the stub (or mock) and how the result is used with respect to the class under test. Does ListCount have an input parameter? If so, what does it represent? Do you care if it was called with a certain value? Do you care if ListCount returns a certain value?
As Simon Laroche pointed out, if the Manager is not actually doing anything with the mocked/stubbed return value of ListCount, then the test won't pass or fail because of it. All the test would expect is that the mocked/stubbed method is called -- nothing more.
To better understand the problem, consider three pieces of information and you will soon figure this out:
What is being tested
In what situation?
What is the expected result?
Compare:
Interaction based testing with mocks. The call on the mock is the test.
[Test]
public void calling_ListCount_calls_ListCount_on_DAL()
{
// Arrange
var dalMock = MockRepository.Mock<IDAL>();
dalMock.Expect(x => x.ListCount()).Return(1);
var manager = new Manager(dalMock);
// Act
manager.ListCount();
// Assert -- Test is 100% interaction based
dalMock.VerifyAllExpectations();
}
State based testing with a stub. The stub drives the test, but is not a part of the expectation.
[Test]
public void calling_ListCount_returns_same_count_as_DAL()
{
// Arrange
var dalStub = MockRepository.Stub<IDAL>();
dalStub.Stub(x => x.ListCount()).Return(1);
var manager = new Manager(dalStub);
// Act
int listCount = manager.ListCount();
// Assert -- Test is 100% state based
Assert.That(listCount, Is.EqualTo(1),
"count should've been identical to the one returned by the dal!");
}
I personally favour state-based testing where at all possible, though interaction based testing is often required for APIs that are designed with Tell, Don't Ask in mind, as you won't have any exposed state to verify against!
API Confusion. Mocks ain't stubs. Or are they?
The distinction between a mock and a stub in Rhino Mocks is muddled. Traditionally, stubs aren't meant to have expectations - so if your test double didn't have its method called, this wouldn't directly cause the test to fail.
... However, the Rhino Mocks API is powerful but confusing, as it lets you set expectations on stubs, which, to me, goes against the accepted terminology. I don't think much of the terminology either, mind. It'd be better if the distinction were eliminated and the methods called on the test double set its role, in my opinion.
I think it has to do with what your manager.ListCount() is doing with the return value.
If it is not using it, then your DAL can return anything; it won't matter.
public class Manager
{
public Manager(DAL data)
{
this.data = data;
}
public void ListCount()
{
data.ListCount(1); //Not doing anything with return value
DoingSomeOtherStuff();
}
}
If your ListCount is doing something with the value, you should then put assertions on what it is doing. For example:
Assert.IsTrue(manager.SomeState == "someValue");
Have you tried using
data.AssertWasCalled(x => x.ListCount(Arg<int>.Is.Equal(EXPECTED_VALUE)));