There is quite a lot written about stubs vs. mocks, but I can't see the real difference between a fake and a stub. Can anyone shed some light on it?
I assume you are referring to the terminology as introduced by Meszaros. Martin Fowler also mentions them regularly; I think he explains the difference pretty well in his article on the subject.
Nevertheless, I'll try again in my own words :)
A Fake is closer to a real-world implementation than a stub. Stubs basically contain hard-coded responses to an expected request; they are commonly used in unit tests, but they are incapable of handling input other than what was pre-programmed.
Fakes have a more realistic implementation, for example some kind of state that is kept between calls. They can be useful for system tests as well as for unit testing purposes, but they aren't intended for production use because of some limitation or quality requirement.
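To put it in code (a minimal C# sketch; the IUserStore interface and both implementations are made up for illustration): the fake keeps real state and behaves much like the production implementation would, while the stub just hands back whatever was pre-programmed.

using System.Collections.Generic;

public interface IUserStore
{
    void Add(string userName);
    bool Exists(string userName);
}

// Fake: a working, stateful implementation, just not production-grade
// (no persistence, no concurrency handling, and so on).
public class InMemoryUserStore : IUserStore
{
    private readonly HashSet<string> users = new HashSet<string>();
    public void Add(string userName) { users.Add(userName); }
    public bool Exists(string userName) { return users.Contains(userName); }
}

// Stub: hard-coded, canned responses only; it cannot handle anything else.
public class UserStoreStub : IUserStore
{
    public void Add(string userName) { /* input is ignored */ }
    public bool Exists(string userName) { return true; }
}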
A fake has the same behavior as the thing that it replaces.
A stub has a "fixed" set of "canned" responses that are specific to your test(s).
A mock has a set of expectations about calls that are made. If these expectations are not met, the test fails.
All of these are similar in that they replace the production collaborators that the code under test uses.
To paraphrase Roy Osherove in his book The Art of Unit Testing (second edition):
A Fake is any object made to imitate another object. Fakes can be used either as stubs or mocks.
A Stub is a fake that is provided to the class you are testing to satisfy its requirements, but is otherwise ignored in the unit test.
A Mock is a fake that is provided to the class you are testing, and will be inspected as part of the unit test to verify functionality.
For example, the MyClass class you are testing may utilize both a local logger and a third-party web service as part of its operation. You would create a FakeLogger and a FakeWebService, but how they are used determines whether they are stubs or mocks.
The FakeLogger might be used as a stub: it is provided to MyClass and pretends to be a logger, but actually ignores all input and is otherwise just there to get MyClass to operate normally. You don't actually check FakeLogger in your unit tests, and as far as you're concerned it's there to make the compiler shut up.
The FakeWebService might be used as a mock: you provide it to MyClass, and in one of your unit tests you call MyClass.Foo(), which is supposed to call the third-party web service. To verify that this happened, you now check your FakeWebService to see if it recorded the call that it was supposed to receive.
Note that either of these roles could be reversed, depending on what you're testing in a particular unit test. If your unit test is testing the content of what is being logged, then you could make a FakeLogger that dutifully records everything it's told so you can interrogate it during the unit test; this is now a mock. In the same test you might not care whether the third-party web service is called; your FakeWebService is now a stub. How you fill in the functions of your fake thus depends on whether it needs to be used as a stub, a mock, or both.
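A minimal sketch of what those hand-rolled fakes might look like (C#, NUnit-style asserts; the ILogger and IWebService interfaces and MyClass's constructor are assumptions for illustration):

public class FakeLogger : ILogger
{
    // Used as a stub: swallows all input, exists only to satisfy MyClass.
    public void Log(string message) { }
}

public class FakeWebService : IWebService
{
    // Used as a mock: records what happened so the test can verify it.
    public string LastRequest { get; private set; }
    public void Send(string request) { LastRequest = request; }
}

[Test]
public void Foo_CallsTheWebService()
{
    var webService = new FakeWebService();
    var sut = new MyClass(new FakeLogger(), webService);

    sut.Foo();

    // Mock-style verification: the test fails if the call never happened.
    Assert.IsNotNull(webService.LastRequest);
}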
In summary (direct quote from the book):
A fake is a generic term that can be used to describe either a stub or a mock object because they both look like the real object. . . . The basic difference is that stubs can't fail tests. Mocks can.
All the rest is implementation details.
These might help
http://xunitpatterns.com/Mocks,%20Fakes,%20Stubs%20and%20Dummies.html
http://hamletdarcy.blogspot.com/2007/10/mocks-and-stubs-arent-spies.html
Related
Is it good practice to write an EXPECT(something) inside a test double's (e.g. spy or mock) method, to ensure the test double is used in a specific way for testing?
If not, what would be a preferred solution?
If you were to write a true Mock (as per the definition from xUnit Test Patterns), this is exactly what defines this kind of test double. It is set up with expectations about how it will be called and therefore also includes the assertions. That's also how mocking frameworks produce mock objects under the hood. See also the definition from xUnit Test Patterns:
How do we implement Behavior Verification for indirect outputs of the SUT?
How can we verify logic independently when it depends on indirect inputs from other software components?
Replace an object the system under test (SUT) depends on with a test-specific object that verifies it is being used correctly by the SUT.
Here, indirect outputs means that you don't verify that the method under test returns some value, but rather that something happens inside the method being tested that is behaviour relevant to callers of the method; for instance, that executing the method leads to an expected important action, like sending an email or sending a message somewhere. The mock is the doubled dependency that also verifies by itself that this really happened, i.e. that the method under test really called the method of the dependency with the expected parameter(s).
A spy, on the other hand, just records things of interest that happen to the doubled dependency. Interrogating the spy about what happened (and sometimes also how often), and then judging whether that was correct by asserting on the expected events, is the responsibility of the test itself. So a mock is always also a spy, with the addition of the assertion (expectation) logic. See also Uncle Bob's blog post The Little Mocker for a great explanation of the different types of test doubles.
TL;DR
Yes, the mock includes the expectations (assertions) itself; the spy just records what happened and lets the test itself ask the spy and assert on the expected events.
Mocking frameworks also implement mocks as explained above, since they follow the definitions from xUnit Test Patterns.
mock.Verify(p => p.Send(It.IsAny<string>()));
If you look at the Moq example above (C#), you see that the mock object itself is configured to perform the expected verification in the end; the framework makes sure that the mock's verification methods are executed. A hand-written mock would be set up similarly, except that you would call the verification method on the mock object yourself.
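To make the difference tangible, here is a rough hand-written sketch (C#, NUnit-style asserts; IMessageSender and all the names are invented): the spy only records and leaves the asserting to the test, while the mock carries its own expectation and offers a verification method that the test calls at the end.

using System.Collections.Generic;
using NUnit.Framework;

public interface IMessageSender
{
    void Send(string message);
}

// Spy: just records; the test interrogates it and does the asserting.
public class MessageSenderSpy : IMessageSender
{
    public readonly List<string> SentMessages = new List<string>();
    public void Send(string message) { SentMessages.Add(message); }
}

// Mock: contains the expectation itself and can fail the test on its own.
public class MessageSenderMock : IMessageSender
{
    private readonly string expected;
    private bool wasCalled;

    public MessageSenderMock(string expected) { this.expected = expected; }

    public void Send(string message)
    {
        wasCalled = true;
        Assert.AreEqual(expected, message); // the EXPECT lives inside the double
    }

    public void Verify()
    {
        Assert.IsTrue(wasCalled, "Send was never called");
    }
}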
Generally, you want to put all EXPECT statements inside individual tests to make your code readable.
If you want to enforce certain things on your test stub/spy, it is probably better to use exceptions or static asserts, because your test usually uses them as a black box. If it uses them in an unintended way, your code will either not compile, or it will throw and give you the full stack trace, which will also cause your test to fail (so you can catch the misuse).
For mocks, however, you have full control over the use and you can be very specific about how they are called and used inside each test. For example in Google test, using GMock matchers, you can say something like:
EXPECT_CALL(turtle, Forward(Ge(100)));
which means: expect Forward to be called on the mock object turtle with a parameter greater than or equal to 100. Any other value will cause the test to fail.
See this video for more examples on GMock matchers.
It is also very common to check general things in a test fixture (e.g. in SetUp or TearDown). For example, one sample from Google Test enforces that each test finishes within a certain amount of time, and the EXPECT statement is in TearDown rather than in each individual test.
As per the official website, NSubstitute is "A friendly substitute for .NET mocking libraries."
I did some searching and reading around this and found a good article for reference.
Here are a few lines from it.
For a stub:
Because this code only knows about abstractions (i.e. the interfaces), it's easy to run this code without using the production implementation of those interfaces. I could just create other implementations, just for the test, that implement those interfaces but don't call the database. These test implementations are known as 'stubs'.
And for a mock:
A mocking library allows you to simulate an interface or abstract type's implementation. You instantiate a 'mock' object of the interface, and tell that mock object what it should return if a method/property is called against that mock. You can also assert that a method/property was or wasn't called.
So if we want to correlate this / understand it better in those terms, what is NSubstitute: a mock library, a stub library, or both with a simpler way of doing things?
I think the definitive source on this is Gerard Meszaros' xUnit Test Patterns book. Martin Fowler has a good summary of Meszaros' types of test doubles. From these definitions, stubs are test doubles that return specific results, whereas mocks are set up with specific expectations on the calls that should be received before a test runs.
NSubstitute is designed for Arrange-Act-Assert (AAA) testing, which to my knowledge was first popularised by the wonderful Moq library. The terms mock and stub predate AAA, so I don't think they exactly fit in with these types of libraries. The terminology has blurred over time so that any test double tends to be called a "mock", even if we aren't setting explicit expectations.
If we are happy to be a bit loose with the definitions, in an NSubstitute context we can use "stubbing" to refer to responses we've set using Returns, and "mocking" to refer to asserting that an expected call was received using Received. This can be done on the same test double object, i.e. we don't create a mock OR a stub, we create a test double (or "substitute") that can kind of do both. NSubstitute deliberately blurs these lines. From the website:
Mock, stub, fake, spy, test double? Strict or loose? Nah, just substitute for the type you need!
NSubstitute is designed for Arrange-Act-Assert (AAA) testing, so you just need to arrange how it should work, then assert it received the calls you expected once you're done. Because you've got more important code to write than whether you need a mock or a stub.
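For example, here is a rough sketch of one substitute being used for both (the ICalculator interface is just for illustration, similar to the one in NSubstitute's documentation; Substitute.For, Returns and Received are the actual NSubstitute calls):

using NSubstitute;
using NUnit.Framework;

public interface ICalculator
{
    int Add(int a, int b);
}

[Test]
public void SubstituteActsAsStubAndMockInOneTest()
{
    var calculator = Substitute.For<ICalculator>();

    // "Stubbing": arrange a canned response.
    calculator.Add(1, 2).Returns(3);

    var result = calculator.Add(1, 2);

    // "Mocking": assert that the expected call was received.
    Assert.AreEqual(3, result);
    calculator.Received().Add(1, 2);
}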
In answer to your question, if we are being strict with definitions, NSubstitute is a library for creating test doubles, and supports stubs and an alternative to mocks (AAA rather than call expectations). In practice, everyone tends to be loose with the definitions and just call test doubles "mocks".
Most of what I've read about mocks and stubs (test doubles) involves some form of injection of the DOC, either through the SUT method itself, the constructor, or setter methods. And injection that breaks boundaries, like InjectMocks, is frowned upon as a regular test strategy. But what if you are building a class where you do not want to expose those DOCs? Is there a way to 'unit' test such a module without AOP? Is such a test not a real 'unit' test anymore? Is the resistance I'm feeling really a design smell, and should I expose those DOCs somehow?
For example, let's say I have the following class that I want to test (unit or otherwise):
public class RemoteRepository {
    Properties props = null;

    public RemoteRepository(Properties props) { this.props = props; }

    public Item export(String itemName) {
        JSch ssh = new JSch();
        ssh.setIdentity(props.get("keyfile"));
        ssh.connect();
        ssh.execute("export " + itemName + " " + props.get("exportFilename"));
        ...
    }
}
Here is a unit I'd like to write a unit test for, but I want to stub or mock out the JSch component. The objects I create inside the method, just to do what the method needs to accomplish, are not exposed outside the method at all, so I cannot inject a stub to replace them. I could change the export method signature to accept the stub, or add a constructor that does, but that changes my design just to suit a test.
Although the unit will connect to a real server to do the export in production, when just testing the unit I either want to stub the DOC out completely, or simulate it with a real DOC that is simple and controlled.
This latter approach is like using an in-memory DB instead of a real one: it acts and behaves like the eventual DB that will be used, but can be confined to just what is needed for the test (e.g. just the tables of interest, no heavy security, etc.). So I could set up some kind of test-double sshd in my test so that when the build runs the test, it has something to test against. This can be a lot of trouble to set up and maintain, however, and seems like overkill - sometimes trying to stub out a real DOC is harder than just using the real DOC somehow.
Am I stuck trying to set up a test framework that provides an sshd test double? Am I looking at this the wrong way? Do I just use AOP or mock-library methods that break the class-scope boundaries?
To restate: the basic problem is that a lot of the time I want to test a method that has complex DOCs (i.e. ones that interact with other systems: network, DB, etc.) and I don't want to change the design just to accommodate test-double DOC injection. How do you approach testing in such a scenario?
My recommendation, based on personal experience, is to write integration tests where DOCs (Depended On Components) are not mocked.
However, if for whatever reason the team insists on having unit tests instead, you would have to either use a suitable mocking tool (AOP tools are able to do this, but are not a good fit here), or change the design of the SUT and DOCs in order to use "weaker" mocking tools.
I've found that when I write unit tests, especially for methods that do not return a value, I mostly write tests in a white-box testing manner. I could use reflection to read private data to check whether it is in the proper state after method execution, etc.
This approach has a lot of limitations, the most important of which are:
You need to change your tests if you rework the method, even if the API stays the same.
It's wrong from an information-hiding (encapsulation) point of view: tests are good documentation for our code, so a person reading them could get unnecessary information about the implementation.
But if a method does not return a value and operates on private data, it becomes very hard (almost impossible) to test with a black-box testing paradigm.
So, any ideas for a good solution to this problem?
White box testing means that you necessarily have to pull some of the wiring out on the table to hook up your instruments. Stuff I've found helpful:
1) One monolithic sequence of code, which I inherited and didn't want to rewrite, I was able to instrument by adding a state class variable and setting the state as each step passed. Then I tested with different data and matched the expected state against the actual state.
2) Create mocks for any method calls of your method under test. Check to see that the mock was called as expected.
3) Make the needed properties protected instead of private, and create a subclass that is what you actually test. The subclass allows you to inspect the state (see the sketch below).
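As a rough illustration of the third approach (C#, NUnit-style; ReportGenerator and everything in it are invented for this sketch):

using System.Collections.Generic;
using NUnit.Framework;

public class ReportGenerator
{
    // Was private; widened to protected so a test subclass can see it.
    protected int processedRecords;

    public void Generate(IEnumerable<string> records)
    {
        foreach (var record in records)
        {
            // ... real processing would happen here ...
            processedRecords++;
        }
    }
}

// Test-only subclass that exposes the otherwise hidden state.
public class TestableReportGenerator : ReportGenerator
{
    public int ProcessedRecords { get { return processedRecords; } }
}

[Test]
public void Generate_ProcessesEveryRecord()
{
    var sut = new TestableReportGenerator();
    sut.Generate(new[] { "a", "b", "c" });
    Assert.AreEqual(3, sut.ProcessedRecords);
}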
I could use reflection to read private data to check is it in the proper state after method execution
This can really become a big problem for the maintenance of your test suite.
In .NET you could instead use the internal access modifier together with the InternalsVisibleToAttribute on your class library, to make your internal types visible to your unit test project.
The internal keyword is an access modifier for types and type members. Internal types or members are accessible only within files in the same assembly.
This will not resolve every testing difficulty, but it can help.
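For example (the assembly and type names are placeholders):

// In the production assembly (e.g. in AssemblyInfo.cs or any source file):
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyLibrary.Tests")]

// This internal type can now be used directly from the MyLibrary.Tests project.
internal class OrderNumberGenerator
{
    internal int Next() { return 42; } // simplified for illustration
}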
All,
I'm trying to grasp all the outside-in TDD and BDD stuff and would like you to help me get it.
Let's say I need to implement Config Parameters functionality working as follows:
there are parameters in a file and in a database
both groups have to be merged into one parameter set
parameters from the database should override those from files
Now I'd like to implement this with an outside-in approach, and I'm stuck right at the beginning. I hope you can help me get going.
My questions are:
What test should I start with? I just have something like this:
class ConfigurationAssemblerTest {
@Test
public void itShouldResultWithEmptyConfigurationWhenBothSourcesAreEmpty() {
ConfigurationAssembler assembler = new ConfigurationAssembler();
// what to put here ?
Configuration config = assembler.getConfiguration();
assertTrue(config.isEmpty());
}
}
I don't know yet what dependencies I'll end up with. I don't know how I'm going to write all that stuff yet, and so on.
What should I put in this test to make it valid? Should I mock something? If so how to define those dependencies?
If you could please show me the path to take with this (write up some plan, some test skeletons, what to do and in what order), it'd be super cool. I know it's a lot of writing, so maybe you can point me to some resources? All the resources about the outside-in approach I've found were about simple cases with no dependencies etc.
And two questions about the mocking approach:
if mocking is about interactions and their verification, does that mean there should be no state assertions in such tests (only mock verifications)?
if we replace something that doesn't exist yet with a mock just for the test, do we replace it later with the real version?
Thanks in advance.
Ok, that's indeed a lot of stuff. Let's start from the end:
Mocking is not only about 'interactions and their verification', this would be only one half of the story. In fact, you're using it in two different ways:
Checking if a certain call was made, possibly also checking the arguments of the call (this is the 'interactions and verification' part).
Using mocks to replace dependencies of the class under test (CUT), possibly setting up return values on the mock objects as required. Here, you use mock objects to isolate the CUT from the rest of the system (so that you can handle the CUT as an isolated 'unit', which sort of runs in a sandbox).
I'd call the first form dynamic or 'interaction-based' unit testing; it uses the mocking framework's call-verification methods. The second one is the more traditional, 'static' unit testing which asserts a fact.
You shouldn't ever need to 'replace something that doesn't exist yet' (apart from the fact that this is, logically seen, completely impossible). If you feel like you need to do this, it is a clear indication that you're trying to take the second step before the first.
Regarding your notion of an 'outside-in approach': to be honest, I've never heard of this before, so it doesn't seem to be a very prominent concept - and obviously not a very helpful one, because it seems to confuse things more than it clarifies them (at least for the moment).
Now onto your first question: (What test should I start with?):
First things first: you need some mechanism to read the configuration values from the file and the database, and this functionality should be encapsulated in separate helper classes (you need, among other things, a clean separation of concerns to do TDD effectively - this is usually totally underemphasized when introducing TDD/BDD). I'd suggest an interface (e.g. IConfigurationReader) which has two implementations (one for the file stuff and one for the database, e.g. FileConfigurationReader and DatabaseConfigurationReader). In TDD (not necessarily with a BDD approach) you would also have corresponding test fixtures. These fixtures would cover test cases like 'What happens if the underlying data store contains no/invalid/valid/other special values?'. This is what I'd advise you to start with.
Only then - with the reading mechanism in operation and your ConfigurationAssembler class having the necessary dependencies - would you start to write tests for, and implement, the ConfigurationAssembler class. Your test could then look like this (because I'm a C#/.NET guy, I don't know the appropriate Java tools, so I'm using pseudo-code here):
class ConfigurationAssemblerTest {
@Test
public void itShouldResultWithEmptyConfigurationWhenBothSourcesAreEmpty() {
IConfigurationReader fileConfigMock = new [Mock of FileConfigurationReader];
fileConfigMock.[WhenAskedForConfigValues].[ReturnEmpty];
IConfigurationReader dbConfigMock = new [Mock of DatabaseConfigurationReader];
dbConfigMock.[WhenAskedForConfigValues].[ReturnEmpty];
ConfigurationAssembler assembler = new ConfigurationAssembler(fileConfigMock, dbConfigMock);
Configuration config = assembler.getConfiguration();
assertTrue(config.isEmpty());
}
}
Two things are important here:
The two reader objects are injected into the ConfigurationAssembler from outside via its constructor - this technique is called Dependency Injection. It is a very helpful and important architectural principle, which generally leads to a better and cleaner architecture (and helps greatly in unit testing, especially when using mock objects).
The test now asserts exactly what it states: the ConfigurationAssembler returns ('assembles') an empty config when the underlying reading mechanisms on their part return an empty result set. And because we're using mock objects to provide the config values, the test runs in complete isolation. We can be sure that we're testing only the correct functioning of the ConfigurationAssembler class (namely, its handling of empty values) and nothing else.
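To make the pseudo-code a little more tangible, here is roughly what the reader abstraction and the constructor injection could look like (shown in C# since that's where I'm at home; all names follow the example above, the Configuration type is simplified to a plain dictionary, and the merge logic is only a sketch of your stated requirement that database values override file values):

using System.Collections.Generic;

public interface IConfigurationReader
{
    IDictionary<string, string> ReadValues();
}

public class ConfigurationAssembler
{
    private readonly IConfigurationReader fileReader;
    private readonly IConfigurationReader dbReader;

    // Dependency Injection: the readers are handed in from outside.
    public ConfigurationAssembler(IConfigurationReader fileReader,
                                  IConfigurationReader dbReader)
    {
        this.fileReader = fileReader;
        this.dbReader = dbReader;
    }

    public IDictionary<string, string> GetConfiguration()
    {
        var merged = new Dictionary<string, string>(fileReader.ReadValues());
        foreach (var entry in dbReader.ReadValues())
        {
            merged[entry.Key] = entry.Value; // database values override file values
        }
        return merged;
    }
}

The empty-sources test above would then simply hand in two stubbed readers whose ReadValues() return empty dictionaries, and assert that the assembled configuration is empty.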
Oh, and maybe it's easier for you to start with TDD instead of BDD, because BDD is only a subset of TDD and builds on top of the concepts of TDD. So you can only do (and understand) BDD effectively when you know TDD.
HTH!