Suppose I am using TDD to create some class A. After I am done and have a "green" bar, I decide that I want to extract some class B with only static methods from class A using a refactoring tool. I now have class A and class B both fully unit tested, but only through the test class for class A. Should I also now create a test class specific to the functionality of class B, even though that would mean duplicating tests?
As always, it depends on your context. What do you care about?
Overall behaviour
If you're building a system for internal use, or even a public (web) service, where the software you're shipping is the entire system, you don't have to care too much about a single class. If you're building a system, then test the system.
As long as it's covered by tests, you know that your system behaves correctly. However, you may run into a situation where, after some months, you realize that you no longer need the original class A, so you delete it and its corresponding unit tests. This may cause the test coverage of B to drop, so it may be a good idea to keep an eye on code coverage trends.
Unit behaviour
If you're building a (class) library, or framework, you're shipping each public class as part of the product. If you have multiple users of your library, you'll need to start thinking about how to avoid breaking changes.
One of the most effective ways to avoid breaking changes is to cover each class by unit tests. As long as you don't change the tests, you know that breaking changes are unlikely if all tests are green. However, that requires that you test all your public classes and members.
Thus, if you extract B to a public class, it's now a class that other consumers may depend on, and it would be a breaking change if you change it. Therefore, you should cover it with new tests. If you're building a unit, then test the unit.
From what you have described, the answer is to create another test. If either class is changed by you (or by someone else who is not familiar with the "shared test"), the other class will no longer be tested.
If this seems wasteful, put the common test code in a third class...
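To make the duplication concrete, here is a small sketch (in Java; the class and method names are invented for illustration) of what a direct test for the extracted static class might look like, alongside the original test that only exercises it indirectly:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Extracted from class A by the refactoring: a static-only helper.
final class B {
    static String normalize(String raw) {
        return raw.trim().toLowerCase();
    }
}

// Class A still uses B internally, as it did before the extraction.
class A {
    String greet(String name) {
        return "Hello, " + B.normalize(name);
    }
}

// A's existing tests cover B only indirectly...
class ATest {
    @Test
    public void greetsWithNormalizedName() {
        assertEquals("Hello, bob", new A().greet("  BOB "));
    }
}

// ...so B gets its own focused test, which survives even if A is later deleted.
class BTest {
    @Test
    public void trimsAndLowercases() {
        assertEquals("bob", B.normalize("  BOB "));
    }
}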
I have two questions:
1.) On the tearDown method for mocks.
People say that it's a common practice to set all the mocks to null in the tearDown method, like below:
public void tearDown() {
    mockOne = null;
    mockTwo = null;
}
I want to know whether this really makes sense, or whether we should be doing something more useful in tearDown.
Won't the JVM take care of the mocks on its own?
2.) Also, should all the variables used in a JUnit test be defined at class level or at method level? I know creating them at class level promotes code reuse, but won't it unnecessarily occupy memory all the time by maintaining that state?
Thanks.
To your first point, I would say that I personally seldom, if ever, have a use for a tearDown() method, and certainly not just to drop a reference to a mock object. I am not inclined to believe "people" on that one.
To your second point, just declare your variables in the smallest scope you possibly can; what is paramount is that your tests both capture the required behavior and are easily comprehended.
As per usual, keep it simple and strive for very minimal setup (Arrange), invocation of the test subject (Act), and verification (Assert) in each test. If you have trouble with this, it's a sign that your test subject (or class under test) may be doing too much.
1.
I don't think your tearDown does anything particularly useful. Your test cases will usually be running on a different JVM from the production servers, so there's no need to be so cautious about GC (if that's what setting the mocks to null is intended to help with). The mock objects and stubs that your test classes create become eligible for GC as soon as they go out of scope, which is, at worst, after all the test cases have executed and the JVM exits.
Sure, it might help the garbage collector reclaim those resources a little sooner, but unless you run a really large suite of unit tests, this is unlikely ever to be a problem.
If, on the other hand, the idea is to make sure that changes made to the mock objects by one test method do not affect a subsequent test method, then it makes some sense.
Don't overthink these fixtures. Have something to set up for the test case(s)? Do it in the @Before-annotated setUp method. Have something to clean up, such as database connections or other system resources? Do it in the @After-annotated tearDown method.
Normally you won't even write a tearDown method unless you're faced with one of the requirements above, unlike the setUp method, which will likely prove far more useful to you.
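As a rough illustration of that split (assuming JUnit 4 and Mockito; OrderRepository and OrderService are invented names), the fixture below creates fresh mocks in setUp and reserves tearDown for a real resource that genuinely needs cleanup:

import static org.mockito.Mockito.mock;

import java.nio.file.Files;
import java.nio.file.Path;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class OrderServiceTest {

    private OrderRepository mockRepository; // hypothetical collaborator interface
    private Path scratchFile;               // a real resource that needs cleaning up

    @Before
    public void setUp() throws Exception {
        // Fresh mocks for every test; there is no need to null them out afterwards.
        mockRepository = mock(OrderRepository.class);
        scratchFile = Files.createTempFile("order-test", ".tmp");
    }

    @After
    public void tearDown() throws Exception {
        // Worth doing: release real resources. Not worth doing: mockRepository = null.
        Files.deleteIfExists(scratchFile);
    }

    @Test
    public void savesOrderToRepository() {
        // ... exercise new OrderService(mockRepository) here ...
    }
}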
2.
My first point above partly answers your questions related to the memory footprint. There's no need to be so cautious with the unit tests.
You'd typically not write unit tests that are long-running processes.
Besides, creating objects close to where you use them and limiting their scope and exposure to the required minimum is generally good practice - so if you are not planning to reuse the objects/mocks, feel free to create them in your test methods (thereby letting them go out of scope as soon as the test method completes).
I've had a feeling these last couple of days that dependency injection should really be called the "I can't make up my mind" pattern. I know this might sound silly, but really it's about the reasoning behind why I should use Dependency Injection (DI). It is often said that I should use DI to achieve a higher level of loose coupling, and I get that part. But really... how often do I change my database once my choice has fallen on MS SQL or MySQL? Very rarely, right?
Does anyone have some very compelling reasons why DI is the way to go?
Two words: unit testing.
One of the most compelling reasons for DI is to allow easier unit testing without having to hit a database and worry about setting up 'test' data.
DI is very useful for decoupling your system. If all you're using it for is to decouple the database implementation from the rest of your application, then either your application is pretty simple, or you need to do a lot more analysis of the problem domain to discover which components within it are the most likely to change and which components within your system have a large amount of coupling.
DI is most useful when you're aiming for code reuse, versatility and robustness to changes in your problem domain.
How relevant it is to your project depends upon the expected lifespan of your code. Depending on the type of work you're doing zero reuse from one project to the next for the majority of code you're writing might actually be quite acceptable.
An example of the use of DI is creating an application that can be deployed for several clients, using DI to inject per-client customisations - which could also be described as the GoF Strategy pattern. Many of the GoF patterns can be facilitated with the use of a DI framework.
DI is more relevant to Enterprise application development in which you have a large amount of code, complicated business requirements and an expectation (or hope) that the system will be maintained for many years or decades.
Even if you don't change the structure of your program during development, you will find that you need to access several subsystems from different parts of your program. With DI, each of your classes just asks for the services it needs, and you're freed from having to provide all the wiring manually.
This really helps me on concentrating on the interaction of things in the software design and not on "who needs to carry what around because someone else needs it later".
Additionally, it saves a LOT of work writing boilerplate code. Do I need a singleton? I just configure a class to be one. Can I still test with such a "singleton"? Yes, I can (since I merely CONFIGURED it to exist only once, a test can still instantiate an alternative implementation).
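As a minimal sketch of that wiring idea (the class names here are invented; a container such as Spring or Guice would do the same thing declaratively), classes just declare what they need in their constructors, and a single composition root provides it:

// Hypothetical service that the rest of the code asks for.
interface AuditLog {
    void record(String message);
}

class ConsoleAuditLog implements AuditLog {
    @Override
    public void record(String message) {
        System.out.println("audit: " + message); // stand-in for real logging
    }
}

// Consumers only state their needs; they never construct or look up the service.
class PaymentService {
    private final AuditLog auditLog;

    PaymentService(AuditLog auditLog) {
        this.auditLog = auditLog;
    }

    void pay(String account, long cents) {
        auditLog.record("paid " + cents + " cents to " + account);
    }
}

// The composition root: the one place that knows the concrete wiring.
// A test can build PaymentService with its own AuditLog instead.
public class Main {
    public static void main(String[] args) {
        AuditLog sharedLog = new ConsoleAuditLog();        // a "singleton" by configuration
        PaymentService payments = new PaymentService(sharedLog);
        payments.pay("ACME", 1250);
    }
}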
By the way, before I was using DI I didn't really understand its worth, but trying it was a real eye-opener for me: my designs are a lot more object-oriented than they were before.
Also, with my current application I DON'T unit-test (bad, bad me), but I STILL couldn't live without DI anymore. It is so much easier to move things around and keep classes small and simple.
While I semi-agree with you on the DB example, one of the big things I have found DI helpful for is testing the layer I build on top of the database.
Here's an example...
You have your database.
You have your code that accesses the database and returns objects
You have business domain objects that take the previous item's objects and do some logic with them.
If you merge the data access with your business domain logic, your domain objects can become difficult to test. DI allows you to inject your own data access objects into your domain so that you don't depend on the database for testing, or even for demonstrations (I once ran a demo where some data was pulled in from XML instead of a database).
Abstracting 3rd party components and frameworks like this would also help you.
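A minimal sketch of that separation (the interface and class names below are invented): the domain logic depends only on an abstraction, so a test or a demo can hand it an in-memory implementation instead of a real database.

import java.util.List;

// Abstraction over "the code that accesses the database and returns objects".
interface CustomerRepository {
    List<String> findOverdueAccounts();
}

// Business domain logic that takes the repository's objects and applies rules.
class DunningService {
    private final CustomerRepository repository;

    DunningService(CustomerRepository repository) {
        this.repository = repository;
    }

    boolean anyCustomerNeedsReminder() {
        return !repository.findOverdueAccounts().isEmpty();
    }
}

// In a test (or an XML-backed demo), no database is required:
class DunningServiceTest {
    void remindsWhenAccountsAreOverdue() {
        CustomerRepository fake = () -> List.of("ACME Corp"); // in-memory stand-in
        DunningService service = new DunningService(fake);
        assert service.anyCustomerNeedsReminder();
    }
}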
Aside from the testing example, there are a few places where DI can be used alongside a Design by Contract approach. You may find it appropriate to create a processing engine of sorts that calls methods on the objects you're injecting into it. While it may not truly "process" them, it runs methods that have a different implementation in each object you provide.
I saw an example of this where every business domain object had a "Save" function that was called after it was injected into the processor. The processor modified the component with configuration information, and Save handled persisting the object's primary state. In essence, DI complemented the polymorphic method implementations of the objects that conformed to the interface.
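A rough sketch of that shape (all names below are invented): the processor only knows the contract, and each injected object supplies its own Save behaviour.

import java.util.List;
import java.util.Properties;

// The contract every injected domain object conforms to.
interface Persistable {
    void configure(Properties settings);
    void save();
}

// The processing engine: it never knows the concrete types it is handed.
class Processor {
    private final List<Persistable> components;
    private final Properties settings;

    Processor(List<Persistable> components, Properties settings) {
        this.components = components;
        this.settings = settings;
    }

    void run() {
        for (Persistable component : components) {
            component.configure(settings); // push configuration into the component
            component.save();              // polymorphic: each object persists its own state
        }
    }
}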
Dependency Injection gives you the ability to test specific units of code in isolation.
Say I have a class Foo for example that takes an instance of a class Bar in its constructor. One of the methods on Foo might check that a Property value of Bar is one which allows some other processing of Bar to take place.
public class Foo
{
    private Bar _bar;

    public Foo(Bar bar)
    {
        _bar = bar;
    }

    public bool IsPropertyOfBarValid()
    {
        return _bar.SomeProperty == PropertyEnum.ValidProperty;
    }
}
Now let's say that Bar is instantiated and its properties are set to data from some datasource in its constructor. How might I go about testing the IsPropertyOfBarValid() method of Foo (ignoring the fact that this is an incredibly simple example)? Well, Foo is dependent on the instance of Bar passed into the constructor, which in turn is dependent on the data from the datasource that its properties are set to. What we would like is some way of isolating Foo from the resources it depends upon so that we can test it in isolation.
This is where Dependency Injection comes in. What we want is some way of faking an instance of Bar passed to Foo, such that we can control the properties set on this fake Bar and achieve what we set out to do: test that the implementation of IsPropertyOfBarValid() does what we expect, i.e. returns true when Bar.SomeProperty == PropertyEnum.ValidProperty and false for any other value.
There are two types of fake object: mocks and stubs. Stubs provide input for the application under test so that the test can be performed on something else. Mocks, on the other hand, provide input to the test to decide on pass/fail.
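To illustrate the distinction, here is a minimal sketch (written in Java with Mockito purely for illustration; the same idea applies with Moq or Rhino Mocks in .NET, and the InvoiceRepository/MailSender/ReminderService names are made up). The stubbed repository merely feeds data in, while the mocked mail sender is what the test asserts against.

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class ReminderServiceTest {

    interface InvoiceRepository { int openInvoiceCount(String customer); } // stub role
    interface MailSender { void send(String to, String body); }            // mock role

    static class ReminderService {
        private final InvoiceRepository repository;
        private final MailSender mail;

        ReminderService(InvoiceRepository repository, MailSender mail) {
            this.repository = repository;
            this.mail = mail;
        }

        void remind(String customer) {
            if (repository.openInvoiceCount(customer) > 0) {
                mail.send(customer, "You have open invoices");
            }
        }
    }

    @Test
    public void sendsReminderWhenInvoicesAreOpen() {
        InvoiceRepository stubRepository = mock(InvoiceRepository.class);
        MailSender mockMail = mock(MailSender.class);
        when(stubRepository.openInvoiceCount("alice")).thenReturn(2); // stub: supplies input

        new ReminderService(stubRepository, mockMail).remind("alice");

        verify(mockMail).send("alice", "You have open invoices");     // mock: decides pass/fail
    }
}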
Martin Fowler has a great article on the difference between Mocks and Stubs
I think that DI is worth using when you have many services/components whose implementations must be selected at runtime based on external configuration. (Note that such configuration can take the form of an XML file or a combination of code annotations and separate classes; choose what is more convenient.)
Otherwise, I would simply use a ServiceLocator, which is much "lighter" and easier to understand than a whole DI framework.
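For contrast, a bare-bones service locator might look like the sketch below (a toy written for illustration, not any particular library): callers pull their dependencies out of a registry instead of having them pushed in.

import java.util.HashMap;
import java.util.Map;

// A toy service locator: a global registry that code pulls dependencies from.
final class ServiceLocator {
    private static final Map<Class<?>, Object> services = new HashMap<>();

    static <T> void register(Class<T> type, T implementation) {
        services.put(type, implementation);
    }

    static <T> T resolve(Class<T> type) {
        return type.cast(services.get(type));
    }
}

// Production code registers real implementations at startup, e.g.
//   ServiceLocator.register(java.time.Clock.class, java.time.Clock.systemUTC());
// and a test can register fakes before exercising the code under test.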
For unit testing, I prefer to use a mocking API that can mock objects on demand, instead of requiring them to be "injected" into the tested unit from a test. For Java, one such library is my own, JMockit.
Aside from loose coupling, testing of any type is achieved with much greater ease thanks to DI. You can replace an existing dependency of a class under test with a mock, a dummy, or even another version. If a class is created with its dependencies directly instantiated, it can often be difficult or even impossible to "stub" them out when required.
I only just understood this tonight.
For me, dependency injection is a method for instantiating objects which require a lot of parameters in order to work in a specific context.
When should you use dependency injection?
You can use dependency injection if you instantiate an object in a static way. For example, say you use a class which can convert objects into an XML file or a JSON file, and you only need the XML output. Without dependency injection, you would have to instantiate the object and configure a lot of things yourself.
When should you not use dependency injection?
If an object is instantiated with request parameters (after a form submission), you should not use dependency injection, because the object is not instantiated in a static way.
Suppose I was writing a clone of the game 2048 (http://gabrielecirulli.github.io/2048/) and I want to write a test to verify that "the right thing" happens when the game is "won". Suppose that my game state is encapsulated in a class and that the state itself is private.
I suppose that I could write code to play the game, evaluate through the public interface when I'm about to win and then make the winning move; however, this seems like overkill. I would instead like to set a game state, make the winning move and verify that the object behaves as expected.
What is the recommended way of designing such a test? My current thought is that the test should either be a public member function of the class or that the test infrastructure should be friended by the class. Both of these seem distasteful.
Edit: In response to the first question: I'm assuming in this example that I don't have a method to set the game state and that there's no reason to write one; therefore it would be adding additional functionality just to write a test... one that then requires another member function to test: a get-game-state function. So then I'm writing at least two more public methods and tests just to write this one test. Worse, these are methods that essentially break encapsulation, so that if the internal details change I have to change these two methods and their tests for no other reason than to have a test. This seems more distasteful than friending a test function.
First, remember that Test-Driven Development is a design-oriented methodology. The primary goal of the tests is to influence the design of the SUT and its collaborators; everything else is just along for the ride.
Second, TDD emphasizes small steps. In his book, Test-Driven Development: By Example, Kent Beck says:
If you have to spend a hundred lines creating the objects for one single assertion, then something is wrong. Your objects are too big and need to be split. (p. 194)
This means you should listen to your intuition about writing the code necessary to win the game being overkill.
You also said:
I would instead like to set a game state, make the winning move and verify that the object behaves as expected.
Which is exactly what you should do.
Why? Because you're testing end-game scenarios. Most/all of the details that led to the end-game are irrelevant - you just want to make sure the program does "the right thing... when the game is won." As such, these are the only details that are relevant to your tests.
So what are these details that are relevant to your tests? To figure them out, it helps to discuss things with a colleague.
Q: How does the test configure the system to indicate the game has been won - without actually playing the game?
A: Tell something that the game has been won.
Q: What object would the test tell that the game has been won?
A: I don't know. But to keep things simple, let's say it's some object serving the role of "Referee".
By asking these questions, we've teased out some details of the design. Specifically, we've identified a role which can be represented in OOP by an interface.
What might this "Referee" role look like? Perhaps:
(pseudocode)
begin interface Referee
method GameHasBeenWon returns boolean
end interface
The presence of an interface establishes a seam in the design, which allows tests to use test-doubles in place of production objects. Not only that, it allows the implementation of this functionality to change (e.g., a rule change affecting how a game is determined to be "won") without having to modify any of the surrounding code.
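As a minimal sketch of how that seam might be exercised (Java is used here for illustration; the Game and Referee names follow the discussion above, but the API is invented), the test installs a stub Referee that simply reports the game as won, so no moves need to be played:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class GameEndTest {

    interface Referee {
        boolean gameHasBeenWon();
    }

    static class Game {
        private final Referee referee;
        private String status = "playing";

        Game(Referee referee) {
            this.referee = referee;
        }

        void finishMove() {
            if (referee.gameHasBeenWon()) {
                status = "won"; // "the right thing" for this sketch is a status change
            }
        }

        String status() {
            return status;
        }
    }

    @Test
    public void reportsWinWhenRefereeSaysSo() {
        Game game = new Game(() -> true); // stub referee: the game is already won
        game.finishMove();
        assertEquals("won", game.status());
    }
}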
This ties in directly with something else you mentioned:
I'm assuming in this example that I don't have a method to set the game state and that there's no reason to write one; therefore it would be adding additional functionality just to write a test...
A test is a consumer of your code. If it is difficult for a test to interact with your code, then it will be even more difficult for production code (having many more constraints) to interact with it. This is what is meant by "Listening to your tests".
Note that there are a lot of possible designs that can fall out of TDD. Every developer is going to have their own preferences which will influence the look and feel of the architecture. The main takeaway is that TDD helps break your program up into many small pieces, which is one of the core tenets of object oriented design.
I often need to set up the same test structures for different test cases. Therefore I created a TestService class that has several public methods for the test classes.
I guess that this is not the best place to put them, as TestService will also be deployed even though it is not needed in production.
Where would you put such common test methods? Is there a "best practice" for this?
The way I do it is similar to the above answers. If I have a MyDomain class, I put a MyDomainTestHelper class in a test package. The helper has static methods on it that return the domain objects in question. So I might have a
static MyDomain createTestMyDomain() { ... }
that creates an instance with sensible defaults, and
static MyDomain createTestMyDomain(String something)
so I can specify something, and so on.
You should not put them in a "service" as you mentioned. But any way you can be DRY is a good way.
I tend to try to keep to the DRY principle as much as I can when writing tests. This shows in mainly two ways:
I usually implement a TestData class that holds all the data I feed into my tests in static properties, to make it easier to keep track of what input I give the tests, and to make sure I never give different input when I intend to give the same. This resides in a test project, or in a class library referenced only by the test projects.
If I find I need the same setup or teardown routines in many tests, I often make use of inheritance: I create a base class that the test fixtures can inherit, where all the common actions go (sketched below).
Footnote: I do almost all of my development in C#, but I believe these practices can be applied in any language with a testing framework.
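A rough sketch of both ideas (shown in Java, since, as the footnote says, the practice carries across languages; all names, including RegistrationService, are invented):

import static org.junit.Assert.assertTrue;

import org.junit.Before;
import org.junit.Test;

// Shared, intention-revealing test data kept in one place.
final class TestData {
    static final String VALID_EMAIL = "user@example.com";
    static final String INVALID_EMAIL = "not-an-email";
}

// Common setup lives in a base class that concrete fixtures inherit.
abstract class RegistrationTestBase {
    protected RegistrationService service; // hypothetical class under test

    @Before
    public void createService() {
        service = new RegistrationService();
    }
}

public class RegistrationValidationTest extends RegistrationTestBase {
    @Test
    public void acceptsValidEmail() {
        assertTrue(service.isValidEmail(TestData.VALID_EMAIL));
    }
}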
Why don't you put it next to your test classes? The usual way is to put all test-related code into a directory separate from the source directory (e.g. "test") - not only the test cases themselves, but also supporting classes such as utility methods, mock implementations, etc.
The test cases should be put into the same package as the class tested by them (but in a different physical directory, as explained above). A test utility class which is used by several test cases should be put into some common utility package.
Many people use Mock Objects when they are writing unit tests. What is a Mock Object? Why would I ever need one? Do I need a Mock Object Framework?
Object Mocking is used to keep dependencies out of your unit test.
Sometimes you'll have a test like "SelectPerson" which will select a person from the database and return a Person object.
To do this, you would normally need a dependency on the database. With object mocking, however, you can simulate the interaction with the database using a mock framework: it can return a dataset that looks like one returned from the database, and you can then test that your code correctly translates that dataset into a Person object, rather than testing that a connection to the database exists.
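A small sketch of that idea (Java and Mockito are used here just for illustration; PersonRow, PersonDao and PersonService are invented names): the mocked DAO stands in for the database, and the test checks only the translation logic.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class SelectPersonTest {

    // Raw row as it would come back from the data layer.
    record PersonRow(String firstName, String lastName) {}

    // Domain object the service is supposed to build.
    record Person(String fullName) {}

    interface PersonDao {
        PersonRow selectPersonRow(int id);
    }

    static class PersonService {
        private final PersonDao dao;

        PersonService(PersonDao dao) {
            this.dao = dao;
        }

        Person selectPerson(int id) {
            PersonRow row = dao.selectPersonRow(id);
            return new Person(row.firstName() + " " + row.lastName());
        }
    }

    @Test
    public void translatesRowIntoPerson() {
        PersonDao dao = mock(PersonDao.class);
        when(dao.selectPersonRow(42)).thenReturn(new PersonRow("Ada", "Lovelace"));

        Person person = new PersonService(dao).selectPerson(42);

        assertEquals("Ada Lovelace", person.fullName());
    }
}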
Several people have already answered the 'what', but here are a couple of quick 'whys' that I can think of:
Performance
Because unit tests should be fast, testing a component that interacts with a network, a database, or another time-intensive resource does not need to pay the penalty if it's done using mock objects. The savings add up quickly.
Collaboration
If you are writing a nicely encapsulated piece of code that needs to interact with someone else's code (that hasn't been written yet, or is being developed in parallel - a common scenario), you can exercise your code with mock objects once an interface has been agreed upon. Otherwise your code may not begin to be tested until the other component is finished.
A mock object lets you test against just what you are writing, and abstract away details such as accessing a resource (disk, a network service, etc.). The mock then lets you pretend to be that external resource, or class, or whatever.
You don't really need a mock object framework: just extend the class of the functionality you don't want to worry about in your test, and make sure the class you are testing can use your mock instead of the real thing (pass it in via a constructor, a setter, or something similar).
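A hand-rolled example of that approach (all names invented here): subclass the collaborator you don't want to exercise, and hand it to the class under test through its constructor.

// The collaborator we don't want to exercise for real in a test.
class SmtpMailer {
    void send(String to, String body) {
        // the real implementation would talk to an SMTP server
        throw new UnsupportedOperationException("not available in tests");
    }
}

// Hand-written mock: extends the real class and just records the call.
class FakeMailer extends SmtpMailer {
    String lastRecipient;

    @Override
    void send(String to, String body) {
        lastRecipient = to; // no network access, just remember what happened
    }
}

class WelcomeService {
    private final SmtpMailer mailer;

    WelcomeService(SmtpMailer mailer) { // the seam: a mock can be passed in here
        this.mailer = mailer;
    }

    void welcome(String email) {
        mailer.send(email, "Welcome aboard!");
    }
}

// In a test:
//   FakeMailer mailer = new FakeMailer();
//   new WelcomeService(mailer).welcome("new.user@example.com");
//   assert "new.user@example.com".equals(mailer.lastRecipient);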
Practice will show when mocks are helpful and when they aren't.
EDIT: Mocking resources is especially important so you don't have to rely on them existing during the test, and you can mock the details of how they exist and what they respond with (such as simulating a FileNotFoundException, a web service that is missing, or various possible return values of a web service)... all without the slow access times involved (mocking will prove MUCH faster than accessing such resources in the test).
Do I need a Mock Object Framework?
Certainly not. Sometimes, writing mocks by hand can be quite tedious. But for simple things, it's not bad at all. Applying the principle of Last Responsible Moment to mocking frameworks, you should only switch from hand-written mocks to a framework when you've proven to yourself that hand-writing mocks is more trouble than it's worth.
If you're just getting started with mocking, jumping straight into a framework is going to at least double your learning curve (can you double a curve?). Mocking frameworks will make much more sense once you've spent a few projects writing mocks by hand.
Object mocking is a way to create a "virtual" or mocked object from an interface, abstract class, or class with virtual methods. It allows you to wrap one of these in your own definition for testing purposes. It is useful for standing in for an object that the code block you are testing relies on.
A popular one that I like to use is called Moq, but there are many others like RhinoMock and numerous ones that I don't know about.
It allows you to test how one part of your project interacts with the rest, without building the entire thing and potentially missing a vital part.
EDIT: Great example from wikipedia: It allows you to test out code beforehand, like a car designer uses a crash test dummy to test the behavior of a car during an accident.
Another use is it will let you test against other parts of your system that aren't built yet. For example, if your class depends on some other class that is part of a feature that someone else is working on, you can just ask for a mostly complete interface, program to the interface and just mock the details as you expect them to work. Then, make sure your assumptions about the interface were correct (either while you are developing, or once the feature is complete).
Whether or not a mocking framework is useful depends in part on the language of the code you're writing. With a static language, you need to put in extra effort to trick the compiler into accepting your mock objects as a replacement for the real thing. In a dynamically-typed language such as Python, Ruby or JavaScript, you can generally just attach methods onto an arbitrary object or class and pass that as the parameter - so a framework would add much less value.
Two recommended mocking frameworks for .NET unit testing are Typemock Isolator and Rhino Mocks.
In the following link you can see an explanation from Typemock as to why you need a mocking framework for Unit Testing.