I'm in a dilemma about whether to write tests for methods that are a product of refactoring another method.
First question: consider this code.
class Puzzle(
    val foo: List<Pieces>,
    val bar: List<Pieces>
) {
    init {
        // code to validate foo
        // code to validate bar
    }
}
Here I'm validating parameters while constructing an object. This code is the result of TDD. But with TDD we write failing test -> passing test -> refactor, and during the refactoring step I moved the validation methods to a helper class, PuzzleHelper.
object PuzzleHelper {
    fun validateFoo() {
        ...
    }
    fun validateBar() {
        ...
    }
}
Do I still need to test validateFoo and validateBar in this case?
Second question:
class Puzzle(
    val foo: List<Pieces>,
    val bar: List<Pieces>
) {
    ...
    fun getPiece(inPosition: Position) {
        validatePosition()
        // return piece at position
    }
    fun removePiece(inPosition: Position) {
        validatePosition()
        // remove piece at position
    }
}
object PuzzleHelper {
    ...
    fun validatePosition() {
        ...
    }
}
Do I still need to write tests for getPiece and removePiece that involve position validation?
I really want to become fluent with TDD but don't know how to start. Now I'm finally diving in without worrying about what's ahead; all I want is product quality. Hope to hear your thoughts soon.
When we get to the refactoring stage of the Red -> Green -> Refactor cycle, we're not supposed to add any new behavior. This means all the code is already tested, so no new tests are required. You can easily verify this by changing the refactored code and watching a test fail. If no test fails, you added something you weren't supposed to.
In some cases, if the extracted code is reused in other places, it might make sense to transfer the tests to a test suite for the refactored code.
As for the second question, that depends on your design, as well as on details that aren't in your code. For example, I don't know what you'd like to do if validation fails. You'll have to add tests covering the validation-failure case of each method.
The one thing I would like to point out is that placing methods in a static object (class functions, global functions, whatever you want to call it) makes the code harder to test. If you'd like to test your class methods while ignoring validation (stubbing it to always pass), you won't be able to do that.
I prefer to make a collaborator that gets passed to the class as a constructor argument. Your class now takes a validator: Validator, and you can pass anything you want to it in the test: a stub, the real thing, a mock, etc.
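To make that concrete, here is a minimal Kotlin sketch of the idea; the Validator interface, the Position/Piece types, and the stub are invented for the illustration and are not part of the original code:

// Illustrative types; the real Puzzle has more going on.
data class Position(val x: Int, val y: Int)
data class Piece(val id: Int)

// Hypothetical collaborator interface, not part of the original code.
interface Validator {
    fun validatePosition(position: Position)
}

class Puzzle(
    private val pieces: Map<Position, Piece>,
    private val validator: Validator
) {
    fun getPiece(inPosition: Position): Piece {
        validator.validatePosition(inPosition)   // validation delegated to the collaborator
        return pieces.getValue(inPosition)       // return piece at position
    }
}

// In a test, a stub that always passes lets you exercise getPiece in isolation:
class AlwaysPassingValidator : Validator {
    override fun validatePosition(position: Position) { /* always passes */ }
}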
Do I still need to test validateFoo and validateBar in this case?
It depends.
Part of the point of TDD is that we should be able to iterate on the internal design; aka refactoring. That's the magic that allows us to start from a small investment in up front design and work out the rest as we go -- the fact that we can change things, and the tests evaluate the change without getting in the way.
That works really well when the required behaviors of your system are stable.
When the required behaviors of the system are not stable, when we have a lot of decisions that are in flux, when we know the required behaviors are going to change but we don't know which... having a single test that spans many unstable behaviors tends to make the test "brittle".
This was the bane of automated UI testing for a long time -- because testing a UI spans pretty much every decision at every layer of the system, tests were constantly in maintenance to eliminate cascades of false positives that arose in the face of otherwise insignificant behavior changes.
In that situation, you may want to start looking into ways to introduce bulkheads that prevent excessive damage when a requirement changes. We start writing tests that validate that the test subject behaves in the same way that some simpler oracle behaves, along with a test that the simpler oracle does the right thing.
This, too, is part of the feedback loop of TDD -- because tests that span many unstable behaviors are hard to maintain, we refactor towards designs that support testing behaviors at an isolated grain, and testing larger compositions in terms of their simpler elements.
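As a rough Kotlin sketch of the "simpler oracle" idea (all the names here are invented for the example): a fancier implementation is checked against a naive one that is easy to test directly, so a requirement change only breaks the small oracle tests plus one comparison test.

interface Sorter {
    fun sort(input: List<Int>): List<Int>
}

// The naive oracle: trivially correct, covered by its own small test suite.
class OracleSorter : Sorter {
    override fun sort(input: List<Int>) = input.sorted()
}

// Stands in for a cleverer implementation whose internals keep changing.
class CleverSorter : Sorter {
    override fun sort(input: List<Int>) = input.sortedDescending().reversed()
}

// The bulkhead test: the subject must agree with the oracle on sample inputs.
fun agreesWithOracle(subject: Sorter, oracle: Sorter, samples: List<List<Int>>): Boolean =
    samples.all { subject.sort(it) == oracle.sort(it) }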
Most of what I've read about mocks and stubs (test doubles) involves some form of injection of the DOC (depended-on component), either through the SUT method itself, the constructor, or setter methods. And injection that breaks boundaries, like InjectMock, is frowned upon as a regular test strategy. But what if you are building a class whose DOCs you do not want to expose? Is there a way to 'unit' test such a module? Without AOP? Is such a test not a real 'unit' test anymore? Is the resistance I'm feeling really a design smell, and should I expose those DOCs somehow?
For example, let's say I have the following class that I want to test (unit or otherwise):
public class RemoteRepository {
    Properties props = null;

    public RemoteRepository(Properties props) { this.props = props; }

    public Item export(String itemName) {
        JSch ssh = new JSch();
        ssh.setIdentity(props.get("keyfile"));
        ssh.connect();
        ssh.execute("export " + itemName + " " + props.get("exportFilename"));
        ...
    }
}
Here is a unit I'd like to write a unit test for, but I want to stub or mock out the JSch component. The objects I create inside the method, just to do what the method needs to accomplish, aren't exposed outside the method at all, so I cannot inject a stub to replace them. I could change the export method signature to accept the stub, or add a constructor that does, but that changes my design just to suit a test.
Although the unit will connect to a real server to do the export in prod, when just testing the unit I either want to stub the DOC out completely, or simulate it with a real DOC that is simple and controlled.
This latter approach is like using an in-memory db instead of a real one, in that it acts and behaves like the eventual db that will be used, but can be confined to just what is needed for the test (e.g. just the tables of interest, no heavy security, etc.). So I could set up some kind of test-double sshd in my test so that when the build runs the test, it has something to test against. This can be a lot of trouble to set up and maintain, however, and seems like overkill - sometimes trying to stub out a real DOC is harder than just using the real DOC somehow.
Am I stuck trying to set up a test framework that provides an sshd test double? Am I looking at this the wrong way? Do I just use AOP or mock-library methods that break the class scope boundaries?
To restate, the basic problem is that a lot of the time I want to test a method that has complex DOCs (i.e. ones that interact with other systems: network, db, etc.) and I don't want to change the design just to accommodate test-double injection. How do you approach testing in such a scenario?
My recommendation, based on personal experience, is to write integration tests where DOCs (Depended On Components) are not mocked.
However, if for whatever reason the team insists on having unit tests instead, you would have to either use a suitable mocking tool (AOP tools are capable of this, but not a good fit here), or change the design of the SUT and DOCs so that "weaker" mocking tools suffice.
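As a rough illustration of the "change the design" option, here is a Kotlin sketch in which the SSH interaction hides behind a hypothetical SshClient interface; the interface, its methods, and the simplified return type are invented for the example and are not the real JSch API:

import java.util.Properties

// Hypothetical abstraction over the SSH library; not the real JSch API.
interface SshClient {
    fun connect(keyFile: String)
    fun execute(command: String): String
}

class RemoteRepository(
    private val props: Properties,
    private val ssh: SshClient   // the DOC is now injected and therefore replaceable
) {
    fun export(itemName: String): String {
        ssh.connect(props.getProperty("keyfile"))
        return ssh.execute("export $itemName ${props.getProperty("exportFilename")}")
    }
}

// In a test, a fake SshClient records commands instead of talking to a server.
class FakeSshClient : SshClient {
    val commands = mutableListOf<String>()
    override fun connect(keyFile: String) { /* no real connection */ }
    override fun execute(command: String): String { commands += command; return "" }
}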
Background
I'm in the process of reworking and refactoring a huge codebase which was written with neither testability nor maintainability in mind. There is a lot of global/static state going on. A function needs a database connection, so it just conjures one up using a global static method: $conn = DatabaseManager::getConnection($connName);. Or it wants to load a file, so it does it using $fileContents = file_get_contents($hardCodedFilename);.
Much of this code does not have proper tests and has only ever been tested directly in production. So the first thing I intend to do is write unit tests, to ensure the functionality is correct after refactoring. Sadly, code like the examples above is barely unit testable, because none of the external dependencies (database connections, file handles, ...) can be properly mocked.
Abstraction
To work around this I have created very thin wrappers around, for example, the system functions, which can be used in places where non-mockable function calls were used before. (I'm giving these examples in PHP, but I assume they are applicable to any other OOP language as well. Also, this is a highly shortened example; in reality I am dealing with much larger classes.)
interface Time {
    /**
     * Returns the current time in seconds since the epoch.
     * @return int for example: 1380872620
     */
    public function current();
}

class SystemTime implements Time {
    public function current() {
        return time();
    }
}
These can be used in the code like so:
class TimeUser {
    /**
     * @var Time
     */
    private $time;

    /**
     * Prints out the current time.
     */
    public function tellsTime() {
        // before:
        echo time();
        // now:
        echo $this->time->current();
    }
}
Since the application only depends on the interface, I can replace it in a test with a mocked Time instance, which for example allows me to predefine the value returned by the next call to current().
Injection
So far, so basic. My actual question is how to get the proper instances into the classes that depend upon them. From my understanding of dependency injection, services are meant to be passed down by the application into the components that need them. Usually these services would be created in a main() method or at some other starting point and then strung along until they reach the components where they are needed.
This model likely works well when creating a new application from scratch, but for my situation it's less than ideal, since I want to move gradually to a better design. So I've come up with the following pattern, which automatically provides the old functionality while leaving me with the flexibility of substituting services.
class TimeUser {
    /**
     * @var Time
     */
    private $time;

    public function __construct(Time $time = null) {
        if ($time === null) {
            $time = new SystemTime();
        }
        $this->time = $time;
    }
}
A service can be passed into the constructor, allowing the service to be mocked in a test, yet during "regular" operation the class knows how to create its own collaborators, providing default functionality identical to what was there before.
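For comparison, in a language with default parameter values this "poor man's injection" collapses into a one-liner; a Kotlin sketch mirroring the PHP example (the Time/SystemTime/TimeUser names follow the example, the rest is invented):

interface Time {
    fun current(): Long
}

class SystemTime : Time {
    override fun current(): Long = System.currentTimeMillis() / 1000   // seconds since the epoch
}

// Defaults to the real implementation, but a test can pass any Time it likes.
class TimeUser(private val time: Time = SystemTime()) {
    fun tellsTime() {
        println(time.current())
    }
}

// In a test: TimeUser(object : Time { override fun current() = 1380872620L })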
Problem
I've been told that this approach is unclean and subverts the idea of dependency injection. I do understand that the true way would be to pass down dependencies, as outlined above, but I don't see anything wrong with this simpler approach. Keep in mind also that this is a huge system, where potentially hundreds of services would need to be created up front (a service locator would be an alternative, but for now I am trying to go in this other direction).
Can someone shed some light on this issue and provide some insight into what would be a better way to achieve the refactoring in my case?
I think you've made a good first step.
Last year I was at DutchPHP and there was a lecture about refactoring. The lecturer described 3 major steps for extracting a responsibility from a god class:
1. Extract the code to a private method (it should be a simple copy-paste since $this is the same)
2. Extract the code to a separate class and pull the dependency
3. Push the dependency
I think you are somewhere between the 1st and 2nd step. You have a backdoor for unit tests.
The next thing, according to the steps above, is to create some static factory (the lecturer named it ApplicationFactory) which will be used instead of creating the instance inside TimeUser.
ApplicationFactory is a kind of ServiceLocator pattern. This way you invert the dependency (in line with the SOLID principles).
If you are happy with that, you should stop passing the Time instance into the constructor and use the ServiceLocator only (without the backdoor for unit tests; you would stub the service locator instead).
If you are not, then you have to find all the places where TimeUser is instantiated and inject the Time implementation:
new TimeUser(ApplicationFactory::getTime());
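A rough Kotlin sketch of what such a factory might look like (the names and the overridable provider are invented for the illustration, not taken from the lecture):

// Repeating the Time types from the earlier sketch to keep this self-contained.
interface Time { fun current(): Long }
class SystemTime : Time { override fun current(): Long = System.currentTimeMillis() / 1000 }

// Hypothetical static factory / service locator.
object ApplicationFactory {
    // Production default; a test may swap this for a stubbed provider.
    var timeProvider: () -> Time = { SystemTime() }

    fun getTime(): Time = timeProvider()
}

// At the instantiation sites, mirroring the PHP line above:
// val user = TimeUser(ApplicationFactory.getTime())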
After some time your ApplicationFactory will become very big. Then you have to make a decision:
Split it into smaller factories
Use some dependency injection container (Symfony DI, AurynDI, or something like that)
Currently my team is doing something similar. We are extracting responsibilities into separate classes and injecting them. We have an ApplicationFactory, but we use it as a service locator at as high a level as possible, so the classes below get all their dependencies injected and don't know anything about the ApplicationFactory. Our application factory is big, and we are now preparing to replace it with Symfony DI.
You asked for a good mechanism to do this.
You've described some stages you might force the program to go through to accomplish this, but you are still apparently planning to do this by hand, at what is apparently a very high cost.
If you really want to get this done on a huge code base, you might consider automating the steps using a program transformation engine: http://en.wikipedia.org/wiki/Program_transformation
Such a tool can let you write explicit rules for modifying code. Done right, this can make code changes reliably. That doesn't minimize your need for testing, but can let you spend more time writing tests and less time hand-changing the code (erroneously).
All,
I'm trying to grasp all the outside-in TDD and BDD stuff and would like you to help me to get it.
Let's say I need to implement Config Parameters functionality working as follows:
there are parameters in a file and in a database
both groups have to be merged into one parameter set
parameters from the database should override those from the file
Now I'd like to implement this with an outside-in approach, and I'm stuck right at the beginning. Hope you can help me get going.
My questions are:
What test should I start with? I just have something like the following:
class ConfigurationAssemblerTest {
    @Test
    public void itShouldResultWithEmptyConfigurationWhenBothSourcesAreEmpty() {
        ConfigurationAssembler assembler = new ConfigurationAssembler();
        // what to put here ?
        Configuration config = assembler.getConfiguration();
        assertTrue(config.isEmpty());
    }
}
I don't know yet what dependencies I'll end up with. I don't know how I'm going to write all that stuff yet, and so on.
What should I put in this test to make it valid? Should I mock something? If so, how do I define those dependencies?
If you could show me the path to take with this (some plan, some test skeletons, what to do and in what order), it'd be super cool. I know it's a lot to write, so maybe you can point me to some resources? All the resources about the outside-in approach I've found cover simple cases with no dependencies, etc.
And two questions about the mocking approach:
if mocking is about interactions and their verification, does that mean there should be no state assertions in such tests (only mock verifications)?
if we replace something that doesn't exist yet with a mock just for the test, do we replace it later with the real version?
Thanks in advance.
Ok, that's indeed a lot of stuff. Let's start from the end:
Mocking is not only about 'interactions and their verification'; that would be only one half of the story. In fact, you're using it in two different ways:
Checking whether a certain call was made, and possibly also checking the arguments of the call (this is the 'interactions and verification' part).
Using mocks to replace dependencies of the class-under-test (CUT), possibly setting up return values on the mock objects as required. Here, you use mock objects to isolate the CUT from the rest of the system (so that you can handle the CUT as an isolated 'unit', which sort of runs in a sandbox).
I'd call the first form dynamic or 'interaction-based' unit testing; it uses the mocking framework's call-verification methods. The second is the more traditional, 'static' unit testing, which asserts a fact about state.
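A small Kotlin sketch of the two forms side by side, using a hand-rolled test double rather than any particular mocking framework (all the names are invented for the illustration):

// Collaborator and subject invented for the illustration.
interface Notifier {
    fun send(message: String)
}

class OrderService(private val notifier: Notifier) {
    fun placeOrder(item: String): String {
        notifier.send("ordered: $item")
        return "order-id-for-$item"
    }
}

// Hand-rolled test double that records interactions.
class RecordingNotifier : Notifier {
    val sent = mutableListOf<String>()
    override fun send(message: String) { sent += message }
}

fun main() {
    val notifier = RecordingNotifier()
    val service = OrderService(notifier)
    val orderId = service.placeOrder("book")

    // Interaction-based check: was the collaborator called as expected?
    check(notifier.sent == listOf("ordered: book"))

    // State/fact-based check: what did the subject itself return?
    check(orderId == "order-id-for-book")
}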
You shouldn't ever need to 'replace something that doesn't exist yet' (apart from the fact that this is, logically speaking, completely impossible). If you feel like you need to do this, it is a clear indication that you're trying to take the second step before the first.
Regarding your notion of an 'outside-in approach': to be honest, I've never heard of this before, so it doesn't seem to be a very prominent concept - and obviously not a very helpful one, because it seems to confuse things more than it clarifies them (at least for the moment).
Now on to your first question (What test should I start with?):
First things first - you need some mechanism to read the configuration values from the file and the database, and this functionality should be encapsulated in separate helper classes (you need, among other things, a clean separation of concerns to do TDD effectively - this usually is totally underemphasized when introducing TDD/BDD). I'd suggest an interface (e.g. IConfigurationReader) which has two implementations (one for the file stuff and one for the database, e.g. FileConfigurationReader and DatabaseConfigurationReader). In TDD (not necessarily with a BDD approach) you would also have corresponding test fixtures. These fixtures would cover test cases like 'What happens if the underlying data store contains no/invalid/valid/other special values?'. This is what I'd advise you to start with.
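A sketch of the suggested shape, written in Kotlin here rather than Java; the names follow the answer's suggestions, and the map-based return type is an assumption made for the example:

interface IConfigurationReader {
    fun readConfigValues(): Map<String, String>
}

class FileConfigurationReader(private val path: String) : IConfigurationReader {
    override fun readConfigValues(): Map<String, String> {
        // read and parse the file; details elided
        TODO("parse the file at $path")
    }
}

class DatabaseConfigurationReader(/* connection details */) : IConfigurationReader {
    override fun readConfigValues(): Map<String, String> {
        // query the configuration table; details elided
        TODO("query the database")
    }
}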
Only then - with the reading mechanism in operation and your ConfigurationAssembler class having the necessary dependencies - would you start to write tests for, and implement, the ConfigurationAssembler class. Your test could then look like this (because I'm a C#/.NET guy, I don't know the appropriate Java tools, so I'm using pseudo-code here):
class ConfigurationAssemblerTest {
    @Test
    public void itShouldResultWithEmptyConfigurationWhenBothSourcesAreEmpty() {
        IConfigurationReader fileConfigMock = new [Mock of FileConfigurationReader];
        fileConfigMock.[WhenAskedForConfigValues].[ReturnEmpty];
        IConfigurationReader dbConfigMock = new [Mock of DatabaseConfigurationReader];
        dbConfigMock.[WhenAskedForConfigValues].[ReturnEmpty];

        ConfigurationAssembler assembler = new ConfigurationAssembler(fileConfigMock, dbConfigMock);

        Configuration config = assembler.getConfiguration();
        assertTrue(config.isEmpty());
    }
}
Two things are important here:
The two reader objects are injected into the ConfigurationAssembler from outside via its constructor - this technique is called Dependency Injection. It is a very helpful and important architectural principle, which generally leads to a better and cleaner architecture (and greatly helps in unit testing, especially when using mock objects).
The test now asserts exactly what it states: the ConfigurationAssembler returns ('assembles') an empty config when the underlying reading mechanisms, for their part, return an empty result set. And because we're using mock objects to provide the config values, the test runs in complete isolation. We can be sure that we're testing only the correct functioning of the ConfigurationAssembler class (namely, its handling of empty values) and nothing else.
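To connect this back to the original requirements (merge both sources, with database values overriding file values), the production code that such tests drive towards might look roughly like this Kotlin sketch; the map-based configuration and the reader interface are assumptions carried over from the earlier sketch, not the answer's actual design:

// Same reader interface as sketched earlier, repeated to keep this self-contained.
interface IConfigurationReader {
    fun readConfigValues(): Map<String, String>
}

class ConfigurationAssembler(
    private val fileReader: IConfigurationReader,
    private val dbReader: IConfigurationReader
) {
    fun getConfiguration(): Map<String, String> {
        val fromFile = fileReader.readConfigValues()
        val fromDb = dbReader.readConfigValues()
        // On key collisions the right-hand operand wins, so database values override file values.
        return fromFile + fromDb
    }
}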
Oh, and maybe it's easier for you to start with TDD instead of BDD, because BDD is only a subset of TDD and builds on top of the concepts of TDD. So you can only do (and understand) BDD effectively when you know TDD.
HTH!
I'm hoping someone can help me see the real value in these tests, because I've read a lot about them and even worked with them on a recent project, but I never really saw the value. Sure, I can understand why they're written and see how they could sort of be useful, but it just seems that there's little ROI on these types of tests. Here's an example of a class in our code base:
class ServiceObject : IServiceObject
{
    Dependency _dependency;

    ServiceObject(Dependency dependency)
    {
        this._dependency = dependency;
    }

    bool SomeMethod()
    {
        return _dependency.SomePublicMethod();
    }
}
Then in our behavioral tests we would write a test like so (pseudo-code):
void ServiceObject_SomeMethod_Uses_Dependency_SomePublicMethod()
{
    create mock of IServiceObject;
    stub out dependency object
    add expectation of call to dependency.SomePublicMethod for IServiceObject.SomeMethod
    call mockserviceobject.SomeMethod
    check whether expectation was satisfied
}
Obviously there are some details missing but if you are familiar with this type of testing you will understand.
So my question really derives from the fact that I can't see how it's valuable to know that my ServiceObject calls into the dependency's method. I understand the reasoning behind it: you want to make sure that the logic of the method is hitting the parts it's supposed to. But what I can't see is how this is a sustainable testing pattern.
I wrote the logic and know how the code should work, so why would it ever change after I test it once to make sure it's working? You could say that in a team environment you want to make sure someone doesn't come along and change the code so that the dependency is accidentally skipped, and thus you'd want them to be aware of it. But what if it was skipped for a valid reason? Then that whole test, and maybe others, would have to be scrapped.
Anyway, I'm just hoping someone can turn the light on as to the true potential of these types of tests.
The objective is to test the class in isolation, to truly unit test it: verify that it performs exactly its responsibilities.
In this case the code is rather trivial. The value of this kind of testing becomes clearer when there is conditionality in the processing path, parameters are passed to the dependency, and processing is performed on the results from the dependency.
For example, suppose instead a method was like this:
bool someMethod(int paramX, int paramY) {
    if ((paramX / paramY) > 5) {
        return dependency.doOneThing(paramX);
    } else {
        return dependency.doSomethingElse(paramY);
    }
}
Now we have quite a few tests to write, and I think the value becomes more obvious. Especially when we write a test with paramY set to zero.
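For illustration, here is a Kotlin sketch of those tests, using a hand-rolled recording fake instead of a mocking framework; the Subject/Dependency names are invented for the example, and the paramY == 0 expectation simply documents what the code as written would do:

// Hand-rolled fake standing in for a mocking framework.
interface Dependency {
    fun doOneThing(x: Int): Boolean
    fun doSomethingElse(y: Int): Boolean
}

class RecordingDependency : Dependency {
    val calls = mutableListOf<String>()
    override fun doOneThing(x: Int): Boolean { calls += "doOneThing($x)"; return true }
    override fun doSomethingElse(y: Int): Boolean { calls += "doSomethingElse($y)"; return true }
}

class Subject(private val dependency: Dependency) {
    fun someMethod(paramX: Int, paramY: Int): Boolean =
        if (paramX / paramY > 5) dependency.doOneThing(paramX)
        else dependency.doSomethingElse(paramY)
}

fun main() {
    // Branch 1: a ratio above 5 routes to doOneThing with paramX.
    val dep1 = RecordingDependency()
    Subject(dep1).someMethod(60, 10)
    check(dep1.calls == listOf("doOneThing(60)"))

    // Branch 2: a ratio of 5 or below routes to doSomethingElse with paramY.
    val dep2 = RecordingDependency()
    Subject(dep2).someMethod(10, 10)
    check(dep2.calls == listOf("doSomethingElse(10)"))

    // paramY == 0: today this throws; a test forces that decision to be made explicitly.
    try {
        Subject(RecordingDependency()).someMethod(1, 0)
        check(false) { "expected an ArithmeticException" }
    } catch (e: ArithmeticException) {
        // expected with the current implementation
    }
}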
I decided to add unit tests to an existing project (quite a big one).
I am using "google toolbox for mac" for the various types of STAssert..., and the OCMock framework.
But I think I'm testing it wrong. For example, I have a public function saveData, which doesn't return anything and only changes the internal state of the object. Should I test it? Due to the encapsulation principle, I shouldn't have to worry much about the object's implementation, and I mustn't depend on private variables (because they can change or be deleted in the future).
@implementation Foo

- (void)saveData {
    internalData_ = 88;
}
In the real project this saveData function is 100 lines long and changes a lot of the class's private variables.
So, should I test it or not? I have little previous experience with unit testing and cannot make the decision on my own.
Does the internal state that gets changed affect any later calls on that object? If so, you should include it in a unit test like:
Test a()
Do saveData()
Test a() again
Even if not, it might be a good idea to unit test it. Not to determine whether other code will break by using this method, but to automatically test that the method is implemented correctly. Even though the method doesn't return anything, it probably still has some kind of contract ("If I call it, this must happen"), and you should check whether what should have happened actually happened (e.g. a line added to a log file, or something).
Now, how to check that when the method doesn't return anything is another question entirely. Ironically enough, that's an implementation detail of the unit test.
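A minimal Kotlin sketch of that "test a(), saveData(), test a() again" pattern; the class and its describe() method are invented for the example (the original code is Objective-C):

class Foo {
    private var internalData = 0           // private state, not asserted on directly

    fun saveData() { internalData = 88 }   // mutates internal state only

    fun describe(): String = "data=$internalData"   // public behavior that exposes the effect
}

fun main() {
    val foo = Foo()
    val before = foo.describe()   // "test a()"
    foo.saveData()                // "do saveData()"
    val after = foo.describe()    // "test a() again"
    check(before == "data=0" && after == "data=88")
}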