Don't understand the purpose of behavioral tests

I'm hoping someone can help me see the real value in these tests, because I've read a lot about them and even worked with them on a recent project, but I never really saw their value. Sure, I can understand why they are written and see how they could sort of be useful, but it just seems that there is little ROI on these types of tests. Here is an example of a class in our code base:
class ServiceObject : IServiceObject
{
    private Dependency _dependency;

    public ServiceObject(Dependency dependency)
    {
        this._dependency = dependency;
    }

    public bool SomeMethod()
    {
        return _dependency.SomePublicMethod();
    }
}
Then in our behavioral tests we would write a test like so (pseudo code):
void ServiceObject_SomeMethod_Uses_Dependency_SomePublicMethod()
{
    create a ServiceObject with a mocked Dependency
    add expectation that Dependency.SomePublicMethod is called when SomeMethod runs
    call serviceObject.SomeMethod
    check whether the expectation was satisfied
}
Obviously there are some details missing but if you are familiar with this type of testing you will understand.
So my question really derives from the fact that I can't see how it is valuable to know that my ServiceObject calls into the dependency method. I understand the reasoning behind it, because you want to make sure that the logic of the method is hitting the parts that it is supposed to. But what I can't see is how this is a sustainable testing pattern.
I wrote the logic and know how the code should work, so why would it ever change after I test it once to make sure that it is working? You could say that in a team environment you want to make sure that someone doesn't come along and change the code so that the dependency is accidentally skipped, and thus would want them to be made aware of it. But what if it was skipped for a valid reason? Then that whole test, and maybe others, would all have to be scrapped.
Anyway, I am just hoping for someone to turn the light on as to the true potential of these types of tests.

The objective is to test the class in isolation, to truly unit test it: verify that it performs exactly its responsibilities.
In this case the code is rather trivial. The value of this kind of testing becomes clearer when there is conditionality in the processing path, parameters are passed to the dependency, and processing is performed on the results from the dependency.
For example, suppose instead a method was like this:
bool someMethod(int paramX, int paramY) {
    if ((paramX / paramY) > 5) {
        return dependency.doOneThing(paramX);
    } else {
        return dependency.doSomethingElse(paramY);
    }
}
Now we have quite a few tests to write, and I think the value becomes more obvious. Especially when we write a test with paramY set to zero.
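To make that concrete, here is a rough sketch of what such behavioral tests might look like using Java and Mockito (treating the example above as Java; with a C# library such as Moq the shape is the same). The Dependency type, its method names, and a ServiceObject constructor that accepts the dependency are assumptions carried over from the examples, not a real API.

import static org.mockito.Mockito.*;
import org.junit.Test;

public class ServiceObjectTest {

    @Test
    public void ratioAboveFiveUsesDoOneThing() {
        Dependency dependency = mock(Dependency.class);      // mocked collaborator
        ServiceObject service = new ServiceObject(dependency);

        service.someMethod(60, 10);                           // 60 / 10 = 6, first branch

        verify(dependency).doOneThing(60);                    // the interaction we expect
        verify(dependency, never()).doSomethingElse(anyInt());
    }

    @Test
    public void ratioOfFiveOrLessUsesDoSomethingElse() {
        Dependency dependency = mock(Dependency.class);
        ServiceObject service = new ServiceObject(dependency);

        service.someMethod(10, 10);                           // 10 / 10 = 1, else branch

        verify(dependency).doSomethingElse(10);
    }

    @Test(expected = ArithmeticException.class)
    public void zeroDivisorIsNotHandledYet() {
        Dependency dependency = mock(Dependency.class);

        new ServiceObject(dependency).someMethod(10, 0);      // documents the current behavior
    }
}

Each branch, and the paramY-set-to-zero edge case, now has its own executable specification; if someone later removes the call into the dependency for a valid reason, these are exactly the tests they would revisit and update.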


DDD reusable functionality in an Entity/Aggregate

I have the following design in DDD:
a Post aggregate, with Body: the HTML of the post
a Banner entity, with Html: the HTML of the banner
The Banner entity belongs to the Post aggregate, so I want to create a method BodyWithBanners in the Post aggregate.
The point of this method is to search the HTML of Post.Body and insert the HTML of the Banner.
So far, so good.
However, I intend to reuse this functionality in the abstract: "insert some HTML inside another HTML". So I'm creating a different class for that: BannerReplacer.
Here comes the problem: how should I invoke this new class?
Just create an instance inside the Post.BodyWithBanners method (breaking dependency injection)
Pass the BannerReplacer in the constructor of the Post aggregate (this can be a nightmare when creating Post instances)
Pass the BannerReplacer to the BodyWithBanners method (which means every client using Post must handle the BannerReplacer)
I have chosen the first option for now, but I don't feel really comfortable with it; I believe there must be a better way of doing this.
Much of the time, the first option is fine -- so you should practice being comfortable with it. That mostly means thinking more about what dependency injection is for, and having a clear picture in your mind for whether or not those forces are at play here.
If Banner is an entity, in the domain-driven-design sense, then it is probably something analogous to an in memory state machine. It's got a data structure that it manages, and some functions for changing that data structure, or answering interesting questions about that data structure, but it doesn't have I/O, database, network etc concerns.
That in turn suggests that you can run it the same way in all contexts - you don't need a bunch of substitute implementations to make it testable. You just instantiate one and call its methods.
If it runs the same way in all contexts, then it doesn't need configurable behavior. If you don't need to be able to configure the behavior, then you don't need dependency injection (because all copies of this entity will use (copies of) the same dependencies).
When you do have a configurable behavior, then the analysis is going to need to look at scope. If you need to be able to change that behavior from one invocation to the next, then the caller is going to need to know about it. If the behavior changes less frequently than that, then you can start looking into whether "constructor injection" makes sense.
You know that you intend to use a single BannerReplacer for a given method invocation, so you can immediately start with a method that looks like:
class Banner {
    void doTheThing(arg, bannerReplacer) {
        /* do the bannerReplacer thing */
    }
}
Note that this signature has no dependency at all on the lifetime of the bannerReplacer. More particularly, the BannerReplacer might have a longer lifetime than Banner, or a shorter one. We only care that the lifetime is longer than the doTheThing method.
class Banner {
    void doTheThing(arg) {
        this.doTheThing(arg, new BannerReplacer())
    }
    // ...
}
Here, the caller doesn't need to know about BannerReplacer at all; we'll use a new copy of the default implementation every time. Callers that care which implementation is used can pass in their own.
class Banner {
    bannerReplacer = new BannerReplacer()

    void doTheThing(arg) {
        this.doTheThing(arg, this.bannerReplacer)
    }
    // ...
}
Same idea as before; we're just using an instance of the BannerReplacer with a longer lifetime.
class Banner {
    Banner() {
        this(new BannerReplacer())
    }

    Banner(bannerReplacer) {
        this.bannerReplacer = bannerReplacer;
    }

    void doTheThing(arg) {
        this.doTheThing(arg, this.bannerReplacer)
    }
    // ...
}
Same idea as before, but now we are allowing the "injection" of an implementation that can outlive the given instance of Banner, while still providing a default.
In the long term, the comfort comes from doing the analysis to understand the requirements of the current problem, so that you can choose the appropriate tool.

TDD Unit Test sub-methods

I'm in a dilemma about whether to write tests for methods that are a product of refactoring another method.
First question: consider this snippet.
class Puzzle(
    val foo: List<Pieces>,
    val bar: List<Pieces>
) {
    init {
        // code to validate foo
        // code to validate bar
    }
}
Here I'm validating parameters while constructing an object. This code is the result of TDD. But with TDD we write failing test -> passing test -> refactor, and while refactoring I transferred the validator methods to a helper class, PuzzleHelper.
object PuzzleHelper {
    fun validateFoo() {
        ...
    }
    fun validateBar() {
        ...
    }
}
Do I still need to test validateFoo and validateBar in this case?
Second question
class Puzzle(
    val foo: List<Pieces>,
    val bar: List<Pieces>
) {
    ...
    fun getPiece(inPosition: Position) {
        validatePosition()
        // return piece at position
    }
    fun removePiece(inPosition: Position) {
        validatePosition()
        // remove piece at position
    }
}
object PuzzleHelper {
    ...
    fun validatePosition() {
        ...
    }
}
Do I still need to write tests for getPiece and removePiece that involve position validation?
I really want to become fluent in TDD but don't know how to start. Now I've finally dived in and don't care what's ahead; all I want is product quality. Hope you can enlighten me.
When we get to the refactoring stage of the Red -> Green -> Refactor cycle, we're not supposed to add any new behavior. This means that all the code is already tested, so no new tests are required. You can easily validate that you've done this by changing the refactored code and watching it fail a test. If it doesn't, you added something you weren't supposed to.
In some cases, if the extracted code is reused in other places, it might make sense to transfer the tests to a test suite for the refactored code.
As for the second question, that depends on your design, as well as on some things that aren't in your code. For example, I don't know what you'd like to happen if validation fails. You'll have to add different tests for the cases where validation fails in each method.
The one thing I would like to point out is that placing methods in a static object (class functions, global functions, whatever you want to call them) makes the code harder to test. If you'd like to test your class methods while ignoring validation (stubbing it to always pass), you won't be able to do that.
I prefer to make a collaborator that gets passed to the class as a constructor argument. Your class now gets a validator: Validator, and in a test you can pass anything you want to it: a stub, the real thing, a mock, etc.
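A minimal sketch of that idea, in Java for brevity (the shape is identical in Kotlin); Validator, Position, Piece and the index() accessor are stand-ins for whatever your real types look like:

import java.util.List;

public class Puzzle {
    private final List<Piece> pieces;
    private final Validator validator;   // injected collaborator

    public Puzzle(List<Piece> pieces, Validator validator) {
        this.pieces = pieces;
        this.validator = validator;
    }

    public Piece getPiece(Position position) {
        validator.validatePosition(position);   // can be stubbed to always pass in a test
        return pieces.get(position.index());    // index() is a made-up accessor
    }
}

Production code passes the real validator; a test can pass mock(Validator.class) or a hand-rolled stub, so methods like getPiece can be exercised without re-testing validation.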
Do I still need to test validateFoo and validateBar in this case?
It depends.
Part of the point of TDD is that we should be able to iterate on the internal design; aka refactoring. That's the magic that allows us to start from a small investment in up front design and work out the rest as we go -- the fact that we can change things, and the tests evaluate the change without getting in the way.
That works really well when the required behaviors of your system are stable.
When the required behaviors of the system are not stable, when we have a lot of decisions that are in flux, when we know the required behaviors are going to change but we don't know which... having a single test that spans many unstable behaviors tends to make the test "brittle".
This was the bane of automated UI testing for a long time -- because testing a UI spans pretty much every decision at every layer of the system, tests were constantly in maintenance to eliminate cascades of false positives that arose in the face of otherwise insignificant behavior changes.
In that situation, you may want to start looking into ways to introduce bulkheads that prevent excessive damage when a requirement changes. We start writing tests that validate that the test subject behaves in the same way that some simpler oracle behaves, along with a test that the simpler oracle does the right thing.
This, too, is part of the feedback loop of TDD: because a test that spans many unstable behaviors is hard to maintain, we refactor towards designs that support testing behaviors at an isolated grain, and testing larger compositions in terms of their simpler elements.

What's a good mechanism to move from global state to patterns like dependency injection?

Background
I'm in the process of reworking and refactoring a huge codebase which was written with neither testability nor maintainability in mind. There is a lot of global/static state going on. A function needs a database connection, so it just conjures one up using a global static method: $conn = DatabaseManager::getConnection($connName);. Or it wants to load a file, so it does it using $fileContents = file_get_contents($hardCodedFilename);.
Much of this code does not have proper tests and has only ever been tested directly in production. So the first thing I am intending on doing is write unit tests, to ensure the functionality is correct after refactoring. Now sadly code like the examples above is barely unit testable, because none of the external dependencies (database connections, file handles, ...) can be properly mocked.
Abstraction
To work around this I have created very thin wrappers around, for example, the system functions, which can be used in places where non-mockable function calls were used before. (I'm giving these examples in PHP, but I assume they apply to any other OOP language as well. Also, this is a highly shortened example; in reality I am dealing with much larger classes.)
interface Time {
    /**
     * Returns the current time in seconds since the epoch.
     * @return int for example: 1380872620
     */
    public function current();
}

class SystemTime implements Time {
    public function current() {
        return time();
    }
}
These can be used in the code like so:
class TimeUser {
    /**
     * @var Time
     */
    private $time;

    /**
     * Prints out the current time.
     */
    public function tellsTime() {
        // before:
        echo time();
        // now:
        echo $this->time->current();
    }
}
Since the application only depends on the interface, I can replace it in a test with a mocked Time instance, which for example lets me predefine the value returned by the next call to current().
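For illustration, a sketch of such a replacement in Java (the PHP version is analogous, as noted above): a hand-rolled stub that always returns a fixed timestamp makes the test deterministic.

// Test-only stub of the Time interface; the timestamp is arbitrary.
class FixedTime implements Time {
    private final int value;

    FixedTime(int value) {
        this.value = value;
    }

    public int current() {
        return value;
    }
}

// In a test, pass new FixedTime(1380872620) wherever a Time is expected
// and assert against that known value.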
Injection
So far, so basic. My actual question is how to get the proper instances into the classes that depend upon them. From my understanding of dependency injection, services are meant to be passed down by the application into the components that need them. Usually these services would be created in a main() method or at some other starting point and then strung along until they reach the components where they are needed.
This model likely works well when creating a new application from scratch, but for my situation it's less than ideal, since I want to move gradually to a better design. So I've come up with the following pattern, which automatically provides the old functionality while leaving me with the flexibility of substituting services.
class TimeUser {
    /**
     * @var Time
     */
    private $time;

    public function __construct(Time $time = null) {
        if ($time === null) {
            $time = new SystemTime();
        }
        $this->time = $time;
    }
}
A service can be passed into the constructor, allowing for mocking of the service in a test, yet during "regular" operation, the class knows how to create its own collaborators, providing a default functionality, identical to what was needed before.
Problem
I've been told that this approach is unclean and subverts the idea of dependency injection. I do understand that the true way would be to pass down dependencies, like outlined above, but I don't see anything wrong with this simpler approach. Keep in mind also that this is a huge system, where potentially hundreds of services would need to be created up front (Service locator would be an alternative, but for now I am trying to go this other direction).
Can someone shed some light onto this issue and provide some insight into what would be a better way to achieve a refactoring in my case?
I think you've made a good first step.
Last year I was at DutchPHP and there was a lecture about refactoring; the lecturer described 3 major steps for extracting a responsibility from a god class:
Extract the code to a private method (it should be a simple copy-paste, since $this is the same)
Extract the code to a separate class and pull the dependency
Push the dependency
I think you are somewhere between the 1st and 2nd step. You have a backdoor for unit tests.
The next thing, according to the above algorithm, is to create a static factory (the lecturer named it ApplicationFactory) which will be used instead of creating the instance inside TimeUser.
ApplicationFactory is a kind of Service Locator pattern. This way you invert the dependency (in line with the SOLID principles).
If you are happy with that, you should remove passing the Time instance into the constructor and use the service locator only (without the backdoor for unit tests; you stub the service locator instead).
If you are not, then you have to find all the places where TimeUser is instantiated and inject the Time implementation:
new TimeUser(ApplicationFactory::getTime());
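To sketch what such an ApplicationFactory might look like (shown in Java for consistency with the other examples here; the PHP version is analogous, and the names follow the answer rather than any real framework):

public final class ApplicationFactory {

    private static Time time;

    // Production wiring: lazily builds the default implementation.
    public static Time getTime() {
        if (time == null) {
            time = new SystemTime();
        }
        return time;
    }

    // One possible seam for tests: install a stub instead of the real SystemTime.
    static void setTime(Time replacement) {
        time = replacement;
    }

    private ApplicationFactory() {
    }
}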
After some time your ApplicationFactory will become very big. Then you have to make a decision:
Split it into smaller factories
Use a dependency injection container (Symfony DI, AurynDI or something like that)
Currently my team is doing something similar. We are extracting responsibilities to separate classes and injecting them. We have an ApplicationFactory, but we use it as a service locator at as high a level as possible, so the classes below get all their dependencies injected and don't know anything about the ApplicationFactory. Our application factory is big, and now we are preparing to replace it with Symfony DI.
You asked for a good mechanism to do this.
You've described some stages you might force the program to go through to accomplish this, but you are still apparently planning to do this by hand, at apparently a very high cost.
If you really want to get this done on a huge code base, you might consider automating the steps using a program transformation engine: http://en.wikipedia.org/wiki/Program_transformation
Such a tool can let you write explicit rules for modifying code. Done right, this can make code changes reliably. That doesn't minimize your need for testing, but can let you spend more time writing tests and less time hand-changing the code (erroneously).

BDD and outside-in approach, how to start with testing

All,
I'm trying to grasp all the outside-in TDD and BDD stuff and would like you to help me to get it.
Let's say I need to implement Config Parameters functionality working as follows:
there are parameters in file and in database
both groups have to be merged into one parameters set
parameters from database should override those from files
Now I'd like to implement this with an outside-in approach, and I'm stuck right at the beginning. Hope you can help me get going.
My questions are:
What test should I start with? I just have something like this:
class ConfigurationAssemblerTest {
    @Test
    public void itShouldResultWithEmptyConfigurationWhenBothSourcesAreEmpty() {
        ConfigurationAssembler assembler = new ConfigurationAssembler();
        // what to put here ?
        Configuration config = assembler.getConfiguration();
        assertTrue(config.isEmpty());
    }
}
I don't know yet what dependencies I'll end up with. I don't know how I'm going to write all that stuff yet, and so on.
What should I put in this test to make it valid? Should I mock something? If so, how do I define those dependencies?
If you could show me the path to take with this, write some plan, some test skeletons, what to do and in what order, that would be super-cool. I know it's a lot of writing, so maybe you can point me to some resources? All the resources about the outside-in approach I've found were about simple cases with no dependencies, etc.
And two questions about the mocking approach:
if mocking is about interactions and their verification, does that mean there should be no state assertions in such tests (only mock verifications)?
if we replace something that doesn't exist yet with a mock just for the test, do we replace it later with the real version?
Thanks in advance.
Ok, that's indeed a lot of stuff. Let's start from the end:
Mocking is not only about 'interactions and their verification', this would be only one half of the story. In fact, you're using it in two different ways:
Checking if a certain call was made, and possibly also checking the arguments of the call (this is the 'interactions and verification' part).
Using mocks to replace dependencies of the class under test (CUT), possibly setting up return values on the mock objects as required. Here, you use mock objects to isolate the CUT from the rest of the system (so that you can handle the CUT as an isolated 'unit', which sort of runs in a sandbox).
I'd call the first form dynamic or 'interaction-based' unit testing; it uses the mocking framework's call-verification methods. The second one is more traditional, 'static' unit testing which asserts a fact.
You shouldn't ever have the need to 'replace something that doesn't exist yet' (apart from the fact that this is, logically speaking, impossible). If you feel like you need to do this, then this is a clear indication that you're trying to take the second step before the first.
Regarding your notion of 'outside-in approach': To be honest, I've never heard of this before, so it doesn't seem to be a very prominent concept - and obviously not a very helpful one, because it seems to confuse things more than clarifying them (at least for the moment).
Now onto your first question: (What test should I start with?):
First things first: you need some mechanism to read the configuration values from the file and the database, and this functionality should be encapsulated in separate helper classes (you need, among other things, a clean Separation of Concerns to do TDD effectively; this is usually totally underemphasized when introducing TDD/BDD). I'd suggest an interface (e.g. IConfigurationReader) which has two implementations (one for the file stuff and one for the database, e.g. FileConfigurationReader and DatabaseConfigurationReader). In TDD (not necessarily with a BDD approach) you would also have corresponding test fixtures. These fixtures would cover test cases like 'What happens if the underlying data store contains no/invalid/valid/other special values?'. This is what I'd advise you to start with.
Only then, with the reading mechanism in operation and your ConfigurationAssembler class having the necessary dependencies, would you start to write tests for and implement the ConfigurationAssembler class. Your test could then look like this (because I'm a C#/.NET guy, I don't know the appropriate Java tools, so I'm using pseudo-code here):
class ConfigurationAssemblerTest {
    @Test
    public void itShouldResultWithEmptyConfigurationWhenBothSourcesAreEmpty() {
        IConfigurationReader fileConfigMock = new [Mock of FileConfigurationReader];
        fileConfigMock.[WhenAskedForConfigValues].[ReturnEmpty];

        IConfigurationReader dbConfigMock = new [Mock of DatabaseConfigurationReader];
        dbConfigMock.[WhenAskedForConfigValues].[ReturnEmpty];

        ConfigurationAssembler assembler = new ConfigurationAssembler(fileConfigMock, dbConfigMock);

        Configuration config = assembler.getConfiguration();
        assertTrue(config.isEmpty());
    }
}
Two things are important here:
The two reader objects are injected into the ConfigurationAssembler from outside via its constructor; this technique is called Dependency Injection. It is a very helpful and important architectural principle, which generally leads to a better and cleaner architecture (and greatly helps in unit testing, especially when using mock objects).
The test now asserts exactly what it states: The ConfigurationAssembler returns ('assembles') an empty config when the underlying reading mechanisms on their part return an empty result set. And because we're using mock objects to provide the config values, the test runs in complete isolation. We can be sure that we're testing only the correct functioning of the ConfigurationAssembler class (its handling of empty values, namely), and nothing else.
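For reference, one way the pseudo-code above might translate to real Java using Mockito (a sketch only; the readValues() method name and its Map return type are assumptions about the IConfigurationReader interface, not part of the answer):

import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.*;
import java.util.Collections;
import org.junit.Test;

public class ConfigurationAssemblerTest {

    @Test
    public void itShouldResultWithEmptyConfigurationWhenBothSourcesAreEmpty() {
        // both readers report an empty result set
        IConfigurationReader fileConfig = mock(IConfigurationReader.class);
        when(fileConfig.readValues()).thenReturn(Collections.emptyMap());

        IConfigurationReader dbConfig = mock(IConfigurationReader.class);
        when(dbConfig.readValues()).thenReturn(Collections.emptyMap());

        ConfigurationAssembler assembler = new ConfigurationAssembler(fileConfig, dbConfig);

        Configuration config = assembler.getConfiguration();

        assertTrue(config.isEmpty());
    }
}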
Oh, and maybe it's easier for you to start with TDD instead of BDD, because BDD is only a subset of TDD and builds on top of the concepts of TDD. So you can only do (and understand) BDD effectively when you know TDD.
HTH!

Should I test a public class function that changes only the internal state of the object?

I decided to add unit tests to existing project (quite big one).
I am using "google toolbox for mac" for the various STAssert... macros, plus the OCMock framework.
But I think I'm testing wrong. For example, I have a public function saveData, which doesn't return anything and only changes the internal state of the object. Should I test it? Due to the encapsulation principle, I shouldn't have to worry much about the object's implementation, and I mustn't depend on private variables (because they can change or be deleted in the future).
@implementation Foo

- (void)saveData {
    internalData_ = 88;
}

@end
In the real project this saveData function is 100 lines long and changes a lot of the private variables of the class.
So, should I test it or not? I have little previous experience with unit testing and cannot make the decision on my own.
Does the internal state that gets changed affect any later calls on that object? If so, you should include it in a unit test like
Test a()
Do saveData()
Test a() again
Even if not, it might be a good idea to unit test it. Not for determining whether other code will break by using this method, but for automatically testing the correct implementation of the method. Even though the method doesn't return anything, it probably still has some kind of contract ("If I call it, this must happen") and you should check if what should've happened, happened (e.g. a line added in a log file, or something).
Now, how to check that when the method doesn't return anything is another question entirely. Ironically enough, that's an implementation detail of the unit test.
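As a sketch of the "Test a(), Do saveData(), Test a() again" idea (written in Java for brevity, with a made-up isSaved() observer; in Objective-C the same shape works with the STAssert macros):

import static org.junit.Assert.*;
import org.junit.Test;

public class FooTest {

    @Test
    public void saveDataIsObservableThroughLaterCalls() {
        Foo foo = new Foo();

        assertFalse(foo.isSaved());   // "Test a()": observable state before

        foo.saveData();               // the void method under test

        assertTrue(foo.isSaved());    // "Test a() again": the contract we care about
    }
}

The point is that the assertions go through public methods, not through the private variables that saveData happens to touch.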