We've started using PartCover to track test code coverage of our application. IMO it's a great tool for getting an overall score for your tests and for highlighting areas where you might have been a bit lazy with tests, but today I wrote a test and realised that it didn't really test anything useful; it just increased my coverage!
If you are doing TDD, then you only write code to pass a test, and the tests richly describe all the functionality required by the application. So in this scenario, is it still very valuable to have coverage analysis?
For those of you that have coverage tools, how religiously do you adhere to keeping the coverage at 100%, and do you ever find yourself writing tests that don't really test anything, just to keep your coverage up? Isn't this a bad thing?
Coverage tools should only be used to tell you what has not been tested. The scenario you pointed out illustrates why you can't rely on them to show you what code has been tested. Writing tests just so the coverage is 100% is pointless (as you suspected), and it's so easy to game that this isn't really a useful metric. I used to try and stay at or near 100%, but I came to the same conclusion that you did. I was writing tests that didn't really test anything just so the numbers were right. Use the tools to spot areas that you haven't tested yet, then write good tests or accept the fact that those parts of the code aren't critical.
I'll play devil's advocate: if increasing your coverage meant writing a test that "didn't test anything useful," then why was that code there? To me, this would be an argument to remove some mainline code.
Or to develop a test that does do something useful. For example, you may consider that it's not useful to test setters and getters. Neither do I. However, those methods should be tested while testing something else. Otherwise, again, why are they there?
But you raise a good point that coverage tools should not be an end in themselves. Especially since they can't tell you what code you need to write.
I've gone into more detail here: http://www.kdgregory.com/index.php?page=junit.coverage
If you're doing pure TDD, there's less value to code coverage because, as you say, you only write code from tests, so you should be at around 100% anyway. But then, it's probably pretty rare (and at times not possible) to be doing it so purely.
If you aren't doing pure TDD, 100% is a pretty unrealistic target anyway. I usually try to go for Roy Osherove's method and only test things with logic (e.g. not straight getters/setters or pass-throughs). But then, higher is always better, and it can be tempting to put a couple more tests in there to increase that coverage!
Good rationalisation ;) But we are human after all, and I for one sleep much better at night knowing that an untested method or path hasn't made it into production.
I would like to know if there is a way that I can generate an HTML coverage report that also includes statements covered in the tests themselves.
Regarding the merits of doing such a thing: I would like to see that my tests are as useful as the rest of my code. I've become accustomed to including my test code coverage in Python, and it's something I find helpful.
Update for clarification:
People seem to think I'm talking about testing my tests. I'm not. I just want to see that the statements in my tests are definitely being hit in the HTML coverage report. For example, code coverage on a function in my application might show me that everything's been hit, but it won't necessarily show me that every boundary has been tested. Seeing statements lit up in my test sources shows me that I wrote my tests well enough. Yes, better-factored code shouldn't be so complex as to need that assurance, but sometimes things just aren't better.
I'm not sure I understand the reasoning behind this.
Unit tests, especially in Go, should be simple and straightforward enough that just by reading them you should be able to spot if a statement is useless.
If that is not the case, maybe you are implementing your unit tests in a way that is too complicated?
If that is the case, I can recommend checking out table-driven tests for most cases (not suited for most concurrency-heavy code or methods that depend a lot on manipulating state, though), as well as trying out TDD (test-driven development).
By using TDD, instead of building your tests in order to try to cover all of your code, you would be writing simple tests that simply validate the specs of your code.
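For illustration, here is a minimal table-driven test sketch in Go using only the standard testing package; the clamp function and its cases are invented for the example:

```go
package clamp

import "testing"

// clamp restricts v to the range [lo, hi]. It stands in for whatever small
// piece of logic you actually want to cover.
func clamp(v, lo, hi int) int {
	if v < lo {
		return lo
	}
	if v > hi {
		return hi
	}
	return v
}

// TestClamp is table-driven: each case is one row describing an input and the
// expected output, so adding a scenario means adding a row, not a new test function.
func TestClamp(t *testing.T) {
	cases := []struct {
		name            string
		v, lo, hi, want int
	}{
		{"below range", -5, 0, 10, 0},
		{"inside range", 5, 0, 10, 5},
		{"above range", 15, 0, 10, 10},
		{"on the lower boundary", 0, 0, 10, 0},
		{"on the upper boundary", 10, 0, 10, 10},
	}
	for _, c := range cases {
		t.Run(c.name, func(t *testing.T) {
			if got := clamp(c.v, c.lo, c.hi); got != c.want {
				t.Errorf("clamp(%d, %d, %d) = %d, want %d", c.v, c.lo, c.hi, got, c.want)
			}
		})
	}
}
```

Each row reads like a small specification of the function, which is the point: the table documents intent rather than chasing a coverage number.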
You don't write tests for your tests. Where does it end if you do? Those tests for tests aren't covered, so you'll need to write tests for your tests for your tests. But wait! Those tests for your tests for your tests don't have coverage, so you'd better write tests for your tests for your tests for your tests.
I'm relatively new to the world of white-box testing and need help designing a test plan for one of the projects that I'm currently working on. At the moment I'm just scouting around looking for testable pieces of code and then writing some unit tests for that. I somehow feel that is by far not the way it should be done. Please could you give me advice as to how best to prepare myself for testing this project? Any tools or test plan templates that I could use? The language being used is C++, if it makes a difference.
One of the goals of white-box testing is to cover 100% (or as close as possible) of the code statements. I suggest finding a C++ code coverage tool so that you can see what code your tests execute and what code you have missed. Then design tests so that as much code as possible is tested.
Another suggestion is to look at boundary conditions in if statements, for loops, while loops, etc., and test these for any 'gray' areas, false positives, and false negatives.
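To make the boundary idea concrete, here is a small sketch (in Go rather than C++, since the principle is language-agnostic; with Google Test you would express the same checks as EXPECT_EQ assertions). The discount function and its threshold are invented for the example:

```go
package pricing

import "testing"

// discount is a hypothetical function with a boundary at quantity 100:
// orders of 100 items or more get 10% off.
func discount(quantity int) float64 {
	if quantity >= 100 {
		return 0.10
	}
	return 0.0
}

// Exercise the values just below, on, and just above the boundary to catch
// off-by-one mistakes (e.g. writing > where >= was intended).
func TestDiscountBoundary(t *testing.T) {
	if got := discount(99); got != 0.0 {
		t.Errorf("discount(99) = %v, want 0.0 (just below the boundary)", got)
	}
	if got := discount(100); got != 0.10 {
		t.Errorf("discount(100) = %v, want 0.10 (exactly on the boundary)", got)
	}
	if got := discount(101); got != 0.10 {
		t.Errorf("discount(101) = %v, want 0.10 (just above the boundary)", got)
	}
}
```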
You could also design tests to look at the life cycle of important variables. Test their definition, their usage and their destruction to make sure they are being used correctly :)
There are three ideas to get you started. Good luck!
At the moment i'm just scouting around looking for testable pieces of code and then writing some unit tests for that. I somehow feel that is by far not the way it should be done.
People say that one of the main benefits of test-driven development is that it encourages you to design your components with testability in mind: it makes your components more testable.
My personal (non-TDD) approach is as follows:
Understand the functionality required and implemented: both 'a priori' (i.e. by reading/knowing the software functional specification), and by reading the source code to reverse-engineer the functionality
Implement black box tests for all the implemented/required functionality (see for example 'Should one test internal implementation, or only test public behaviour?').
My testing therefore isn't quite 'white box', except that I reverse-engineer the functionality being tested. I then test that reverse-engineered functionality, and avoid having any useless (and therefore untested) code. I could (but don't often) use a code coverage tool to see how much of the source code is exercised by the black box tests.
Try "Working Effectively with Legacy Code": http://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
It's relevant since by 'legacy' he means code that has no tests. It's also a rather good book.
Relevant tools are: http://code.google.com/p/googletest/ and http://code.google.com/p/gmock/
There may be other unit test and mock frameworks, but I have familiarity with these and I recommend them highly.
Let's say you're coding, and you come across an opportunity for simple code reuse (e.g. pulling a common piece of code out to an accessible place like a utility class or base class). You might find yourself thinking, "I know it's good to do this, but I have to get this done now, and if I need to make a change to this code and forget to change it in the other place, my testing framework will let me know."
In other words, you let the awesome tests you (or another developer) have written remind you to change the code in the other places too.
Is this a legitimate problem that we might find in ourselves or other developers?
You're asking whether unit tests encourage you to rely on them as a sort of TODO list? Yes, but I don't think that's sloppy coding. You are, after all, supposed to start with failing unit tests and code to the test; if you refactor some code and then once again code to the test, that isn't sloppy coding -- it's doing what you're supposed to do.
I think the problem with unit tests is simply that you can't cover every corner case in a unit test, and sometimes people assume that a working test means a working app, which isn't true.
In the example you provide, good tests are in fact enabling you to implement a sloppy design; however, in my experience, bad tests wouldn't have discouraged you from doing the same.
The fallacy in your argument centers around the premise that "getting this done now" means you will save time by implementing sloppy design. The truth of the matter is that you are incurring technical debt whether your tests are good or not. Making a change to that code is now a much more complex task, whether you have a good testing framework to remind you of that or not.
Although immature code may work fine and be completely acceptable to the customer, excess quantities will make a program unmasterable, leading to extreme specialization of programmers and finally an inflexible product.
- Ward Cunningham
The strength of good testing practices may be in allowing you to incur that debt with some level of safety. As long as you continue to be aware that this area of the code is now weak, as a result of your choices, then it may be worth the tradeoff -- you ship your product sooner, at the cost of higher debt, with a lower risk of incurring bugs in the short run as a result.
If the tests are good and the code (sloppy or otherwise) passes them, all is good. It would be nice to have good code, but sloppy working code is better than good broken code.
I don't use tests as my first option to finding the code that needs changes. I'll use my IDE's search (or refactoring) functionality and look for all the places that call the method in question.
The tests are just a nice addition in case I was accidentally sloppy or accidentally introduced a bug. Tests don't make me sloppy from the start; they just reassure me once I think I'm done.
I would say that good tests enable you to fix sloppy coding.
You can certainly write incredibly sloppy code with or without tests. Unit testing makes it slightly easier to get away with it, but only in the short run.
If you have a set of logic copied in two places in your code (IMO the worst thing a developer can do), then you probably have inconsistent tests as well.
The most important job any programmer can do is ruthlessly refactor the code, removing ALL duplication. This almost always shows benefits on even a single iteration.
Why would you think if you had an error in copied code in 2 places that your tests would be any better?
It sounds more to me like sloppy developers and sloppy coding practices are what are leading to sloppy code in your example. The tests you described would prevent the sloppy code from ever getting too far.
I read the latest Coding Horror post, and one of the comments touched a nerve for me:
This is the type of situation that test driven design/refactoring are supposed to fix. If (big if) you have tests for the interfaces, rewriting the implementation is risk-free, because you will know whether you caught everything.
Now in theory I like the idea of test-driven development, but all the times I've tried to make it work, it hasn't gone particularly well; I get out of the habit, and the next thing I know, all the tests that I had originally written not only don't pass, but they're no longer a reflection of the design of the system.
It's all well and good if you've been handed a perfect design from on high, straight from the start (which in my experience never actually happens), but what if halfway through the production of a system you notice that there's a critical flaw in the design? Then it's no longer a simple matter of diving in and fixing "the bug", but you also have to rewrite all the tests. A fundamental assumption was wrong, and now you have to change it. Now test driven development is no longer a handy thing, but it just means that there's twice as much work to do everything.
I've tried to ask this question before, both of peers, and online, but I've never heard a very satisfactory answer. ... Oh wait.. what was the question?
How do you combine test driven development with a design that has to change to reflect a growing understanding of the problem space? How do you make the TDD practice work for you instead of against you?
Update:
I still don't think I fully understand it all, so I can't really make a decision about which answer to accept. Most of my leaps in understanding have happened in the comments sections, not in the answers. Here's a collection of my favorites so far:
"Anyone who uses terms like "risk-free"
in software development is indeed full
of shit. But don't write off TDD just
because some of its proponents are
hyper-susceptible to hype. I find it
helps me clarify my thinking before
writing a chunk of code, helps me to
reproduce bugs and fix them, and makes
me more confident about refactoring
things when they start to look ugly"
-Kristopher Johnson
"In that case, you rewrite the tests
for just the portions of the interface
that have changed, and consider
yourself lucky to have good test
coverage elsewhere that will tell you
what other objects depend on it."
-rcoder
"In TDD, the reason to write the tests
is to do design. The reason to make
the tests automated is so that you can
reuse them as the design and code
evolve. When a test breaks, it means
you've somehow violated an earlier
design decision. Maybe that's a
decision you want to change, but it's
good to get that feedback as soon as
possible."
-Kristopher Johnson
[about testing interfaces] "A test would insert some elements, check that the size corresponds to the number of elements inserted, check that contains() returns true for them but not for things that weren't inserted, check that remove() works, etc. All of these tests would be identical for all implementations, and of course you would run the same code for each implementation and not copy it. So when the interface changes, you'd only have to adjust the test code once, not once for each implementation."
–Michael Borgwardt
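As a rough sketch of Borgwardt's point (in Go, with an invented Set interface and a single map-backed implementation), the contract test is written once against the interface and each implementation is just plugged into it:

```go
package collection

import "testing"

// Set is an invented interface standing in for "the interface" in the quote.
type Set interface {
	Add(s string)
	Remove(s string)
	Contains(s string) bool
	Size() int
}

// mapSet is one possible implementation, included so the sketch is complete.
type mapSet map[string]struct{}

func newMapSet() Set                    { return mapSet{} }
func (m mapSet) Add(s string)           { m[s] = struct{}{} }
func (m mapSet) Remove(s string)        { delete(m, s) }
func (m mapSet) Contains(s string) bool { _, ok := m[s]; return ok }
func (m mapSet) Size() int              { return len(m) }

// testSetContract holds the shared assertions; it talks only to the interface,
// so it is written once, no matter how many implementations exist.
func testSetContract(t *testing.T, newSet func() Set) {
	s := newSet()
	s.Add("a")
	s.Add("b")
	if s.Size() != 2 {
		t.Errorf("Size() = %d, want 2", s.Size())
	}
	if !s.Contains("a") || !s.Contains("b") {
		t.Error("Contains() should report true for inserted elements")
	}
	if s.Contains("c") {
		t.Error("Contains() should report false for elements never inserted")
	}
	s.Remove("a")
	if s.Contains("a") || s.Size() != 1 {
		t.Error("Remove() should delete the element and shrink the set")
	}
}

// Each implementation gets one line here; when the interface changes,
// only testSetContract has to be adjusted.
func TestMapSet(t *testing.T) { testSetContract(t, newMapSet) }
```

A second implementation would only need its own one-line TestXxx entry, which is exactly the "adjust the test code once" property the comment describes.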
One of the practices of TDD is the use of baby steps (which can be very boring in the beginning), which means using really small steps in order to understand your problem space and arrive at a good and satisfactory solution to your problem.
If you already know the design of your application, you aren't doing TDD at all. You should design it while writing your tests.
So the suggestion I would give is to concentrate on the baby steps in order to get a properly testable design.
I don't think any real practitioner of TDD will claim that it completely eliminates the possibility of error or regression.
Remember that TDD is fundamentally about design, not about testing or quality control. Saying "all my tests pass" does not mean "I'm finished."
If your requirements or high-level design change drastically, then you may need to throw away all your tests along with all the code. That's just how things are sometimes. It doesn't mean that TDD isn't helping you.
Properly applied, TDD should actually make your life a lot easier in the face of changing requirements.
In my experience, code that is easy to test is code that is orthogonal from other subsystems, and which has clearly defined interfaces. Given such a starting point, it is much easier to rewrite significant portions of your application, since you can work with confidence knowing that a) your changes will be isolated to a few subsystems, and b) any breakage will quickly show up as failing tests.
If, on the other hand, you're just slapping unit tests on your code after it has been designed, then you may well have problems when requirements change. There's a difference between tests that fail quickly when subsystems change (because they're effectively flagging regressions) and those that are brittle, because they depend on too many unrelated pieces of system state. The former should be fixable by a few lines of code, while the latter may leave you scratching your head for hours trying to unravel them.
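As a minimal sketch of that difference (in Go, with invented names): code that takes its collaborators through a narrow interface can be tested with a fake, so the test fails only when the subsystem's own behaviour regresses, not when unrelated system state shifts.

```go
package billing

import (
	"fmt"
	"testing"
)

// Clock is the one thing this code needs from its environment. Production code
// would pass an implementation backed by time.Now; the test passes a fixed fake,
// so the test never depends on unrelated system state (the real wall clock).
type Clock interface {
	Year() int
}

type fixedClock int

func (c fixedClock) Year() int { return int(c) }

// InvoiceHeader depends only on the narrow Clock interface, which is what keeps
// the test below from being brittle.
func InvoiceHeader(customer string, clock Clock) string {
	return fmt.Sprintf("%s / %d", customer, clock.Year())
}

func TestInvoiceHeader(t *testing.T) {
	got := InvoiceHeader("ACME", fixedClock(2009))
	if got != "ACME / 2009" {
		t.Errorf("InvoiceHeader() = %q, want %q", got, "ACME / 2009")
	}
}
```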
The only true answer is it depends.
There are ways to do TDD wrong, such that it doesn't fit in with your environment and eats effort with minimal benefit. There are ways to do TDD right, such that it both cuts costs and increases quality. And there are ways to do something similar-but-different to TDD, which may or may not get called TDD, and may or may not be more appropriate in your particular situation.
It's a strange quirk of the market for software tools and experts that, to maximise the revenue for those pushing them, they are always written as if they somehow apply to 'all software'.
Truth is, 'software' is every bit as diverse as 'hardware', and nobody would think of buying a book on bridge-making to design an electronic gadget or build a garden shed.
I think you have some misconceptions about TDD. For a good explanation and example of what it is and how to use it, I recommend reading Kent Beck's Test-Driven Development: By Example.
Here are a few further comments that may help you understand what TDD is and why some people swear by it:
"How do you combine test driven development with a design that has to change to reflect a growing understanding of the problem space?"
TDD is a technique for exploring a problem space and creating and evolving a design that meets your needs. TDD is not something you do in addition to doing design; it is doing design.
"How do you make the TDD practice work for you instead of against you?"
TDD is not "twice as much work" as not doing TDD. Yes, you'll write a lot of tests, but that doesn't really take much time, and the effort isn't wasted. You have to test your code somehow, right? Running automated tests are a lot quicker than manually testing whenever you change something.
A lot of TDD tutorials present highly detailed tests of every method of every class. In real life, people don't do this. It is silly to write a test for every setter, every getter, and so on. The Beck book does a good job of showing how to use TDD to quickly design and implement something, slowing down to "baby steps" only when things get tricky. See How Deep Are Your Unit Tests for more on this point.
TDD is not about regression testing. TDD is about thinking before you write code. But having regression tests is a nice side benefit. They don't guarantee that code will never break, but they help a lot.
When you make changes that cause tests to break, that's not a bad thing; it's valuable feedback. Designs do change, and your tests aren't written in stone. If your design has changed so much that some tests are no longer valid, then just throw them away. Write the new tests you need to be confident about the new design.
it's no longer a simple matter of diving in and fixing "the bug", but you also have to rewrite all the tests.
A fundamental tenet of TDD is to avoid duplication, both in the production code AND in the test code. If a single design change means you have to rewrite everything, you weren't doing TDD (or weren't doing it correctly at all).
Ideally, in a well-designed system with proper separation of concerns, design changes are local, just like implementation changes. While the real world is rarely ideal, you still usually get something in between: you have to change some of the production code and some of the tests, but not everything, and the changes are mostly simple and may even be done automatically by refactoring tools.
Coding something without knowing what will work best in the UI, while at the same time writing unit tests, is very time consuming. It's better to start out by making some prototypes of the GUI to get the interaction right, and then rewrite it with unit tests (if your employer allows you).
Continuous Integration (CI) is one key. If your tests run automatically every time you check in to source control (and everyone else sees it if they fail), it's easier to avoid "stale" tests and stay in the green.
As Mr. Dias mentioned, Baby Steps are important. You make a small refactoring, you run your tests. If tests break, you immediately determine if this is expected (design change) or a failed refactoring. When tests are truly independent (comes with practice), this is seldom very difficult. Evolve your design slowly.
See also http://thought-tracker.blogspot.com/2005/11/notes-on-pragmatic-unit-testing.html - and definitely buy the book!
EDIT: Perhaps I'm looking at this the wrong way. Say you had a legacy codebase that you wanted to redesign. The first thing I would try to do is add tests for the current behavior. Refactoring without tests is risky - you might change behavior. After that, I would start to clean up the design, in small steps, running my unit tests after each step. That would give me confidence that my changes weren't breaking anything.
At some point the API might change. This would be a breaking change - clients would have to be updated. The tests would tell me this - which is good, because I'd have to update any existing clients (including the tests).
Now that's not TDD. But the idea is the same - the tests are specifications of behavior (yes, I'm shading into BDD), and they give me the confidence to refactor the implementation while ensuring that I preserve the behavior (as well as letting me know when I change the interface).
In practice, I've found TDD gives me immediate feedback on poor interface design. I'm my first client - I know when my API is hard to use.
We tend to do much less design up front with TDD, knowing it can change. I have taken projects through huge gyrations (it's a web app; no, it's a RESTful server; no, it's a bot). The tests provide me with the ability to refactor, restructure, and evolve my code much more easily than untested code. Although it seems contradictory, it is true -- even though you have more code, you are able to make major changes and have confidence that nothing has broken in the existing functionality.
I understand your concern that fundamental assumptions changing make you throw out tests. This seems intuitive, but I personally haven't seen it. Some tests go, but most are still valid-- often a major change isn't as major as it seems at first. Plus, as you get better at writing tests, you tend to write less brittle ones, which helps.
As a novice developer who is getting into the rhythm of my first professional project, I'm trying to develop good habits as soon as possible. However, I've found that I often forget to test, put it off, or do a whole bunch of tests at the end of a build instead of one at a time.
My question is: what rhythm do you like to get into when working on large projects, and where does testing fit into it?
Well, if you want to follow the TDD guys, before you start to code ;)
I am very much in the same position as you. I want to get more into testing, but I am currently in a position where we are working to "get the code out" rather than "get the code out right" which scares the crap out of me. So I am slowly trying to integrate testing processes in my development cycle.
Currently, I test as I code, trying to bust the code as I write it. I do find it hard to get into the TDD mindset. It's taking time, but that is the way I would want to work.
EDIT:
I thought I should probably expand on this; this is my basic "working process"...
1. Plan what I want from the code, possible object design, whatever.
2. Create my first class, add a huge comment to the top outlining what my "vision" for the class is.
3. Outline the basic test scenarios. These will basically become the unit tests.
4. Create my first method, also writing a short comment explaining how it is expected to work.
5. Write an automated test to see if it does what I expect.
6. Repeat steps 4-6 for each method (note: the automated tests are in a huge list that runs on F5).
7. I then create some beefy tests to emulate the class in the working environment, obviously fixing any issues.
8. If any new bugs come to light following this, I then go back and write the new test in, make sure it fails (this also serves as proof of concept for the bug), then fix it.
I hope that helps. Open to comments on how to improve this; as I said, it is a concern of mine.
Before you check the code in.
First and often.
If I'm creating some new functionality for the system, I'll be looking to initially define the interfaces and then write unit tests for those interfaces. To work out what tests to write, consider the API of the interface and the functionality it provides; get out a pen and paper and think for a while about potential error conditions or ways to prove that it is doing the correct job. If this is too difficult, then it's likely that your API isn't good enough.
In regards to the tests, see if you can avoid writing "integration" tests that test more than one specific object, and keep them as "unit" tests.
Then create a default implementation of your interface (that does nothing, returns rubbish values but doesn't throw exceptions), plug it into the tests to make sure that the tests fail (this tests that your tests work! :) ). Then write in the functionality and re-run the tests.
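A rough Go sketch of that sequence, with invented names: the interface is designed first, the test is written against it, and a do-nothing stub is plugged in so the test can be seen to fail before the real implementation exists.

```go
package rates

import "testing"

// Converter is the interface designed up front; the test is written against it.
type Converter interface {
	ToCents(dollars float64) int
}

// stubConverter is the deliberately useless first implementation: it returns a
// rubbish value without panicking, purely so the test can be run and seen to fail.
type stubConverter struct{}

func (stubConverter) ToCents(dollars float64) int { return -1 }

// With the stub plugged in, this test fails, which proves the test itself works.
// Once the real implementation replaces the stub, it should go green.
func TestToCents(t *testing.T) {
	var c Converter = stubConverter{}
	if got := c.ToCents(1.50); got != 150 {
		t.Errorf("ToCents(1.50) = %d, want 150", got)
	}
}
```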
This mechanism isn't perfect but will cover a lot of simple coding mistakes and provide you with an opportunity to run your new feature without having to plug it into the entire application.
Following this you then need to test it in the main application with the combination of existing features.
This is where testing is more difficult and, if possible, should be partially outsourced to a good QA tester, as they'll have the knack of breaking things. Although it helps if you have these skills too.
Getting testing right is a knack that you have to pick up to be honest. My own experience comes from my own naive deployments and the subsequent bugs that were reported by the users when they used it in anger.
At first when this happened to me I found it irritating that the user was intentionally trying to break my software, and I wanted to mark all the "bugs" down as "training issues". However, after reflecting on it I realised that it is our role (as developers) to make the application as simple and reliable to use as possible, even by idiots. It is our role to empower idiots, and that's why we get paid the dollar. Idiot handling.
To effectively test like this you have to get into the mindset of trying to break everything. Assume the mantle of a user that bashes the buttons and generally attempts to destroy your application in weird and wonderful ways.
Assume that if you don't find flaws, they will be discovered in production, to your company's serious loss of face. Take full responsibility for all of these issues and curse yourself when a bug you are responsible (or even partly responsible) for is discovered in production.
If you do most of the above then you should start to produce much more robust code, however it is a bit of an art form and requires a lot of experience to be good at.
A good key to remember is
"Test early, test often and test again, when you think you are done"
When to test? When it's important that the code works correctly!
When hacking something together for myself, I test at the end. Bad practice, but these are usually small things that I'll use a few times and that's it.
On a larger project, I write tests before I write a class and I run the tests after every change to that class.
I test constantly. After I finish even a loop inside of a function, I run the program and hit a breakpoint at the top of the loop, then run through it. This is all just to make sure that the process is doing exactly what I want it to.
Then, once a function is finished, you test it in its entirety. You probably want to set a breakpoint just before the function is called, and check your debugger to make sure that it works perfectly.
I guess I would say: "Test often."
I've only recently added unit testing to my regular work flow but I write unit tests:
to express the requirements for each new code module (right after I write the interface but before writing the implementation)
every time I think "it had better ... by the time I'm done"
when something breaks, to quantify the bug and prove that I've fixed it
when I write code which explicitly allocates or deallocates memory---I loathe hunting for memory leaks...
I run the tests on most builds, and always before running the code.
Start with unit testing. Specifically, check out TDD, Test Driven Development. The concept behind TDD is you write the unit tests first, then write your code. If the test fails, you go back and re-work your code. If it passes, you move on to the next one.
I take a hybrid approach to TDD. I don't like to write tests against nothing, so I usually write some of the code first, then put the unit tests in. It's an iterative process, one which you're never really done with. You change the code, you run your tests. If there's any failures, fix and repeat.
The other sort of testing is integration testing, which comes along later in the process and might typically be done by a QA testing team. In any case, integration testing addresses the need to test the pieces as a whole. It's the working product you're concerned with testing. This one is more difficult to deal with because it usually involves automated testing tools (like Robot, for example).
Also, take a look at a product like CruiseControl.NET to do continuous builds. CC.NET is nice because it will run your unit tests with each build, notifying you immediately of any failures.
We don't do TDD here (though some have advocated it), but our rule is that you're supposed to check your unit tests in with your changes. It doesn't always happen, but it's easy to go back and look at a specific changeset and see whether or not tests were written.
I find that if I wait until the end of writing some new feature to test, I forget many of the edge cases that I thought might break the feature. This is ok if you are doing things to learn for yourself, but in a professional environment, I find my flow to be the classic form of: Red, Green, Refactor.
Red: Write your test so that it fails. That way you know the test is asserting against the correct variable.
Green: Make your new test pass in the easiest way possible. If that means hard-coding it, that's ok. This is great for those that just want something to work right away.
Refactor: Now that your test passes, you can go back and change your code with confidence. Your new change broke your test? Great, your change had an implication you didn't realize, now your test is telling you.
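A tiny Go illustration of that rhythm, with an invented IsLeapYear function (the comments mark which part of red/green/refactor each piece plays):

```go
package calendar

import "testing"

// Red: write this first and watch it fail, so you know the assertions actually
// bite on the right thing.
func TestIsLeapYear(t *testing.T) {
	if !IsLeapYear(2004) {
		t.Error("2004 should be a leap year")
	}
	if IsLeapYear(2001) {
		t.Error("2001 should not be a leap year")
	}
	if IsLeapYear(1900) {
		t.Error("1900 should not be a leap year (century rule)")
	}
	if !IsLeapYear(2000) {
		t.Error("2000 should be a leap year (400-year rule)")
	}
}

// Green: the simplest code that passes. (The very first green version might have
// been a hard-coded "return year == 2004" until the later cases forced more.)
// Refactor: with the test in place, this body can now be reshaped with confidence.
func IsLeapYear(year int) bool {
	return year%4 == 0 && (year%100 != 0 || year%400 == 0)
}
```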
This rhythm has sped up my development over time because I basically have a compiled history of all the things I thought needed to be checked in order for a feature to work! This, in turn, leads to many other benefits that I won't get into here...
Lots of great answers here!
I try to test at the lowest level that makes sense:
If a single computation or conditional is difficult or complex, add test code while you're writing it and ensure each piece works. Comment out the test code when you're done, but leave it there to document how you tested the algorithm.
Test each function.
Exercise each branch at least once.
Exercise the boundary conditions -- input values at which the code changes its behavior -- to catch "off by one" errors.
Test various combinations of valid and invalid inputs (see the sketch after this list).
Look for situations that might break the code, and test them.
Test each module with the same strategy as above.
Test the body of code as a whole, to ensure the components interact properly. If you've been diligent about lower-level testing, this is essentially a "confidence test" to ensure nothing broke during assembly.
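For the "valid and invalid inputs" point in the list above, here is a sketch in Go; parsePort and its rules are invented for the example:

```go
package config

import (
	"fmt"
	"strconv"
	"testing"
)

// parsePort is a hypothetical input validator: it accepts "1" through "65535"
// and rejects everything else with an error.
func parsePort(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil || n < 1 || n > 65535 {
		return 0, fmt.Errorf("invalid port %q", s)
	}
	return n, nil
}

// Mixing valid and invalid inputs in one table gives the error path the same
// attention as the happy path, and makes the boundaries explicit.
func TestParsePort(t *testing.T) {
	cases := []struct {
		in      string
		want    int
		wantErr bool
	}{
		{"80", 80, false},
		{"1", 1, false},           // lower boundary
		{"65535", 65535, false},   // upper boundary
		{"0", 0, true},            // just below the valid range
		{"65536", 0, true},        // just above the valid range
		{"not-a-number", 0, true}, // malformed input
		{"", 0, true},             // empty input
	}
	for _, c := range cases {
		got, err := parsePort(c.in)
		if (err != nil) != c.wantErr {
			t.Errorf("parsePort(%q) error = %v, wantErr %v", c.in, err, c.wantErr)
			continue
		}
		if err == nil && got != c.want {
			t.Errorf("parsePort(%q) = %d, want %d", c.in, got, c.want)
		}
	}
}
```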
Since most of my code is for embedded devices, I pay particular attention to robustness, interaction between various threads, tasks, and components, and unexpected use of resources: memory, CPU, filesystem space, etc.
In general, the earlier you encounter an error, the easier it is to isolate, identify, and fix it--and the more time you get to spend creating, rather than chasing your tail.*
* I know, -1 for the gratuitous buffer-pointer reference!