How to plan for white-box testing

I'm relatively new to the world of white-box testing and need help designing a test plan for one of the projects I'm currently working on. At the moment I'm just scouting around looking for testable pieces of code and then writing some unit tests for that. I somehow feel that is by far not the way it should be done. Could you please give me advice on how best to prepare myself for testing this project? Are there any tools or test plan templates that I could use? The language being used is C++, if that makes a difference.

One of the goals of white-box testing is to cover 100% (or as close as possible) of the code statements. I suggest finding a C++ code coverage tool so that you can see what code your tests execute and what code you have missed. Then design tests so that as much code as possible is tested.
Another suggestion is to look at boundary conditions in if statements, for loops, while loops, etc., and test these for any 'gray' areas, false positives and false negatives.
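For example, if a function applies a discount once an order reaches a certain size, a boundary test should hit the values just below, exactly at, and just above that threshold. A minimal sketch using Google Test (the function and the threshold are invented purely for illustration):

```cpp
#include <gtest/gtest.h>

// Hypothetical function under test: orders of 100 units or more qualify for a discount.
bool qualifiesForDiscount(int quantity) {
    return quantity >= 100;
}

// Exercise the boundary itself, not just "some big value" and "some small value".
TEST(DiscountBoundary, AroundTheThreshold) {
    EXPECT_FALSE(qualifiesForDiscount(99));   // just below the boundary
    EXPECT_TRUE(qualifiesForDiscount(100));   // exactly at the boundary
    EXPECT_TRUE(qualifiesForDiscount(101));   // just above the boundary
}
// (Link against gtest_main, or provide your own main() that calls RUN_ALL_TESTS().)
```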
You could also design tests to look at the life cycle of important variables. Test their definition, their usage and their destruction to make sure they are being used correctly :)
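To illustrate the lifecycle idea (the class here is hypothetical), you can write a test that observes when an object is constructed and destroyed:

```cpp
#include <gtest/gtest.h>

// Hypothetical RAII class that counts live instances,
// so a test can observe its construction and destruction.
class Connection {
public:
    Connection()  { ++liveCount; }
    ~Connection() { --liveCount; }
    static int liveCount;
};
int Connection::liveCount = 0;

TEST(ConnectionLifecycle, DestroyedWhenLeavingScope) {
    EXPECT_EQ(Connection::liveCount, 0);
    {
        Connection c;
        EXPECT_EQ(Connection::liveCount, 1);  // defined and usable inside the scope
    }
    EXPECT_EQ(Connection::liveCount, 0);      // destroyed at the end of the scope
}
```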
There are three ideas to get you started. Good luck!

At the moment I'm just scouting around looking for testable pieces of code and then writing some unit tests for that. I somehow feel that is by far not the way it should be done.
People say that one of the main benefits of 'test-driven development' is that it encourages you to design your components with testability in mind: it makes your components more testable.
My personal (non-TDD) approach is as follows:
Understand the functionality required and implemented: both 'a priori' (i.e. by reading/knowing the software functional specification), and by reading the source code to reverse-engineer the functionality
Implement black box tests for all the implemented/required functionality (see for example 'Should one test internal implementation, or only test public behaviour?').
My testing therefore isn't quite 'white box', except that I reverse-engineer the functionality being tested. I then test that reverse-engineered functionality, and avoid having any useless (and therefore untested) code. I could (but don't often) use a code coverage tool to see how much of the source code is exercised by the black box tests.

Try "Working Effectively with Legacy Code": http://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
It's relevant since by 'legacy' he means code that has no tests. It's also a rather good book.
Relevant tools are: http://code.google.com/p/googletest/ and http://code.google.com/p/gmock/
There may be other unit test and mock frameworks, but I have familiarity with these and I recommend them highly.
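To give a flavour of what tests with these frameworks look like, here is a minimal sketch; the Logger interface and the reportError function are invented for illustration, not taken from the question:

```cpp
#include <gmock/gmock.h>
#include <gtest/gtest.h>
#include <string>

// Hypothetical collaborator that the code under test depends on.
class Logger {
public:
    virtual ~Logger() = default;
    virtual void write(const std::string& message) = 0;
};

// Google Mock generates a fake implementation we can set expectations on.
class MockLogger : public Logger {
public:
    MOCK_METHOD(void, write, (const std::string& message), (override));
};

// Hypothetical code under test.
void reportError(Logger& log, int code) {
    if (code != 0) log.write("error " + std::to_string(code));
}

TEST(ReportError, LogsOnlyNonZeroCodes) {
    MockLogger log;
    EXPECT_CALL(log, write(testing::HasSubstr("error 42"))).Times(1);
    reportError(log, 42);
    reportError(log, 0);  // must not trigger a second call to write()
}
```

Link against gmock_main (or gtest_main) to get a main() for free.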

Related

When to use BDD and when just unit tests?

I have a task to write tests for a future Django Channels + DRF project, don't ask why (we only have Swagger documentation for now). So the tests have to cover the user use cases (scenarios that may be complex). I have researched this and found BDD. Here is the question: considering that our project may later have simple unit tests too, what should I use? BDD seems decent, but I think it may be excessive here, and maybe there is a way of just writing unit tests for user use-case scenarios that I can get by with. Does anyone have experience with that? It would be great if you could provide articles and code examples.
Scenarios are a bit different to use-cases. A use-case often covers several capabilities. For instance, in the simple laundry use-case shown here, a housekeeper does several things when performing a wash:
washes each load
dries each load
folds certain items
irons some items
All of these go into the "weekly laundry" use-case.
A scenario in BDD is much more fine-grained. It describes one capability taking place in a particular context or set of contexts. So for instance you might have:
Given the weekly laundry has been washed and dried
And it contains several sheets
And some underpants
When the housekeeper does the folding
Then the sheets should be folded
But the underpants should not.
You can see that we've skipped a couple of the capabilities. This scenario is focused on the capability of folding, and shows how a well-behaved housekeeper would do it. Washing and drying would have to be covered in separate scenarios.
So that's the difference between a use-case and a scenario. Now let's look at a unit test.
When we write code, we don't write it all in one big class or function. We split it up into small pieces. In the same way that a scenario describes an example of the behaviour of the system from the perspective of the users, a unit test describes the behaviour of a class or other small piece of code from the perspective of its users - usually other classes!
So let's imagine that we're on a car purchasing site. We have several capabilities:
Authentication
Searching for cars
Purchasing a car
Listing a car
Removing a car from the list
Each of these will have lots of different classes making it up. Even searching for a car could involve a front-end, a search component, a database of cars, a persistence layer, a webserver, etc.. For each piece of code, we describe the behaviour of that code.
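For instance, an example at the unit level for a hypothetical search component might read like this (sketched in C++ with Google Test purely for concreteness; the question's stack is Python, and any xUnit framework reads much the same):

```cpp
#include <gtest/gtest.h>
#include <string>
#include <utility>
#include <vector>

// Hypothetical search component, invented to illustrate describing a class's
// behaviour from the perspective of the code that uses it.
struct Car { std::string model; int price; };

class CarSearch {
public:
    explicit CarSearch(std::vector<Car> listings) : listings_(std::move(listings)) {}

    std::vector<Car> noMoreExpensiveThan(int maxPrice) const {
        std::vector<Car> result;
        for (const auto& car : listings_)
            if (car.price <= maxPrice) result.push_back(car);
        return result;
    }

private:
    std::vector<Car> listings_;
};

// The test reads as an example of behaviour: "given these listings,
// searching with a price limit returns only the affordable cars".
TEST(CarSearch, ReturnsOnlyCarsWithinThePriceLimit) {
    CarSearch search({{"Corsa", 4000}, {"Tesla", 40000}});
    std::vector<Car> found = search.noMoreExpensiveThan(10000);
    ASSERT_EQ(found.size(), 1u);
    EXPECT_EQ(found[0].model, "Corsa");
}
```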
(BDD actually started out at this level, with examples of how classes behave - JBehave was intended to replace JUnit. But JUnit got better and we didn't need this bit any more. I still find it helpful to think of these as examples rather than tests.)
Typically I'll have both scenarios and unit tests in my codebase; one set of them looking from a user / stakeholder perspective at the whole system, and the other set describing my classes in finer detail.
The scenarios help me show how the system behaves and why it's valuable. The unit tests help me drive out good design and separate responsibilities. Both of them provide living documentation which helps to keep the system maintainable and make it easier for newcomers to come on board.
Generally this is how I program:
I have a rough idea of what I want to achieve
I talk to someone about it and write down some scenarios
If we don't quite know what we're looking for, I'll get something working (a spike)
Once we understand better what we're looking for, I automate the scenario first
I take the simplest case and start writing the UI
When the UI needs another class to work, I write some examples of how that code should work (unit tests) first
Then I write the code (or refactor it, because spikes are messy)
When that code needs another class to work, I write some examples of it
If I don't have code that's needed at any point in my unit tests, I use mocks.
Generally we keep the scenarios and the unit tests in different places.
You can see some examples of how I've done this here. It's a tetris game with scenarios which automate the whole game through the UI, and lower-level unit tests which describe the behaviour of particular pieces like the heartbeat which drops the shapes.
Having said that - if your codebase is very simple, you can probably get away with just the scenarios or just the unit tests; you might not need both. But if it starts getting more complex, consider refactoring and adding whatever you need. It's OK to be pragmatic about it, as long as it's easy to change.

Is there value in maintaining my TDD tests when requirements change?

I've been using TDD for a while now and it works well most of the time. From my understanding, I write the test (red) and work the code (green). This serves as a great tool to focus on coding just what is required.
The application I'm currently working on has fairly loose user requirements, to say the least! This can create the need to change the existing code base in trivial ways, all the way up to redesigning full sections.
When I do this, a lot of my tests fail ... understandably, since the design has changed. What should I do with these old tests? I'm finding that maintaining them can become an issue.
I suppose the core of my question is:
Is TDD used more to create a coding "map" to help focus you as a developer on writing code, with some other testing paradigm used alongside it to ensure that everything "works" when the code is handed off? Or do people use TDD as a one-stop shop that can both help create cleaner code AND serve as a full test suite, in which case I'll need to maintain my full test suite?
Tests written while doing TDD absolutely are valuable throughout the lifetime of an application.
The purpose of TDD is to allow you to build up your code test by test, so that it always meets the requirements you've implemented thus far (this is the "works" part), is fully tested and is well factored. When you're done you'll have a full regression test suite as a bonus, which is valuable. So for both proving what requirements you've implemented and for regression, it's valuable to keep your tests running.
If requirements change so fast that you can't maintain the test suite, you have a project management problem. Explain to your customers that you don't have enough time to ensure quality, or that they need to hire a test engineer.

Is it possible to include BDD into an almost finished project?

My software company has never done BDD or even TDD before. Testing previously meant simply trying out the new software a few days before deployment.
Our recent project is about 70% done. We also use it as a playground for new technologies, tools and ways of development. My boss wanted me to switch to it to "test testing".
I tried out Selenium 2 and RSpec. Both are promising, but how do I catch up on months of development? Further problems are:
new language
I never wrote a line of the code myself
huge parts were written by freelancers
lots of fancy hacking
no documentation at all besides some source comments and flow charts
All I was able to do was cover a whole process with Selenium. This turned out to be quite painful (but still possible), since the software was obviously never meant to be tested this way. We have lots of dynamically generated IDs, fancy jQuery and much more. I don't even know how to get started with RSpec.
So, is it still possible to apply BDD to this project? Or should I run far away and never come back?
Before you start, have you asked your boss what he wants to get out of the testing? I'd clarify that first. The main benefits of system-level BDD are in the conversations with business stakeholders before the code is written. You won't be able to get this if all you're doing is wrapping existing code. At a unit level, the main benefits are in questioning the responsibilities of classes and their behavior, which, again, you won't be able to get. It might be useful for helping you understand the value of the application and each level of code, though.
If your boss just wants to try out BDD or TDD it may be simpler to start a new project, or even grab an existing project from someone else to wrap some tests around. If he genuinely wants to experiment with BDD over legacy code, then it may be worth persisting with what you have - @Esko's book suggestion rocks. Putting higher-level system tests around the existing functionality can help you to avoid breaking things when you refactor the lower-level code (and you will need to do so, if you want to put tests in place). I additionally recommend Martin Fowler's "Refactoring".
RSpec is best suited to applying BDD at a unit level, as a variant of TDD. If you're looking to wrap automated tests around your whole environment, take a look at Cucumber. It makes reusing steps a lot simpler. You can then call Selenium directly from your steps.
I put together a page of links on BDD here, which I hope will help newcomers to understand it better. Best of luck.
Reading the book Working Effectively with Legacy Code might be helpful. There is also a short version in PDF form.

Does anyone have Parasoft .TEST or Jtest experience?

First, I have no experience with Parasoft .TEST or Jtest. I have read in the datasheet that the product can automatically generate unit tests,
but I am wondering how useful the auto-generated unit tests are. Does it really require no additional effort from the developer?
Any experience sharing is welcome.
Thanks a lot!
We used JTest for our product recently. We didn't use the standard product, we used the Eclipse Plugin. The standard product is built on the OSGI framework (read: it's like Eclipse), but you have to import and create your projects. We were already using Eclipse, so it made sense for us to simply use the plugin, which has all of the same capabilities.
While there are many things that Jtest can do for you, there are also many irritating things about it. Jtest's static analysis tool is what is really worthwhile, IMHO. It can look for lots of errors and has a pretty good reporting system. Unit test generation is okay, but I think I spent as much or more time fixing and enhancing the generated tests than I would have spent just writing them myself. Administering Jtest is also somewhat complicated and involved.
The built-in mechanisms to make unit tests, stub objects, parameterized unit tests, etc. are not well documented. At least, my little brain couldn't make good use of them in the two years we used the product. However, a lot of their super awesome features (like GUI tracing, command-line interface, the Bug Detective, reporting system etc.) all require extra, very expensive licenses.
Really, Jtest just gives you an easy way to manage the execution of static and unit testing. But it's really expensive. I can't believe they charge thousands of dollars per license of that stuff. You'll also find that they will want to train you, which you almost need because the documentation is pretty bad. Which is odd, because the user's guide is like 900 pages long.
But here's a big hint: you can do it for free. If I had to do it over, I would have pushed hard for using these products (which, oddly enough, look and feel very similar to Jtest):
http://code.google.com/javadevtools/codepro/doc/index.html
I wouldn't get Jtest thinking that this will be a small something to add to your developer's routine. Jtest can become a huge time and process sink.
Jtest is very, very useful. Yes, it generates its own test cases, which require a lot more effort to fix. I use it in a different way: I delete all the unnecessary generated test cases, and I made another file which creates the database connection and sets various other parameters. After that configuration, the code will work without mocking if all of the code is ready; if it is not ready, you can stub the required methods.
The static code analyzer is good (for catching null pointer exceptions).
Checking code conventions is very good.
Write your custom code guidelines as use cases and execute them against your code.
Code coverage.
Debug while testing.
The auto-generated unit tests still need a developer to decide which results are correct, so you have to sit down and do the job. A lot of the boilerplate code is of course auto-generated, so that's a small time saver. I haven't used it much, but I did evaluate Jtest for an earlier employer. Seemed like a great product, if I remember correctly. :)
Alas, there will never be a silver bullet that addresses all unit testing requirements, but Jtest & .Test (& C++Test, for that matter) are about as close as you will get. Uggwar is correct that the developer will still need to verify outcomes for the basic auto-generated tests, but there is a whole lot more to it.
These tools can be used to create basic regression tests; these are there to tell you when something has changed, not whether what is being tested is right or wrong. You can also trace a running application and then generate JUnit/NUnit/CPPUnit tests that recreate what was going on in the application. These tend to be far more useful tests, which are used as regression tests for items of functionality.
Other functionality includes the ability to generate stubs, use spreadsheets as data sources and provide an object repository. There is a whole lot more too ...
Give them a try.
http://www.parasoft.com

The value of test code coverage tools

We've started using Part Cover to track test code coverage of our application. IMO it's a great tool for getting an overall score for your tests and for highlighting areas where you might have been a bit lazy with tests, but today I wrote a test and realised that it didn't really test anything useful, it just increased my coverage!
If you are TDD, then you only write code to pass a test, and the tests are richly describing all the functionality required by the application. So in this scenario is it still very valuable to have coverage analysis?
For those of you who have coverage tools, how religiously do you adhere to keeping the coverage at 100%, and do you ever find yourself writing tests that don't really test anything, just to keep your coverage up? Isn't this a bad thing?
Coverage tools should only be used to tell you what has not been tested. The scenario you pointed out illustrates why you can't rely on them to show you what code has been tested. Writing tests just so the coverage is 100% is pointless (as you suspected), and it's so easy to game that this isn't really a useful metric. I used to try and stay at or near 100%, but I came to the same conclusion that you did. I was writing tests that didn't really test anything just so the numbers were right. Use the tools to spot areas that you haven't tested yet, then write good tests or accept the fact that those parts of the code aren't critical.
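As a small illustration of why (the function is invented, and the idea is the same whatever coverage tool and language you use): a single test can execute every line of a function, so line coverage reports it as fully covered, while an entire behaviour goes unchecked.

```cpp
#include <gtest/gtest.h>

// One call with b == 2 executes every line of this function, so line coverage
// shows 100% -- but the b == 0 behaviour has never been checked by any test.
int safeDivide(int a, int b) {
    return (b != 0) ? a / b : 0;
}

TEST(SafeDivide, LooksFullyCovered) {
    EXPECT_EQ(safeDivide(10, 2), 5);
}
```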
I'll play devil's advocate: if increasing your coverage meant writing a test that "didn't test anything useful," then why was that code there? To me, this would be an argument to remove some mainline code.
Or to develop a test that does do something useful. For example, you may consider that it's not useful to test setters and getters. Neither do I. However, those methods should be tested while testing something else. Otherwise, again, why are they there?
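For example, instead of a dedicated getter test, let a behavioural test cover the accessors in passing (the class and names here are invented):

```cpp
#include <gtest/gtest.h>

// Hypothetical order class; total() and itemCount() are plain accessors.
class Order {
public:
    void addItem(int price) { total_ += price; ++itemCount_; }
    int total() const { return total_; }
    int itemCount() const { return itemCount_; }

private:
    int total_ = 0;
    int itemCount_ = 0;
};

// There is no test written purely for the getters, but this behavioural
// test exercises them anyway while checking something that matters.
TEST(Order, TotalsItsItems) {
    Order order;
    order.addItem(300);
    order.addItem(200);
    EXPECT_EQ(order.total(), 500);
    EXPECT_EQ(order.itemCount(), 2);
}
```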
But you raise a good point that coverage tools should not be an end in themselves. Especially since they can't tell you what code you need to write.
I've gone into more detail here: http://www.kdgregory.com/index.php?page=junit.coverage
If you're doing pure TDD, there's less value to code coverage because, as you say, you only write code from tests, so you should be at around 100% anyway. But then, it's probably pretty rare (and at times not possible) to be doing it so purely.
If you aren't doing pure TDD, 100% is a pretty unrealistic target anyway. I usually try to follow Roy Osherove's method and only test things with logic (e.g. not straight getters/setters or pass-throughs). But then, higher is always better, and it can be tempting to put a couple more tests in there to increase that coverage..!
Good rationalisation ;) But we are human after all, and I for one sleep much better at night knowing that an untested method or path hasn't made it into production.