Documenting functional tests

I'm about to start writing e2e tests for a web app I've been working on for the last few months, and I am currently investigating how best to document these tests. In my company the way it's been done before (on older, non-web programs) is to have a big Word document that describes the actions of each test and the expected result. Tests are then run with third-party software, and if any test fails, we can use the documentation to troubleshoot.
This way works fine, but I'm wondering if there is a more efficient, "web-based" way of documenting the e2e tests. We have no prior experience with web-based apps, and my research led me to ObservableHQ's JavaScript-based notebooks. I thought maybe it is possible to integrate the actual tests into one, along with the test specifications, and then run the code blocks from there. But I'm not sure this approach is worth the extra effort compared to the way we currently do things.
I guess what I'm asking is: how are other developers documenting e2e tests for web-based apps, and what lessons have they learned along the way?

If you can, use an automation framework that has you build the tests from a specification. This is typically a markdown file which describes the business case being tested. Each of the steps is executed by the framework, which means you can re-use steps as you build out the specifications. An example of this is Gauge. You can read their documentation on building specifications to get a better idea of what I mean.
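As a rough sketch of what this looks like in practice (the spec text and class names below are illustrative, not taken from Gauge's docs):

```java
// A Gauge specification is a plain markdown file, e.g. specs/checkout.spec:
//
//   # Customer checkout
//   ## Returning customer buys a product
//   * Sign in as "alice"
//   * Add "blue widget" to the cart
//
// Each step line is wired to a Java implementation via the @Step annotation,
// so the same step can be reused across many specifications.
import com.thoughtworks.gauge.Step;

public class CheckoutSteps {

    @Step("Sign in as <username>")
    public void signInAs(String username) {
        // drive the browser or API here; reused by every spec that signs in
    }

    @Step("Add <product> to the cart")
    public void addToCart(String product) {
        // shared implementation; the spec itself stays readable for the business
    }
}
```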
There are a few advantages to following this approach:
The specifications are stored alongside the code. This means the test cases follow the code as it evolves. In the 'old days', when these were stored in documents, it was a challenge to keep them in sync with versions of the code.
The tests are self-documenting: the specification both drives the test and documents it.
The test reports are produced in HTML and are therefore easier to understand.

Good documentation is key, and when talking about end-to-end testing it can be a little more challenging. Use cases and their data organization are the first thing to address. You want your test-case inputs and output verification organized in a cohesive way, including the specification and a use-case description.
Some projects with examples of e2e test case documentation:
Cloud storage mirror
Cross vendor database synchronizer
Finally, you might be interested in test data organization.

Related

Can you write unit tests if you don't actually write the code?

For instance, I assume this is what SDETs do?
They don't actually write the functional code, but they're able to write integration/unit tests, am I correct?
But can someone learn to read code and then start writing tests?
This is actually a good question. I was in the same place when I was working only on manual testing. Here is how I experienced things when I transitioned to automation. To answer your question: yes, someone can read code and start writing tests for it, but you need to understand the code that you are going to test.
There are different types of testing methodologies that are used when testing an application. These tests are done in layers so that the application is properly tested. Here is what the layering looks like:
1) Unit testing: This part is usually written by developers, because they have written the code, know its functionality, and it is easier for them to write. I am an SDET and I have written unit tests. There was only one opportunity that presented itself, and it was when we were refactoring our code and there was a lot of room to write unit tests. In unit tests, you test functions in isolation by giving them some values and verifying an expected value. This is not something an SDET usually does, but should be able to do if given the chance.
2) Integration testing: This part is also usually written by developers, but the definition of integration testing is a bit vague. It means testing multiple modules together, in isolation from the rest of the system. These can be modules in the backend or modules in the frontend, but not both together. Frameworks that help achieve this are code-level integration tests for the technology you are using. For an Angular application, for example, there are deep integration tests that test the HTML and CSS of a component, and there are shallow integration tests that just test two components' logic together. This can be written by an SDET but is usually written by the developer.
3) API testing (contract-based testing): Pact helps us achieve this. There are other tools like REST Assured, Postman and JMeter that help in testing API endpoints. Pact helps test the integration of APIs on the frontend and verifies that integration in the backend. This is very popular with microservices. This is something that can be written either by the developer or by the SDET.
4) End-to-end testing: This is the sole responsibility of the SDET. It covers testing of user flows depending on user stories; it tests the entire stack together, backend and frontend. This allows SDETs to automate the way a user would use the application. It is also called black-box testing. There are different frameworks that help achieve this: Selenium, Protractor, Cypress, Detox etc. (see the Selenium sketch after this list).
5) Load testing: This is again something that an SDET does, using tools like hey, JMeter, LoadRunner etc. These tests allow the SDET to put a heavy load on the system and check for its breaking points.
6) Performance testing: Testing the performance of the webpage for an end user, based on page load time, SEO and the weight of the page's elements. This can be achieved with Google's Lighthouse, which is an amazing tool to use. I am not aware of anything else that is as good as Lighthouse, because it gives us a lot of data that we can use to improve our website. This is a primary job of an SDET.
7) CI/CD: Continuous Integration and Continuous Deployment require architectural knowledge of the system. This is something you can do when you are an SDET 3 or a lead QA engineer. For systems like AWS and GCP, using CI build tools like Jenkins and CircleCI, one can set up a pipeline that runs all the above tests whenever a branch is merged into master or whenever a pull request is created. Creating the pipeline will require knowledge of Docker, Kubernetes, Jenkins and your test frameworks. First you dockerize your tests, then you build the image and push it to a registry in the cloud, then you use the image to create a Kubernetes job that runs every time your code changes.
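For level 4, a minimal Selenium/JUnit sketch of an automated user flow might look like this (the URL, element IDs and assertion text are hypothetical):

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

class LoginFlowTest {

    @Test
    void userCanLogInAndSeeTheDashboard() {
        WebDriver driver = new ChromeDriver();
        try {
            // Exercise the whole stack the way a user would.
            driver.get("https://app.example.com/login");            // hypothetical URL
            driver.findElement(By.id("email")).sendKeys("user@example.com");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();

            // Backend and frontend must both work for this to pass.
            assertTrue(driver.getPageSource().contains("Dashboard"));
        } finally {
            driver.quit();
        }
    }
}
```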
These are the levels of work that an SDET does. It takes time and hard work to have an understanding of all testing frameworks and how everything fits together. An SDET should have knowledge of the server, http protocols, frontend, backend, browsers, caching, pipeline management and orchestration of tests.
Yes, sure. You can write unit tests, increasing the test coverage of the codebase. That's highly skilled work within software test engineering, since you need to be aware of what is going on in the code. This skill is definitely great!
I advise you to take a look at so-called "mutation coverage", which is a better metric than simple unit test coverage. Mutation testing changes logical operators in different parts of the codebase (creating so-called "mutants") and then runs the unit tests to find out how many of them fail, which shows their effectiveness (if the results with the injected mutants are the same as without them, the quality of the unit tests is low and they won't catch newly injected issues in the codebase).
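A tiny illustration of the idea, independent of any particular tool (PIT is a common choice for Java):

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Production code: customers with 10 or more purchases get a discount.
class Discount {
    static boolean eligible(int purchases) {
        return purchases >= 10;   // a mutation tool might change '>=' to '>'
    }
}

class DiscountTest {
    // This test kills the '>' mutant because it checks the boundary value:
    // the original returns true for 10, the mutant returns false.
    @Test
    void boundaryCustomerIsEligible() {
        assertTrue(Discount.eligible(10));
    }

    // A weaker test such as eligible(50) would pass against both the original
    // and the mutant, so the mutant would survive and reveal the gap.
}
```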

What test methods do you use for developing websites?

There are a lot of testing methods out there, e.g. black box, grey box, unit, functional, regression etc.
Obviously, a project cannot take on all testing methods, so I'm asking this question to get an idea of which test methods to use and why I should use them. You can answer in the following format:
Test Method - what you use it on
e.g.
Unit Testing - I use it for ...(blah, blah)
Regression Testing - I use it for ...(blah, blah)
I was asked to engage in TDD and of course I had to research testing methods. But there is a whole plethora of them and I don't know what to use (because they all sound useful).
1. Unit Testing is used by developers to ensure the unit of code they wrote is correct. This is usually white box testing, as well as some level of black box testing (a minimal example follows this list).
2. Regression Testing is functional testing used by testers to ensure that new changes in the system have not broken any existing functionality.
3. Functional testing is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. Functional testing falls within the scope of black box testing, and as such should require no knowledge of the inner design of the code or logic.
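For the first category, a unit test might look like this minimal JUnit sketch (the PriceCalculator class is hypothetical):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical unit under test.
class PriceCalculator {
    private final double vatRate;

    PriceCalculator(double vatRate) { this.vatRate = vatRate; }

    double applyVat(double net) {
        // Round to whole cents so callers get a displayable price.
        return Math.round(net * (1 + vatRate) * 100) / 100.0;
    }
}

class PriceCalculatorTest {
    @Test
    void appliesVatAndRoundsToCents() {
        assertEquals(12.00, new PriceCalculator(0.20).applyVat(10.00), 0.001);
    }
}
```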
These Test-driven development and Feature-driven development wiki articles will be of great help to you.
For TDD you need to follow this process:
1. Document a feature (or use case) that you need to implement or enhance in your application and that does not currently exist.
2. Write a set of functional test cases that ensure the feature (from step 1) works. You may need to write multiple test cases for the feature to cover all the different possible workflows.
3. Write code to implement the feature (from step 1).
4. Test this code using the test cases you wrote earlier (in step 2). The actual testing can be manual, but I would recommend creating automated tests if possible.
5. If all test cases pass, you are good to go. If not, update the code (go back to step 3) to make the test cases pass.
TDD ensures that the functional test cases written before you coded will pass, regardless of how the code was implemented.
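A minimal red-green example of steps 2-5 in JUnit (the Slug class and its rule are made up for illustration):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Step 2: the test is written first and initially fails (red),
// because Slug.from does not exist yet.
class SlugTest {
    @Test
    void lowercasesAndReplacesSpacesWithDashes() {
        assertEquals("hello-world", Slug.from("Hello World"));
    }
}

// Step 3: write just enough code to make the test pass (green).
class Slug {
    static String from(String title) {
        return title.trim().toLowerCase().replaceAll("\\s+", "-");
    }
}
```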
There is no "right" or "wrong" in testing. Testing is an art, and what you should choose and how well it works out for you depends a lot on the project and on your experience.
But as a professional Test Expert my suggestion is that you have a healthy mix of automated and manual testing.
(Examples below are in PHP but you can easily find the equivalents for whatever language/framework you are using)
AUTOMATED TESTING
Unit Testing
Use PHPUnit to test your classes, functions and interaction between them.
http://phpunit.sourceforge.net/
Automated Functional Testing
If possible you should automate a lot of the functional testing. Some frameworks have functional testing built into them. Otherwise you have to use a tool for it. If you are developing web sites/applications you might want to look at Selenium.
http://www.webinade.com/web-development/functional-testing-in-php-using-selenium-ide
Continuous Integration
Use CI to make sure all your automated tests run every time someone in your team makes a commit to the project.
http://martinfowler.com/articles/continuousIntegration.html
MANUAL TESTING
As much as I love automated testing, it is, IMHO, not a substitute for manual testing. The main reason is that an automated test can only do what it is told and only verify what it has been told to treat as pass/fail. A human can use their intelligence to find faults and raise questions that appear while testing something else.
Exploratory Testing
ET is a very low-cost and effective way to find defects in a project. It takes advantage of the intelligence of a human being and teaches the testers/developers more about the project than any other testing technique I know of. Doing an ET session aimed at every feature deployed in the test environment is not only an effective way to find problems fast, but also a good way to learn, and fun!
http://www.satisfice.com/articles/et-article.pdf
This answer is (almost) identical to one that I gave to another question. Check out that question since it had some other good answers that might help you.
How can we decide which testing method can be used?
I usually do the following things:
Page consistency in the case of multi-page websites.
Testing the database connections.
Testing the functionalities that can be affected by the change I just made.
I test functions with sample input to make sure they work fine (especially those that are algorithm-like).
In some cases I implement features very simply, hard-coding most of the settings, then implement the settings later, testing after implementing each setting.
Most of these apply to applications, too.
Well, before getting to the answer, I would like to clarify the testing concepts behind the various methods.
There are six main testing types which cover almost all testing methods.
Black Box Testing
White Box Testing
Grey Box Testing
Functional Testing
Integration Testing
Usability Testing
Almost all testing methods fall under these types, and you can also use some testing methods across multiple types; for example, you can use smoke testing in a black box or white box approach depending on the resources available for testing.
So to test a website completely you need to use at least the following testing methods, based on the resources available. These are the minimum methods that should be used to test a website, but there may be other important ones depending on the nature of the site.
Requirement Testing
Smoke Testing
System Testing
Integration Testing
Regression Testing
Security Testing
Performance & Load Testing
Deployment Testing
You should use at least all of the above (8) testing methods to test a website, no matter which testing type you are focusing on. You can automate your tests in some areas and do others manually; it all depends on resource availability.
There is no hard and fast rule for following any particular testing type or method. As you know, "testing is an art", and art doesn't have rules or boundaries. It's totally up to you what you use to test, and how.
Hope this answers your question.
Selenium is very good for testing websites.
The answer depends on the Web framework used (if any). Django for example has built-in testing functions.
For PHP (or functional web testing), SimpleTest is pretty good and, well... simple. It supports unit testing (PHP only) and web testing. Tests can run in the IDE (Eclipse) or in the browser (meaning on your server).
The other answers posted so far focus on unit/functional/performance/etc. testing, and they are all reasonable.
However, one of the key questions you should ask is, "how effective is my testing?".
This is often answered with test coverage tools, which determine which parts of your application actually get exercised by some set of tests. The ideal test coverage tool lets you test your application by any method you can imagine (including all the standard answers above) and will then report what part and what percentage of your code was exercised. Most importantly, it will tell you what code you did not exercise. You can then inspect that code and decide if more testing is warranted, or if you don't care. If the untested code has to do with "disk full error handling" and you believe that 1 TB disks are common, you might decide to ignore that. If the untested code is the input validation logic leading to SQL queries, you might decide that you must test that logic to ensure that no SQL injection attacks can occur.
What test coverage tools let you do is make a rational decision that you have tested adequately, using data about which parts of your code have been exercised. So regardless of how you test, best practice indicates you should also do test coverage analysis.
Test coverage tools can be obtained from a variety of sources. SD provides a family of test coverage tools that handle C, C++, Java, C#, PHP and COBOL, all of which are used to support web site testing in various ways.

Automatic testing for web based projects

Recently I've been wondering whether it is worth spending development time at all to create automated unit tests for web-based projects. I mean, it seems useless at some point, because such projects are oriented around interactions with users/clients, so you cannot anticipate the whole possible set of user actions in order to check the correctness of the content shown. Even regression testing can hardly be done. So I'm very eager to know the opinions of other experienced developers.
Selenium has a good web testing framework
http://seleniumhq.org/
Telerik is also in the process of developing one for web app testing.
http://www.telerik.com/products/web-ui-test-studio.aspx
"You cannot anticipate the whole possible set of user actions in order to check the correctness of the content shown."
You can't anticipate all the possible data your code is going to be handed, or all the possible race conditions if it's threaded, and yet you still bother unit testing. Why? Because you can narrow it down a hell of a lot. You can anticipate the sorts of pathological things that will happen. You just have to think about it a bit and get some experience.
User interaction is no different. There are certain things users are going to try and do, pathological or not, and you can anticipate them. Users are just inputting particularly imaginative data. You'll find programmers tend to miss the same sorts of conditions over and over again. I keep a checklist. For example: pump Unicode into everything; put the start date after the end date; enter gibberish data; put tags in everything; leave off the trailing newline; try to enter the same data twice; submit a form, go back and submit it again; take a text file, call it foo.jpg and try to upload it as a picture. You can even write a program to flip switches and push buttons at random, a bad monkey, that'll find all sorts of fun bugs.
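A "bad monkey" can be as simple as this Selenium sketch (the starting URL is a placeholder; the selector just grabs anything clickable):

```java
import java.util.List;
import java.util.Random;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

// Clicks random interactive elements and logs anything surprising.
public class BadMonkey {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        Random random = new Random();
        driver.get("http://localhost:8080/");   // placeholder for the app under test
        for (int i = 0; i < 200; i++) {
            List<WebElement> targets =
                driver.findElements(By.cssSelector("a, button, input, select"));
            if (targets.isEmpty()) break;
            WebElement victim = targets.get(random.nextInt(targets.size()));
            try {
                victim.click();
            } catch (Exception e) {
                // Stale or hidden elements are routine; log and keep poking.
                System.out.println("Click failed: " + e.getMessage());
            }
        }
        driver.quit();
    }
}
```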
It's often as simple as sitting someone down who's unfamiliar with the software and watching them use it. Fight the urge to correct them; just watch them flounder. It's very educational. Steve Krug refers to this as "Advanced Common Sense" and has an excellent book called "Don't Make Me Think" which covers cheap, simple user interaction testing. I highly recommend it. It's a very short and eye-opening read.
Finally, the clients themselves, if their expectations are properly prepared, can be a fantastic test suite. Be sure they understand it's a work in progress, that it will have bugs, that they're helping to make their product better, and that it absolutely should not be used for production data, and let them tinker with the pre-release versions of your product. They'll do all sorts of things you never thought of! They'll be the best and most realistic testing you ever had, FOR FREE! Give them a very simple way to report bugs, preferably just a one-button box right in the application which automatically submits their environment and history; the feedback box on Hiveminder is an excellent example. Respond to their bugs quickly and politely (even if it's just "thanks for the info") and you'll find they'll be delighted you're so responsive to their needs!
Yes, it is. I just ran into an issue this week with a website I am working on. I recently switched out the data access layer and set up unit tests for my controllers and repositories, but not the UI interactions.
I got bit by a pretty obvious bug that would have been easily caught if I had integration tests. Only through integration tests and UI functionality tests do you find issues with the way different tiers of the application interact with one another.
It really depends on the structure and architecture of your web application. If it contains an application logic layer, then that layer should be easy to unit test with automating tools such as Visual Studio. Also, using a framework that has been designed to enable unit testing, such as ASP.NET MVC, helps a lot.
If you're writing a lot of JavaScript, a number of JS testing frameworks have come around the block recently for unit testing your JavaScript.
Other than that, testing the web tier using something like Canoo, HtmlUnit, Selenium, etc. is more a functional or integration test than a unit test. These can be hard to maintain if the UI changes a lot, but they can really come in handy. Recording Selenium tests is easy and something you could probably get other people (testers) to help you create and maintain. Just know that there is a cost associated with maintaining tests, and it needs to be balanced out.
There are other types of testing that are great for the web tier, fuzz testing especially, but a lot of the good options are commercial tools. One that is open source and plugs into Rails is called Tarantula. Something like that at the web tier is nice to have running in a continuous integration process, and it doesn't require much in the way of maintenance.
Unit tests make sense in a TDD process. They do not have much value if you don't do test-first development. However, acceptance tests are a big thing for the quality of the software. I'd say that acceptance tests are the holy grail of development: they show whether the application satisfies the requirements. How do I know when to stop developing a feature? Only when all my acceptance tests pass. Automating acceptance testing is a big thing, because then I do not have to do it all manually each time I make changes to the application. After months of development there can be hundreds of tests, and it becomes unfeasible (sometimes impossible) to run all the tests manually. Then how do I know if my application still works?
Automation of acceptance tests can be implemented with xUnit test frameworks, which creates some confusion here. If I create an acceptance test using PHPUnit or HttpUnit, is it a unit test? My answer is no. It does not matter what tool I use to create and run the test. An acceptance test is one that shows whether a feature works in accordance with the requirements. A unit test shows whether a class (or function) satisfies the developer's implementation idea. A unit test has no value for the client (user); an acceptance test has a lot of value to the client (and thus to the developer; remember Customer Affinity).
So I strongly recommend creating automated acceptance tests for the web application.
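As an illustration, an automated acceptance test written with a unit test framework might look like this (a sketch using JUnit and the classic HtmlUnit 2.x API; the URL, form name and page title are hypothetical):

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlForm;
import com.gargoylesoftware.htmlunit.html.HtmlPage;
import org.junit.jupiter.api.Test;

// Phrased in terms of a requirement, not a class: that is what makes it
// an acceptance test even though it runs under a unit test framework.
class RegistrationAcceptanceTest {

    @Test
    void visitorCanRegisterWithValidDetails() throws Exception {
        try (WebClient browser = new WebClient()) {
            HtmlPage page = browser.getPage("http://localhost:8080/register"); // assumed URL
            HtmlForm form = page.getFormByName("register");                    // assumed form name
            form.getInputByName("email").type("user@example.com");
            HtmlPage result = form.getButtonByName("submit").click();
            assertTrue(result.getTitleText().contains("Welcome"));             // assumed page title
        }
    }
}
```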
Good frameworks for acceptance tests are:
Sahi (sahi.co.in)
Selenium
SimpleTest (it's a unit-test framework for PHP, but it includes a browser object that can be used for acceptance testing)
However, you have mentioned that a website is all about user interaction, and thus test automation will not solve the whole problem of usability. For example: the testing framework shows that all tests pass, yet the user cannot see a form, link or other page element due to an accidental style="display:none" on the div. The automated tests pass because the div is present in the document and the test framework can "see" it. But the user cannot, and a manual test would fail.
Thus, all web applications need manual testing. Automated tests can reduce the test workload drastically (80%), but manual tests remain significant for the quality of the resulting software.
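For what it's worth, WebDriver-style tools can catch that particular display:none case if the test asserts visibility rather than mere presence; a sketch (the locator is an assumption):

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

class VisibilityAssertions {

    // findElement alone proves only that the element is in the document;
    // isDisplayed() returns false for display:none, so this fails the way
    // a human tester would.
    static void assertVisible(WebDriver driver, String cssSelector) {
        WebElement element = driver.findElement(By.cssSelector(cssSelector));
        assertTrue(element.isDisplayed(), cssSelector + " should be visible to the user");
    }
}
```

It does not remove the need for manual testing, but it narrows this particular gap.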
As for unit testing and TDD: it improves code quality. It is beneficial to the developers and to the future of the project (i.e. for projects longer than a couple of months). However, TDD requires skill. If you have the skill, use it. If you don't, consider gaining the skill, but be mindful of the time it will take; it usually takes about 3 to 6 months to start creating good unit tests and code. If your project will last more than a year, I recommend studying TDD and investing time in a proper development environment.
I've created a web test solution (Docker + Cucumber); it's very basic and simple, so it's easy to understand and modify/improve. It lies in the web directory;
my solution: https://github.com/gyulaweber/hosting_tests

BDD GUI Automation

I've started a new role in my life. I was a front-end web developer, but I've now been moved to testing web software, or rather, automating the testing of the software. I believe I am to pursue a BDD (Behavior Driven Development) methodology. I am fairly lost as to what to use and how to piece it together.
The code being used/written is in Java, providing a web interface for the application under test. I have documentation of the tests to run, but I've been curious how to go about automating them.
I've been directed to Cucumber as one of the "languages" to help with the automation. I have done some research and came across a web site with a synopsis of BDD tools/frameworks,
8 Best Behavior Driven Development (BDD) Tools and Testing Frameworks. This helped a little, but then I got a little confused about how to implement it. It seems that Selenium is a common denominator in a lot of the BDD frameworks for testing a GUI, but it still doesn't seem to help describe what to do.
I then came across the term Functional Testing tool, and I think that confused me even more. Do they all test a GUI?
I think the one that looked like it was all one package was SmartBear TestComplete, and then there is, what seems to be, another similar application by SmartBear called SmartBear TestLeft, but I think I saw that they still used Cucumber for the BDD part. There are a few others that looked like they might work as well, but I guess the other question is: what's the cheapest route?
I guess the biggest problem I have is how to make these tests more dynamic, as the UI/browser dimensions can easily change from system to system, and how to go about writing automation that can handle this and ties into a BDD methodology.
Does anyone have any suggestions here? Does anybody out there do this?
Thanks in advance.
BDD Architecture
BDD automation typically consists of a few layers:
The natural language steps
The wiring that ties the steps to their definitions
The step definitions, which usually access page objects
Page objects, which provide all the capabilities of a page or widget
Automation over the actual code being exercised, often through the GUI.
The wiring between natural language steps and the step definitions is normally done by the BDD tool (Cucumber).
The automation is normally done using the automation tool (Selenium). Sometimes people do skip the GUI, perhaps targeting an API or the MVC layer instead. It depends how complex the functionality in your web page is. If in doubt, give Selenium a try. I've written automation frameworks for desktop apps; the principle's the same regardless.
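Here is a minimal sketch of how the layers fit together with Cucumber-JVM and Selenium in Java (the step text, element IDs and the DriverFactory helper are illustrative assumptions):

```java
import io.cucumber.java.en.When;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Layer 3: the step definition. It stays thin and delegates to a page object.
public class RefundSteps {

    private final WebDriver driver = DriverFactory.current(); // assumed helper

    @When("Fred gets a refund to his original card")
    public void fredGetsARefundToHisOriginalCard() {
        new RefundPage(driver).refundToOriginalCard();
    }
}

// Layer 4: the page object owns the details of driving the page, so a UI
// change is fixed here once rather than in every scenario.
class RefundPage {

    private final WebDriver driver;

    RefundPage(WebDriver driver) {
        this.driver = driver;
    }

    void refundToOriginalCard() {
        driver.findElement(By.id("scan-receipt")).click();
        driver.findElement(By.id("refund-original-card")).click();
    }
}
```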
Keeping it maintainable
To make the steps easy to maintain and change, keep the steps at a fairly high level. This is frequently referred to as "declarative" as opposed to "imperative". For instance, this is too detailed:
When Fred provides his receipt
And his receipt is scanned
And the cashier clicks "Refund to original card"
And the card is inserted...
Think about what the user is trying to achieve:
When Fred gets a refund to his original card
Generally a scenario will have a few Givens or Thens, but typically only one When (unless you have something like users interacting or time passing, where both events are needed to illustrate the behaviour).
Your page objects in this scenario might well be a "RefundPageObject" or perhaps, if that's too large, a "RefundToCardPageObject". This pattern allows multiple scenario steps to access the same capabilities without duplication, which means that if the way the capabilities are exercised changes, you only need to change them in one place.
Different page objects could also be used for different systems.
Getting started
If you're attacking this for the first time, start by getting an empty scenario that just runs and passes without doing anything (make the steps empty). When you've done this, you'll have successfully wired up Cucumber.
Write the production code that would make the scenario run. (This is the other way round from the way you'd normally do it; normally you'd write the scenario code first. I've found this is a good way to get started though.)
When you can run your scenario manually, add the automation directly to the steps (you've only got one scenario at this point). Use your favourite assertion package (JUnit) to get the outcome you're after. You'll probably need to change your code so that you can automate over it easily, e.g. by giving relevant test IDs to elements in your webpage.
Once you've got one scenario running, try to write any subsequent scenarios first; this helps you think about your design and the testability of what you're about to do. When you start adding more scenarios, start extracting that automation out into page objects too.
Once you've got a few scenarios, have a think about how you might want to address different systems. Avoid using lots of "if" statements if you can; those are hard to maintain. Injecting different implementations of page objects is probably better (the frameworks may well support this by now; I haven't used them in a while).
Keep refactoring as you add more scenarios. If the steps are too big, split them up. If the page objects are too big, divide them into widgets. I like to organize my scenarios by user / stakeholder capabilities (normally related to the "when" but sometimes to the "then") then by different contexts.
So to summarize:
Write an empty scenario
Write the code to make that pass manually
Wire up the scenario using your automation tool; it should now run!
Write another scenario, this time writing the automation before the production code
Refactor the automation, moving it out of the steps into page objects
Keep refactoring as you add more scenarios.
Now you've got a fully wired BDD framework, and you're in a good place to keep going while making it maintainable.
A final hint
Think of this as living documentation, rather than tests. BDD scenarios hardly ever pick up bugs in good teams; anything they catch is usually a code design issue, so address it at that level. It helps people work out what the code does and doesn't do yet, and why it's valuable.
The most important part of BDD is having the conversations about how the code works. If you're automating tests for code that already exists, see if you can find someone to talk to about the complicated bits, at least, and verify your understanding with them. This will also help you to use the right language in the scenarios.
See my post on using BDD with legacy systems for more. There are lots of hints for beginners on that blog too.
Since you feel lost as to where to start, I will point you to some blog posts I have written that talk a bit about your problem.
Some categories that may help you:
http://www.thinkcode.se/blog/category/Cucumber
http://www.thinkcode.se/blog/category/Selenium
This, rather long and old post, might give you hints as well:
http://www.thinkcode.se/blog/2012/11/01/cucumberjvm-not-just-for-testing-guis
Note that the versions are dated, but hopefully it gives you some ideas about what to look for.
I am not an expert on test automation, but I am currently working in this area, so let me share some ideas and hope they help you at your current stage.
We have used Selenium + Cucumber + IntelliJ for testing web applications, and TestComplete + Cucumber + IntelliJ for testing a Java desktop application.
As for testing the web application, we have provided a test mode in it, which allows us to get some useful details about the product and the environment, and also lets us easily trigger events by clicking buttons and entering text into the test panel while in test mode.
I hope these are helpful for you.

Anyone Using Executable Requirements?

In my limited experience with them executable requirements (i.e. specifying all requirements as broken automated tests) have proven to be amazingly successful. I've worked on one project in which we placed a heavy emphasis on creating high-level automated tests which exercised all the functionality of a given use case/user story. It was really amazing to me how much easier development became after we began this practice. Implementing features became so much easier after writing a test and we were able to make major architectural changes to the system with all the confidence in the world that everything still worked the same as it did yesterday.
The biggest problem we ran into was that the tools for managing these types of tests aren't very good. We used FitNesse quite a bit, and as a result I now hate the Fit framework.
I'd like to know 1) if anyone else has experience developing using this type of test-driven requirement definition and 2) what tools you all used to facilitate this.
The primary tool I've used was also FitNesse. I've used it at several companies, with very good results. We did have test cases numbering in the many thousands, and we had to be very disciplined in how we organized and used them.
I've tried some other tools, including writing my own DSL (domain-specific language) and using things like RSpec. I really like RSpec, but it is certainly more of a developer tool than a business one.
I know Rick Mugridge has been working on a tool called ZiBreve (http://www.zibreve.com/visit.php?page=index) which is supposed to have stronger refactoring support. I haven't used it myself, but I know Rick and have talked to him several times. I know there was discussion at Agile 2008 on some different ways to deal with the Fitnesse tests in general.
Other than that, I haven't seen a lot of good tools out there. Even tools like WinRunner are fine for QA type tests, but for exploratory testing of requirements by the business, FitNesse or a custom DSL seem to be the ways to go right now.
You might want to take a look at Robot Framework (http://robotframework.org). It's FIT-like but hopefully easier to integrate with different testing tools, version control and continuous integration. Different abstraction levels in the test data also make the data easier to maintain, and when the separate test data editor matures, maintenance will get even easier. The quick start guide introduces the most important features of the framework and also acts as an executable demo.
I've had to use, test and set up both FitNesse and one of its competitors, GreenPepper, for my work, and what I can say is:
GreenPepper is a Confluence plugin (Confluence is an enterprise wiki from Atlassian) and has many of the things you need in an "enterprise"-level tool with little to no additional work required:
- A more user-friendly, rich-text wiki syntax (makes it easier to work with for non-technical people)
- It integrates very well with many development tools: Eclipse, VB, Maven 2 and NAnt plugins; I tested most and was very pleased.
- User and access rights are managed by Confluence, which is to say it's good and uses the database of your liking (which might be mandatory depending on where you work)
- Many other functionalities that may or may not be required: SSL support, remote execution (install the wiki on Unix, execute on Windows if you are working on a C# project, or the reverse)
- Looks way better :D
The big downsides of GreenPepper are: configuration is quite hard and the documentation is poor (although they seem to be working on it and they answer quite fast on their forum), and it is not free; you have to pay for both Confluence and GreenPepper, which might add up to quite a lot.
FitNesse is very basic in my opinion: very easy to set up, it works, but that's it. You can use some of the FitNesse plugins developed by the open source community, and even some Fit plugins, such as the Eclipse plugin (builds the skeleton of the fixture from a FitNesse test file, provided it has a .fit extension; very useful). Integration is not ideal, and authentication and access rights management are poor, but it's FREE and if you need something, you can do it because it's open source.
I've found that using contracts is a great approach. Metaprogramming contracts are generally lower-level than the types of integration tests you describe, but the two are certainly not mutually exclusive. I find contracts help keep documentation, implementation, and testing all in sync -- this is a major problem of TDD (not that it isn't a problem in non-TDD).
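For illustration, a contract in plain Java might look like the sketch below (a hypothetical class; real projects often lean on a contract library or on assertions enabled with -ea):

```java
public class Account {

    private long balanceCents;

    public Account(long balanceCents) {
        this.balanceCents = balanceCents;
    }

    /**
     * Withdraws money. The contract doubles as documentation and as a
     * runtime check, which keeps the two from drifting apart.
     */
    public void withdraw(long amountCents) {
        // Preconditions: the documented obligations of the caller.
        if (amountCents <= 0)
            throw new IllegalArgumentException("amount must be positive");
        if (amountCents > balanceCents)
            throw new IllegalStateException("insufficient funds");

        balanceCents -= amountCents;

        // Postcondition: the documented guarantee of this method.
        assert balanceCents >= 0 : "balance must never go negative";
    }
}
```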
I've tried FitNesse and it's really awful (particularly its integration with SVN).
Our company develops a similar open-source tool with a Fit engine: FitPro.
Another brilliant tool I've used is Concordion. Its only disadvantage is that requirements are in HTML format.
My experience is limited to personal projects, and I found much the same advantages you mentioned. I recommend http://metacpan.org/pod/Test::Simple::Tutorial, which was my inspiration for trying out testing-based development. The Perl testing modules seem pretty useful and flexible, though I have nothing to compare them to.
I also believe tests are vital for the maintenance period of a project. If you have good tests to begin with, it saves a lot of time and mistakes later on. I wish I had put more work into tests on my current project.