Unit testing is the practice of writing tests for code. TDD is the practice of writing them "before". BDD is the practice of writing behavior/spec-driven tests. Can I write BDD "after", or do I always have to do it "before"?
If you write BDD "after", and it's not BDD, then what it is called?
By definition of Behaviour-Driven Development, you cannot write the behaviour tests after the code; however, that does not mean that doing so isn't useful. You may get more benefit from writing the spec tests first, but they are still useful as regression system tests for your application. So while you're technically not practicing BDD, writing these tests is a good idea. One of the big perks of BDD is that it guides the development of the particular behaviour, so you lose a lot of value by adding the tests later, but they still serve some use.
This is the same as writing unit tests after the code in TDD. It's technically not TDD, but having the tests is obviously still useful.
Behavior-Driven Development (BDD) is a variation of Test-Driven Development (TDD) and just like with TDD you should write your tests first.
Some people call BDD "TDD done right", or TDD the way it was intended. You could also say that BDD is a mix of Domain-Driven Design (DDD) and TDD.
BDD after development is not BDD, and it is a case of validation rather than specification.
However, as the other guys mentioned, that does not mean that adding an acceptance test suite after the fact has no value. You will be building a suite of regression acceptance tests that validate behaviour before proceeding with further development (large refactoring jobs or new features being added).
From experience I would say that if you are going to do this, it is best that the key developers who wrote the production code stay well away from writing the acceptance tests (hopefully in the form of Gherkin scripts), and that those who do write them go back to the original requirements documentation (if any) and most definitely collaborate with some of the stakeholders in doing so. This will help ensure that the acceptance tests you write are closer to a specification.
I like the observation that BDD-After is simply a case of writing validation. I also appreciate the comments that a developer doing BDD-After misses some of the other benefits of BDD-As-You-Go. One point that seems worth adding is that writing a scenario/test before the implementation and then having the test pass is also a type of validation that the test itself is sound. Writing a passing test for a feature that already works (BDD-After) may leave a developer wondering if their test will "fail appropriately" should the feature get broken.
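To make that last point concrete, here is a minimal sketch (in Python, with a hypothetical discount() feature) of how a BDD-After practitioner might deliberately verify that a test written after the fact can still fail:

```python
# discount() stands in for an already-working, already-shipped feature.

def discount(total):
    # Already shipped: 10% off (rounded down) for orders of 100 or more.
    return total - total // 10 if total >= 100 else total

def test_orders_of_100_or_more_get_ten_percent_off():
    assert discount(100) == 90

# To validate the test itself, sabotage the feature once on purpose
# (e.g. change `>= 100` to `> 100`), run the suite, and confirm this
# test fails appropriately before trusting it as a regression guard.
```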
And further, how do they relate to each other, if they do at all?
What would one need to understand about these various pieces to answer a simple question: how do I properly build a testing facility for my (web or other) application?
Agile Development is a banner term for many things, too numerous to mention, including Scrum and TDD. It typically, but not always, follows the Agile Manifesto.
Scrum
This is a particular flavour of agile; see Wikipedia for a diagram of the process and more information.
Unit Testing
This is the art of writing code that tests code. Failing tests indicate a problem in your solution.
Test Driven Development
This is the practice of writing tests before code, some of the advantages being that untested code isn't added to the solution, and that the code written is testable.
A proper testing facility usually leverages something along the lines of xUnit, JUnit, NUnit, or MSTest, depending on the framework used. These tests are typically run via a Continuous Integration build on some kind of build server; that is, a build that runs every time the code changes and executes the tests. This way problems are identified more quickly.
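For illustration, here is a minimal sketch of the kind of test such a facility runs, using pytest as a stand-in for whichever xunit-style framework applies; slugify() is a hypothetical function under test, and on a CI server this file would simply be executed on every commit:

```python
# A minimal unit test file; a CI build would run it with `pytest`.

def slugify(title):
    # Hypothetical function under test: build a URL slug from a title.
    return title.strip().lower().replace(" ", "-")

def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_slugify_strips_surrounding_whitespace():
    assert slugify("  Hello World  ") == "hello-world"
```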
I've been using TDD for a while now and it works well most of the time. From my understanding, I write the test (red) and work the code (green). This serves as a great tool to focus on coding just what is required.
The application I'm currently working on has fairly loose user requirements, to say the least! This can create the need to change the existing code base in trivial ways all the way up to redesigning full sections.
When I do this a lot of my tests fail ... understandably since the design has changed. What should I do with these old tests? I'm finding maintaining them can become an issue.
I suppose the core of my question is:
Is TDD used more to create a coding "map" to help focus you as a developer to write code, with some other testing paradigm used in conjunction to ensure that everything "works" when the code is handed off? Or do people use TDD as a one-stop shop that can both help create cleaner code AND work as a full test suite, in which case I'll need to maintain my full test suite?
Tests written while doing TDD absolutely are valuable throughout the lifetime of an application.
The purpose of TDD is to allow you to build up your code test by test, so that it always meets the requirements you've implemented thus far (this is the "works" part), is fully tested and is well factored. When you're done you'll have a full regression test suite as a bonus, which is valuable. So for both proving what requirements you've implemented and for regression, it's valuable to keep your tests running.
If requirements change so fast that you can't maintain the test suite, you have a project management problem. Explain to your customers that you don't have enough time to ensure quality, or that they need to hire a test engineer.
I honestly don't see the difference between BDD and TDD. I mean, both just test whether what is expected actually happens. I've seen BDD tests that are so fleshed out they practically count as TDD tests, and I've seen TDD tests that are so vague that they black-box a lot of code. Let's just say I'm pretty convinced that having both is better.
Here's a fun question though. Where do I start? Do I start out with high level BDD tests? Do I start out with low level TDD tests?
I honestly don't see the difference between BDD and TDD.
That's because there isn't any.
I mean, both just test whether what is expected actually happens.
That's wrong. BDD and TDD have absolutely nothing whatsoever to do with testing. None. Nada. Zilch. Zip. Nix. Not in the slightest.
Unfortunately, TDD has the word "test" in pretty much everything (not only in its name, but also in test framework, unit test, TestCase (the class you typically inherit from), FooTest (the class which typically holds your tests), testBar (the typical naming pattern for a test method), plus a lot of test-related terminology such as "assertion" and "verification") which leads some people to believe that it actually does have something to do with tests. So, some smart people said: "Hey, let's just change the name" to remove any potential for confusion.
And that's what BDD is. It's just TDD with any test-related terminology replaced by examples-of-behavior-related terminology:
Test → Example
Assertion → Expectation
assert → should
Unit → Behavior
Verification → Specification
… and so on
BDD is just TDD with different words. If you do TDD right, you are doing BDD. The difference is that – provided you believe at least in the weak form of the Sapir-Whorf Hypothesis – the different words make it easier to do it right.
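As an illustration of "different words, same practice", here is the same hypothetical check written twice in Python, once with TDD vocabulary and once with BDD-flavoured naming (the Stack class is made up for this example):

```python
# The same check twice: first with TDD vocabulary, then with BDD-flavoured
# naming. The Stack class is hypothetical.

class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

# TDD words: test / assert, named after the unit under test.
def test_pop():
    stack = Stack()
    stack.push(42)
    assert stack.pop() == 42

# BDD words: the very same check, named as an example of behaviour and
# read as an expectation ("the stack *should* return the most recently
# pushed item").
def test_stack_should_return_the_most_recently_pushed_item():
    stack = Stack()
    stack.push(42)
    assert stack.pop() == 42
```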
BDD is from the customer's point of view and focuses on the expected behavior of the whole system.
TDD is from the developer's point of view and focuses on the implementation of one unit/class/feature. Among its benefits is better architecture (design for testability, less coupling between modules).
From a technical point of view (how to write the "test") they are similar.
I would (from an agile point of view) start with one BDD user story and implement it using TDD.
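A minimal sketch of that outside-in flow, assuming a hypothetical shopping-cart story and Python tests: one behaviour-level example states the story, and unit-level TDD drives out the implementation underneath it.

```python
# Outside: one behaviour-level example stating the (hypothetical) story
# "As a shopper, I see the cart total update when I add an item."
def test_shopper_sees_updated_total_after_adding_an_item():
    cart = Cart()
    cart.add(name="book", price=1250)  # prices in cents to avoid float issues
    assert cart.total() == 1250

# Inside: unit-level TDD drives out the Cart implementation step by step.
class Cart:
    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

def test_total_is_zero_for_an_empty_cart():
    assert Cart().total() == 0
```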
From what I've gathered on Wikipedia, BDD includes acceptance and QA tests that can't be done without stakeholder/user input. Also, BDD uses a natural language to specify its tests, while TDD usually uses a programming language. There might be some overlap between the two, but I think it's BDD's language, not the vagueness, that is the main difference.
As for where to start, well, that really depends on your development process, doesn't it? I assume if you are working bottom-up that you're going to write TDD tests first, and once you reach a higher level you'll use BDD to test whether those features work as expected.
As k3b noted: the main difference would be that BDD is problem-domain oriented while TDD is more solution-domain oriented.
Just copying the answer from Matthew Flynn, which I agree with more than "TDD and BDD have nothing to do with tests":
Behavior Driven Development is an extension/revision of Test Driven Development. Its purpose is to help the folks devising the system (i.e., the developers) identify appropriate tests to write -- that is, tests that reflect the behavior desired by the stakeholders. The effect ends up being the same -- develop the test and then develop the code/system that passes the test. The hope in BDD is that the tests are actually useful in showing that the system meets the requirements.
UPDATE
Units of code (individual methods) may be too granular to capture the behavior described by the behavioral tests, but you should still test them with unit tests to guarantee they function appropriately. If this is what you mean by "TDD" tests, then yes, you still need them.
BDD is about getting your TDD right. It provides structure and discipline to your TDD. It guides you in testing the right thing and doing the right amount of testing. Here is a fantastic small post on BDD and TDD:
http://codingcraft.wordpress.com/2011/11/12/bdd-get-your-tdd-right/
I think the biggest contribution of BDD over TDD or any other approach is making non-technical people (product owners/customers) part of the software development process at all levels.
Writing executable scenarios in natural language has almost bridged the gap between requirements and delivery.
Product owners can themselves run the scenarios they have written, and test with different data sets if they want to play around with the behavior of the code written by the development team.
That's amazing! The customer sits right at the center, not just saying what they really want but verifying and experiencing the deliverables as well.
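In practice such scenarios are usually written in plain language (e.g. Gherkin) and bound to code by tools like Cucumber or behave; the sketch below shows the same Given/When/Then shape expressed directly in Python, with a made-up transfer() feature:

```python
# transfer() and the account dicts are made up for this example.
def transfer(source, target, amount):
    source["balance"] -= amount
    target["balance"] += amount

def test_transferring_money_between_two_accounts():
    # Given two accounts with known balances
    savings = {"balance": 100}
    checking = {"balance": 20}
    # When 30 is transferred from savings to checking
    transfer(savings, checking, 30)
    # Then both balances reflect the transfer
    assert savings["balance"] == 70
    assert checking["balance"] == 50
```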
A fantastic article on the differences between TDD and BDD:
http://www.lostechies.com/blogs/sean_chambers/archive/2008/12/07/starting-with-bdd-vs-starting-with-tdd.aspx
Should give you everything you need to know, including problems with both, and examples.
The terminology is different, but in my work I use TDD for the details, mainly for unit tests, while BDD is more high-level, for the customer, QA, or non-technical people.
Overall
BDD is really Design-by-Contract using different terms. Generally speaking, BDD is in the form of Given-When-Then, which is roughly analogous to Preconditions (Given), Check-conditions/Loop-invariants (When), and Post-conditions/Invariants (Then).
Notice
Note that BDD is very much Hoare-logic (i.e. {P}C{Q} or {P}recondition-[C]ommand-{Q}Post-condition). Therefore:
Preconditions (Given) must hold true for the command (method/function) to compute correctly. Any violation of the Given (precondition) signals a fault in the calling Client code.
Command(s) (When) are what happens after the precondition(s) are met. In Eiffel, they can be punctuated within the method or function code with other contracts. Think about these as though they are QA/QC checks along a process assembly line.
Post-conditions (Then) must hold true once the Command (When) is finished.
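Here is a minimal sketch of that analogy in Python, using plain assert statements where Eiffel would use native require/ensure clauses (the Account class and its rules are hypothetical):

```python
# Account and its rules are made up; the asserts mark where Eiffel
# contracts would sit in the Given/When/Then = {P}C{Q} analogy.

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        # Given ({P}recondition): violations are faults in the caller.
        assert amount > 0, "amount must be positive"
        assert amount <= self.balance, "insufficient funds"

        old_balance = self.balance

        # When ([C]ommand): the computation itself, with a mid-routine
        # check, like a QA/QC station on an assembly line.
        self.balance -= amount
        assert self.balance >= 0, "invariant: balance never negative"

        # Then ({Q} post-condition): must hold once the command finishes.
        assert self.balance == old_balance - amount
        return amount
```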
Moral of the Story
Because BDD is just DbC (Hoare-logic) repackaged in different words, this means it is not TDD. Why? Because TDD is not about preconditions/checks/post-condition contracts tied directly to methods, functions, properties, and class-state. TDD is the next step up the ladder in testing methods, functions, properties, and classes with their discrete states. Once you see this and fully appreciate that TDD is not BDD and BDD is not TDD, but that they are separate and complementary technologies for software correctness proofs—THEN—you will finally understand these topics correctly. You will also use and apply them correctly.
Conclusion
Eiffel is the only language I am aware of where BDD (Design-by-Contract) is baked directly into both the language specification and the compiler. It is not a Frankenstein bolt-on monster with limitations. In Eiffel, BDD (aka DbC) is an elegant, helpful, useful, and direct participant in the software correctness toolbox.
See Also
Wikipedia helps define Hoare logic. See: https://en.wikipedia.org/wiki/Hoare_logic
I have created an example in Eiffel that you can look at. See:
Primary class: https://github.com/ljr1981/stack_overflow_answers/blob/main/src/so_73347395/so_73347395.e
Test class: https://github.com/ljr1981/stack_overflow_answers/blob/main/testing/so_73347395/so_73347395_test_set.e
The main difference is just the wording. BDD uses a more verbose style so that it can be read almost like a sentence.
There are a lot of testing methods out there, e.g. black box, gray box, unit, functional, regression, etc.
Obviously, a project cannot take on all testing methods. So I asked this question to get an idea of which test methods to use and why I should use them. You can answer in the following format:
Test Method - what you use it on
e.g.
Unit Testing - I use it for ...(blah, blah)
Regression Testing - I use it for ...(blah, blah)
I was asked to engage in TDD, and of course I had to research testing methods. But there is a whole plethora of them, and I don't know which to use (because they all sound useful).
1. Unit Testing is used by developers to ensure that the unit of code they wrote is correct. This is usually white-box testing, as well as some level of black-box testing.
2. Regression Testing is functional testing used by testers to ensure that new changes in the system have not broken any existing functionality.
3. Functional Testing is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. Functional testing falls within the scope of black-box testing, and as such should require no knowledge of the inner design of the code or logic.
These wiki articles on Test-Driven Development and Feature-Driven Development will be of great help to you.
For TDD you need to follow this process:
1. Document a feature (or use case) that you need to implement or enhance in your application and that does not currently exist.
2. Write a set of functional test cases that can ensure the feature from step 1 works. You may need to write multiple test cases to cover all the different possible workflows (see the sketch after this list).
3. Write code to implement the feature from step 1.
4. Test this code using the test cases you wrote earlier (in step 2). The actual testing can be manual, but I would recommend creating automated tests if possible.
5. If all test cases pass, you are good to go. If not, update the code (go back to step 3) to make the test cases pass.
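Here is the sketch promised in step 2, assuming a made-up "shipping cost by parcel weight" feature and pytest: several test cases cover the feature's different workflows, and the minimal step 3 implementation makes them pass.

```python
import pytest

# Step 3: a minimal implementation of the hypothetical feature
# (all names, weights, and prices are made up).
def shipping_cost(weight_kg):
    if weight_kg <= 1.0:
        return 5.00
    if weight_kg <= 20.0:
        return 8.00
    return 20.00  # heavy parcels: capped price

# Step 2: one feature, several test cases covering its workflows.
@pytest.mark.parametrize("weight_kg, expected", [
    (0.5, 5.00),    # light parcel: flat rate
    (2.0, 8.00),    # standard parcel
    (25.0, 20.00),  # heavy parcel: capped
])
def test_shipping_cost_workflows(weight_kg, expected):
    assert shipping_cost(weight_kg) == expected
```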
The point of TDD is to ensure that the functional test cases written before you coded actually pass; it does not matter how the code was implemented.
There is no "right" or "wrong" in testing. Testing is an art and what you should choose and how well it works out for you depends a lot from project to project and your experience.
But as a professional Test Expert my suggestion is that you have a healthy mix of automated and manual testing.
(Examples below are in PHP, but you can easily find equivalent examples for whatever language/framework you are using)
AUTOMATED TESTING
Unit Testing
Use PHPUnit to test your classes, functions and interaction between them.
http://phpunit.sourceforge.net/
Automated Functional Testing
If it's possible you should automate a lot of the functional testing. Some frameworks have functional testing built into them. Otherwise you have to use a tool for it. If you are developing web sites/applications you might want to look at Selenium.
http://www.webinade.com/web-development/functional-testing-in-php-using-selenium-ide
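As a rough illustration (sketched with Selenium's Python bindings rather than PHP, and with a hypothetical login page, field names, and URL), an automated functional test might look like this:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()  # requires geckodriver on the PATH
try:
    driver.get("https://example.com/login")  # hypothetical URL
    driver.find_element(By.NAME, "username").send_keys("alice")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    # Expected post-login page title; adjust for your application.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```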
Continuous Integration
Use CI to make sure all your automated tests run every time someone in your team makes a commit to the project.
http://martinfowler.com/articles/continuousIntegration.html
MANUAL TESTING
As much as I love automated testing, it is, IMHO, not a substitute for manual testing. The main reason is that an automated test can only do what it is told and only verify what it has been told to view as pass/fail. A human can use their intelligence to find faults and raise questions that appear while testing something else.
Exploratory Testing
ET is a very low-cost and effective way to find defects in a project. It takes advantage of the intelligence of a human being and teaches the testers/developers more about the project than any other testing technique I know of. Doing an ET session aimed at every feature deployed in the test environment is not only an effective way to find problems fast, but also a good way to learn, and fun!
http://www.satisfice.com/articles/et-article.pdf
This answer is (almost) identical to one that I gave to another question. Check out that question since it had some other good answers that might help you.
How can we decide which testing method can be used?
I usually do the following things:
Page consistency in the case of multi-page web sites.
Testing the database connections.
Testing the functionalities that can be affected by the change I just made.
I test functions with sample input to make sure they work fine (especially those that are algorithm-like).
In some cases I implement features very simply, hard-coding most of the settings, then implement the settings later, testing after implementing each setting.
Most of these apply to applications, too.
Before getting to the answer, I would like to clarify some testing concepts covering multiple methods.
There are six main testing types, which cover almost all testing methods.
Black Box Testing
White Box Testing
Grey Box Testing
Functional Testing
Integration Testing
Usability Testing
Almost all testing methods lie under these types, and you can also use some testing methods in multiple types; for example, you can use smoke testing in a black-box or white-box approach depending on the resources available to test.
So for testing a web site completely you need to use at least the following testing methods, based on the resources available. These are the minimum methods that should be used to test a web site, but there may be more important methods depending on the nature of the website.
Requirement Testing
Smoke Testing
System Testing
Integration Testing
Regression Testing
Security Testing
Performance & Load Testing
Deployment Testing
You should use at least all of the above (8) testing methods to test a web site, no matter which testing type you are focusing on. You can automate your tests in some areas and do others manually; it all depends on resource availability.
There is no hard and fast rule to follow for any testing type or method. As you know, "testing is an art", and art doesn't have rules or boundaries. It's totally up to you what you use to test, and how.
Hope you got the answer to your question.
Selenium is very good for testing websites.
The answer depends on the Web framework used (if any). Django for example has built-in testing functions.
For PHP (or functional web testing), SimpleTest is pretty good and, well... simple. It supports Unit Testing (PHP only) and Web Testing. Tests can run in the IDE (Eclipse) or in the browser (meaning on your server).
The other answers posted so far focus on unit/functional/performance/etc. testing, and they are all reasonable.
However, one of the key questions you should ask is, "how effective is my testing?".
This is often answered with test coverage tools, which determine which parts of your application actually get exercised by some set of tests. The ideal test coverage tool lets you test your application by any method you can imagine (including all the standard answers above) and will then report what part and what percentage of your code was exercised. Most importantly, it will tell you what code you did not exercise. You can then inspect that code and decide if more testing is warranted, or if you don't care. If the untested code has to do with "disk full" error handling and you believe that 1TB disks are common, you might decide to ignore that. If the untested code is the input validation logic leading to SQL queries, you might decide that you must test that logic to ensure that no SQL injection attacks can occur.
What test coverage tools let you do is make a rational decision that you have tested adequately, using data about which parts of your code have been exercised. So regardless of how you test, best practice indicates you should also do test coverage analysis.
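As a small sketch of what this looks like in practice, using the Python coverage.py package (most teams just run its CLI, `coverage run -m pytest` followed by `coverage report`; myapp here is a hypothetical module):

```python
import coverage

cov = coverage.Coverage()
cov.start()

import myapp          # hypothetical module under test
myapp.do_something()  # exercise it however your tests normally would

cov.stop()
cov.save()
cov.report(show_missing=True)  # lists the lines that were never exercised
```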
Test coverage tools can be obtained from a variety of sources. SD provides a family of test coverage tools that handle C, C++, Java, C#, PHP and COBOL, all of which are used to support web site testing in various ways.
I work in an office which has been doing Agile for a while now. We use Scrum for project management and mix in the engineering practices of XP. It works well and we are constantly learning lessons and refining our process.
I would like to tell you about our usual practices for testing and get feedback on how this could be improved:
TDD: First Line of Defense
We are quite religious about unit testing and I would say our developers are also experienced enough to write comprehensive tests and always isolate the SUT with mocks.
Integration Tests
For our use, integration tests are basically the same as the unit tests, just without using the mocks. This tends to catch a few issues which slipped through the unit tests. These tests tend to be difficult to read, as they usually involve a lot of work in the before_each and after_each sections of the spec framework, since the system often has to reach a certain state in order for the tests to be meaningful.
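A minimal Python sketch of that distinction, with a hypothetical UserService and repository: the unit test mocks the collaborator, while the integration test wires in a real (here, in-memory) implementation and drops the mock.

```python
from unittest.mock import Mock

# UserService and its repository are hypothetical.
class UserService:
    def __init__(self, repo):
        self.repo = repo

    def greeting(self, user_id):
        return f"Hello, {self.repo.find_name(user_id)}!"

# Unit test: the collaborator is mocked, so only UserService is exercised.
def test_greeting_unit():
    repo = Mock()
    repo.find_name.return_value = "Alice"
    assert UserService(repo).greeting(1) == "Hello, Alice!"

# Integration test: the same scenario without the mock, wired to a real
# repository (here a trivial in-memory stand-in for the data layer).
class InMemoryRepo:
    def find_name(self, user_id):
        return {1: "Alice"}[user_id]

def test_greeting_integration():
    assert UserService(InMemoryRepo()).greeting(1) == "Hello, Alice!"
```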
Functional Testing
We usually do this in a structured, but manual fashion. We have played with Selenium and Windmill, which are cool, but for us at least not quite there yet.
I would like to hear how anyone else is doing things. Do you think that if integration tests or functional testing are being done well enough, the other can be disregarded?
Unit, integration and functional testing, though exercising the same code, are attacking it from different perspectives. It's those perspectives that make the difference, if you were to drop one type of testing then something could work its way in from that angle.
Also, unit testing isn't really about testing your code, especially if you are practising TDD. The process of TDD helps you design your code better, you just get the added bonus of a suite of tests at the end of it.
You haven't mentioned whether you have a continuous integration server running. I would strongly recommend setting one up (Hudson is easy to set up). Then you can have your integration and functional tests run against every check in of the code.
We have experienced that a solid set of selenium tests actually sums up what the customer expects of quality really well. So, in essence we've been having this discussion: If writing selenium tests was as easy as writing unit tests we should focus less on unit tests.
And if there is a bug somewhere that does not have any consequence in the application, who really cares? But there are always issues surrounding real-life complexities: are you sure your functional tests are capturing the correct scenarios? There may be underlying complexities caused by other systems that are not directly visible in the behavior of the application.
In reality, writing selenium tests (using a proper programming language, not selenese) does get really simple and fun once you drill through the initial problems. But we're not willing to give up our unit tests quite yet....
Unit testing, integration testing and functional testing all serve different purposes. You should not discard one just because the others are running at a high level of reliability.
I would say (and this is just a matter of opinion) that your functional tests are your true tests, i.e. those tests that actually simulate real-life usage of your application. For this reason, never get rid of them, no matter what.
It sounds like you have a decent system going. Keep it all if you have nothing to lose.
At my current client we don't really separate between unit tests and integration tests. The business entities are so dependent on the underlying data layer (using a homegrown ORM framework) that in effect we have little or no true unit tests.
The build server is set up with continuous integration (CI) in Team Build, and to keep it from getting bogged down with slow tests (the full test suite takes over an hour to run on the server) we have separated the tests into "slow" tests that get run twice a day and "fast" tests that get run as part of continuous integration. By setting an attribute on the test method, the build server can tell the difference between the two.
In general, "slow" tests are any that need to do data access, use web services, or similar. These would be considered integration tests or functional tests by common convention. Examples are: CRUD tests, business validation rule tests that need a set of objects to work with, etc.
"Fast" tests are more like unit-tests, where you can reasonably isolate a single object's state and behavior for the test.
I would consider any test that run in tenths of a second or less to be "fast". Anything else is slow and probably shouldn't be run as part of CI.
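For comparison, the same slow/fast split can be sketched with pytest markers instead of a Team Build attribute (the mechanism is analogous; the tests here are made up):

```python
import pytest

def test_validate_email_is_fast():
    # Pure in-memory logic; runs in well under a tenth of a second.
    assert "@" in "user@example.com"

@pytest.mark.slow  # register the "slow" marker in pytest.ini to avoid warnings
def test_full_crud_round_trip():
    # Would hit a real database or web service, so it is marked slow
    # and excluded from the per-commit CI run.
    ...

# Per-commit CI build:     pytest -m "not slow"
# Twice-daily slow build:  pytest -m slow
```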
I agree that you should not get too hung up on the "flavor" of test you use as part of development (expressing acceptance criteria as tests is the exception of course). The individual developer should use their judgement in deciding what type of tests best suit their code. Insisting on unit-tests for a business entity might not reveal the faults a CRUD-test would and so on...
I liken unit testing to making sure the words in a paragraph are spelled correctly. Functional testing is like making sure the paragraph makes sense and flows well within the document it lives in.
I tend not to separate various flavours of testing in TDD. For me TDD is Test-Driven Development, not Unit Test-Driven Development. So my TDD practice combines unit tests, integration tests, functional and acceptance tests. This results in some components being covered by certain types of tests and others components being covered by other types of tests in a very pragmatic fashion.
I have asked a question about the relevance of this practice and the short answer was that in practice the separation is between fast/simple tests run automatically at every build and slow/complex tests run less often.
My company does functional testing but not unit or integration testing. I'm trying to encourage us to adopt them; I see them as encouraging better design and giving a faster indication that all is well. Do you need unit tests if you do functional testing?
I really like Gojko Adzic's concept of a "face saving test" as a way to determine what you test via the UI: http://gojko.net/2007/09/25/effective-user-interface-testing/
You should do all of them, because unit, integration, and functional testing all serve different purposes.
For me, an important point is that the way you write tests is very important, and TDD is not enough; BDD (behavior-driven development) gives a good approach...