TDD and BDD Differences

I honestly don't see the difference between BDD and TDD. I mean, both just test whether what is expected actually happens. I've seen BDD tests that are so fleshed out they practically count as TDD tests, and I've seen TDD tests so vague that they black-box a lot of code. Let's just say I'm pretty convinced that having both is better.
Here's a fun question, though. Where do I start? Do I start with high-level BDD tests? Do I start with low-level TDD tests?

I honestly don't see the difference between BDD and TDD.
That's because there isn't any.
I mean, both just test whether what is expected actually happens.
That's wrong. BDD and TDD have absolutely nothing whatsoever to do with testing. None. Nada. Zilch. Zip. Nix. Not in the slightest.
Unfortunately, TDD has the word "test" in pretty much everything (not only in its name, but also in test framework, unit test, TestCase (the class you typically inherit from), FooTest (the class which typically holds your tests), testBar (the typical naming pattern for a test method), plus a lot of test-related terminology such as "assertion" and "verification"), which leads some people to believe that it actually does have something to do with tests. So, some smart people said: "Hey, let's just change the name" to remove any potential for confusion.
And that's what BDD is. It's just TDD with any test-related terminology replaced by examples-of-behavior-related terminology:
Test → Example
Assertion → Expectation
assert → should
Unit → Behavior
Verification → Specification
… and so on
BDD is just TDD with different words. If you do TDD right, you are doing BDD. The difference is that – provided you believe at least in the weak form of the Sapir-Whorf Hypothesis – the different words make it easier to do it right.
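To make the word swap concrete, here is the same check written twice in googletest, as a minimal sketch (the `add` function is hypothetical, made up for illustration): once with TDD vocabulary, once with BDD vocabulary. The mechanics are identical; only the words change.

```cpp
#include <gtest/gtest.h>  // link with gtest_main

// Hypothetical unit under development.
int add(int a, int b) { return a + b; }

// TDD vocabulary: a *test* containing an *assertion*.
TEST(CalculatorTest, testAddition) {
    ASSERT_EQ(4, add(2, 2));
}

// BDD vocabulary: an *example* of *behavior* stating an *expectation*.
// Same mechanics, different words.
TEST(CalculatorSpec, should_return_the_sum_of_its_operands) {
    EXPECT_EQ(4, add(2, 2));
}
```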

BDD is from the customer's point of view and focuses on the expected behavior of the whole system.
TDD is from the developer's point of view and focuses on the implementation of one unit/class/feature. Among its other benefits, it leads to better architecture (design for testability, less coupling between modules).
From a technical point of view (how to write the "test"), they are similar.
I would (from an agile point of view) start with one BDD user story and implement it using TDD, as sketched below.
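A minimal sketch of that workflow in googletest (the `Account` class and the user story are hypothetical): the outer example comes straight from the story, and the inner unit test is the TDD loop that drives out the implementation.

```cpp
#include <gtest/gtest.h>  // link with gtest_main

// Hypothetical domain class driven out by the tests below.
class Account {
public:
    explicit Account(int balance) : balance_(balance) {}
    void withdraw(int amount) { balance_ -= amount; }
    int balance() const { return balance_; }
private:
    int balance_;
};

// Outer loop (BDD): one example taken directly from the user story
// "Given an account holding 100, when I withdraw 30, then 70 remains."
TEST(WithdrawalFeature, customer_can_withdraw_cash) {
    Account account(100);              // Given
    account.withdraw(30);              // When
    EXPECT_EQ(70, account.balance());  // Then
}

// Inner loop (TDD): unit tests written one at a time while implementing
// the pieces the story needs.
TEST(AccountTest, testWithdrawReducesBalance) {
    Account account(100);
    account.withdraw(1);
    EXPECT_EQ(99, account.balance());
}
```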

From what I've gathered on Wikipedia, BDD includes acceptance and QA tests that can't be done without stakeholder/user input. Also, BDD uses a natural language to specify its tests, while TDD usually uses a programming language. There may be some overlap between the two, but I think the main difference is BDD's language, not the vagueness.
As for where to start, well, that really depends on your development process, doesn't it? I assume that if you are working bottom-up, you'll write TDD tests first, and once you reach the higher levels you'll use BDD to check that those features work as expected.
As k3b noted: the main difference is that BDD is problem-domain oriented while TDD is more solution-domain oriented.

Just copying the answer from Matthew Flynn, which I agree with more than "TDD and BDD have nothing to do with tests":
Behavior Driven Development is an extension/revision of Test Driven Development. Its purpose is to help the folks devising the system (i.e., the developers) identify appropriate tests to write -- that is, tests that reflect the behavior desired by the stakeholders. The effect ends up being the same -- develop the test and then develop the code/system that passes the test. The hope in BDD is that the tests are actually useful in showing that the system meets the requirements.
UPDATE
Units of code (individual methods) may be too granular to capture the behavior described by the behavioral tests, but you should still test them with unit tests to guarantee they function appropriately. If this is what you mean by "TDD" tests, then yes, you still need them.
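For instance, a minimal googletest sketch (the parsing helper is hypothetical): no behavioral scenario would ever mention a helper this small, yet it still deserves unit tests of its own.

```cpp
#include <gtest/gtest.h>  // link with gtest_main
#include <stdexcept>
#include <string>

// Hypothetical helper: far too granular to appear in a behavioral test.
int parse_quantity(const std::string& text) {
    int value = std::stoi(text);
    if (value < 0) throw std::invalid_argument("quantity must be non-negative");
    return value;
}

TEST(ParseQuantityTest, testParsesPlainNumbers) {
    EXPECT_EQ(42, parse_quantity("42"));
}

TEST(ParseQuantityTest, testRejectsNegativeValues) {
    EXPECT_THROW(parse_quantity("-1"), std::invalid_argument);
}
```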

BDD is about getting your TDD right. It provides "structure and discipline" to your TDD. It guides you in testing the right thing and doing the right amount of testing. Here is a fantastic small post on BDD and TDD:
http://codingcraft.wordpress.com/2011/11/12/bdd-get-your-tdd-right/

I think the biggest contribution of BDD over TDD or any other approach is making non-technical people (product owners/customers) part of the software development process at all levels.
Writing executable scenarios in natural language has almost bridged the gap between requirements and delivery.
Product owners can run the scenarios they wrote themselves, and test with different data sets if they want to play around with the behavior of the code written by the development team.
That's amazing! The customer sits right at the center, not just saying what they really want but verifying and experiencing the deliverables as well.

A fantastic article on the differences between TDD and BDD:
http://www.lostechies.com/blogs/sean_chambers/archive/2008/12/07/starting-with-bdd-vs-starting-with-tdd.aspx
Should give you everything you need to know, including problems with both, and examples.

The terminology is different, but in my work I use TDD for the details, mainly unit tests, while BDD is more high level, for the customer, QA, or non-technical people.

Overall
BDD is really Design-by-Contract using different terms. Generally speaking, BDD is in the form of Given-When-Then, which is roughly analogous to Preconditions (Given), Check-conditions/Loop-invariants (When), and Post-conditions/Invariants (Then).
Notice
Note that BDD is very much Hoare-logic (i.e. {P}C{Q} or {P}recondition-[C]ommand-{Q}Post-condition). Therefore:
Preconditions (Given) must hold true for the command (method/function) to compute correctly. Any violation of the Given (precondition) signals a fault in the calling Client code.
Command(s) (When) are what happens after the precondition(s) are met. In Eiffel, they can be punctuated within the method or function code with other contracts. Think about these as though they are QA/QC checks along a process assembly line.
Post-conditions (Then) must hold true once the Command (When) is finished.
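A minimal sketch of that Given/When/Then-to-{P}C{Q} mapping, using plain C++ assertions (Eiffel expresses the same contracts natively with require/ensure; the account class here is hypothetical):

```cpp
#include <cassert>

// Hypothetical class whose method carries its own contract.
class BankAccount {
public:
    explicit BankAccount(int balance) : balance_(balance) {}

    void withdraw(int amount) {
        // Given / precondition {P}: a violation here signals a fault
        // in the calling client code.
        assert(amount > 0 && amount <= balance_);
        int old_balance = balance_;

        // When / command C: the computation itself.
        balance_ -= amount;

        // Then / post-condition {Q}: must hold once the command finishes.
        assert(balance_ == old_balance - amount);
    }

    int balance() const { return balance_; }

private:
    int balance_;
};

int main() {
    BankAccount account(100);
    account.withdraw(30);  // contract holds: {P} true, {Q} verified
    return account.balance() == 70 ? 0 : 1;
}
```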
Moral of the Story
Because BDD is just DbC (Hoare logic) repackaged in different words, it is not TDD. Why? Because TDD is not about precondition/check/post-condition contracts tied directly to methods, functions, properties, and class state. TDD is the next step up the ladder: it tests methods, functions, properties, and classes with their discrete states. Once you see this and fully appreciate that TDD is not BDD and BDD is not TDD, but that they are separate and complementary technologies for software correctness proofs, then you will finally understand these topics correctly. You will also use and apply them correctly.
Conclusion
Eiffel is the only language I am aware of where BDD (Design-by-Contract) is baked directly into both the language specification and the compiler. It is not a Frankenstein bolt-on monster with limitations. In Eiffel, BDD (aka DbC) is an elegant, helpful, useful, and direct participant in the software correctness toolbox.
See Also
Wikipedia helps define Hoare logic. See: https://en.wikipedia.org/wiki/Hoare_logic
I have created an example in Eiffel that you can look at. See:
Primary class: https://github.com/ljr1981/stack_overflow_answers/blob/main/src/so_73347395/so_73347395.e
Test class: https://github.com/ljr1981/stack_overflow_answers/blob/main/testing/so_73347395/so_73347395_test_set.e

The main difference is just the wording. BDD uses a more verbose style so that it can be read almost like a sentence.

Related

Can BDD be done "after"?

Unit testing is the practice of writing code tests. TDD is the practice of writing them "before". BDD is the practice of writing behavior/spec-driven tests. Can I write BDD "after", or do I always have to do it "before"?
If you write BDD "after", and it's not BDD, then what is it called?
By definition of Behaviour-Driven Development, you cannot write the behaviour tests after the code; however, that does not mean that doing so isn't useful. You may get more benefit from writing the spec tests first, but they are still useful as regression system tests for your application. So while you're technically not practicing BDD, writing these tests is a good idea. One of the big perks of BDD is that it guides the development of the particular behaviour, so you are losing a lot of value by adding them later, but they still serve some use.
This is the same as writing unit tests after the code in TDD. It's technically not TDD, but having the tests is obviously still useful.
Behavior-Driven Development (BDD) is a variation of Test-Driven Development (TDD), and just like with TDD, you should write your tests first.
Some people call BDD TDD done right, or TDD done the way it was intended. Also, you could say that BDD is a mix of Domain-Driven Design (DDD) and TDD.
BDD after development is not BDD, and it is a case of validation rather than specification.
However as the other guys mentioned, it does not mean that adding in an acceptance test suite after-the-fact has no value. You will be building a suite of regression acceptance tests that validate behaviour, before proceeding with further development (large refactoring jobs or new features being added).
From experience I would say if you are to do this task, it is best that the key developers who wrote the production code stay well away from writing the acceptance tests (hopefully in the form of Gherkin scripts); and those that are writing them go back to the original requirements documentation (if any) and most definitely collaborate with some of the stakeholders in doing so. This will help make sure that the acceptance tests you write are closer to specification.
I like the observation that BDD-after is simply a case of writing validation. I also appreciate the comments that a developer doing BDD-after misses some of the other benefits of BDD-as-you-go. One point that seems worth adding is that writing a scenario/test before the implementation and then having the test pass is also a type of validation that the test itself is sound. Writing a passing test for a feature that already works (BDD-after) may leave a developer wondering whether their test will "fail appropriately" should the feature get broken.

Testability of a design

Much has been written about testing code. But how do we ensure that our design is functionally correct in the first place? Just as we have JUnit for testing Java code, are there tools that can be used, say, to test a UML-based design, where the tests are expressed in the form of functional requirements?
These are some vague thoughts, but wanted to know if there's a methodical, automatable approach to testing the design first. In other words, can we also have 'Test Driven Design'?
are there tools that can be used to test a UML-based design, where the tests are expressed in the form of functional requirements?
You cannot test a design based on functional requirements because the satisfaction of functional requirements depends on the implementation, not on the design.
In other words, you can follow a bad design (any one of the infinite possible designs) and still meet that set of functional requirements if you implement the required behavior.
Interesting topic!
Firstly, no software architects in my personal network use UML as the only way to design their systems, and I also know of no software architects who create UML at the level of detail required to execute a mechanical test.
Secondly, I personally have a deep dislike of the UML modeling tools. If such a formal verification method is implemented, it's likely to be in something like Rational Rose - I swore long ago I'd never go anywhere near that again.
However, having said all that: in formal software shops, it's common to have requirements traceability, typically implemented as a matrix which shows business requirements on one axis and design artifacts on the other. This way, you can see whether any requirements are not matched with a corresponding solution, or whether there are elements in the solution which do not meet a specific business requirement.
Keeping this matrix updated is a pain, so it's not often used in agile teams, but if you're building software for a bank or the space shuttle, it is a valuable technique.
This tells you whether your design is complete - though not whether it's "right".
There's no way I know of to tell whether a design is "right" without either building it and testing, or relying on human experience and knowledge.

How to plan for whitebox testing

I'm relatively new to the world of white-box testing and need help designing a test plan for one of the projects I'm currently working on. At the moment I'm just scouting around looking for testable pieces of code and then writing some unit tests for them. I somehow feel that is by far not the way it should be done. Could you please give me advice on how best to prepare myself for testing this project? Any tools or test plan templates that I could use? The language being used is C++, if it makes a difference.
One of the goals of white-box testing is to cover 100% (or as close as possible) of the code statements. I suggest finding a C++ code coverage tool so that you can see what code your tests execute and what code you have missed. Then design tests so that as much code as possible is tested.
Another suggestion is to look at boundary conditions in if statements, for loops, while loops, etc., and test these for any 'gray' areas, false positives, and false negatives.
You could also design tests to look at the life cycle of important variables. Test their definition, their usage and their destruction to make sure they are being used correctly :)
There are three ideas to get you started. Good luck!
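To illustrate the boundary-condition suggestion above, here is a minimal googletest sketch (the `clamp_percent` function is hypothetical) that probes just below, on, and just above each branch boundary:

```cpp
#include <gtest/gtest.h>  // link with gtest_main

// Hypothetical function with boundaries at 0 and 100.
int clamp_percent(int value) {
    if (value < 0) return 0;
    if (value > 100) return 100;
    return value;
}

// Probe each side of the lower boundary.
TEST(ClampPercentTest, testLowerBoundary) {
    EXPECT_EQ(0, clamp_percent(-1));
    EXPECT_EQ(0, clamp_percent(0));
    EXPECT_EQ(1, clamp_percent(1));
}

// Probe each side of the upper boundary.
TEST(ClampPercentTest, testUpperBoundary) {
    EXPECT_EQ(99, clamp_percent(99));
    EXPECT_EQ(100, clamp_percent(100));
    EXPECT_EQ(100, clamp_percent(101));
}
```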
At the moment I'm just scouting around looking for testable pieces of code and then writing some unit tests for them. I somehow feel that is by far not the way it should be done.
People say that one of the main benefits of test-driven development is that it encourages you to design your components with testability in mind: it makes your components more testable.
My personal (non-TDD) approach is as follows:
Understand the functionality required and implemented: both 'a priori' (i.e. by reading/knowing the software functional specification), and by reading the source code to reverse-engineer the functionality
Implement black box tests for all the implemented/required functionality (see for example 'Should one test internal implementation, or only test public behaviour?').
My testing therefore isn't quite 'white box', except that I reverse-engineer the functionality being tested. I then test that reverse-engineered functionality, and avoid having any useless (and therefore untested) code. I could (but don't often) use a code coverage tool to see how much of the source code is exercised by the black box tests.
Try "Working Effectively with Legacy Code": http://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052
It's relevant since by 'legacy' he means code that has no tests. It's also a rather good book.
Relevant tools are: http://code.google.com/p/googletest/ and http://code.google.com/p/gmock/
There may be other unit test and mock frameworks, but I have familiarity with these and I recommend them highly.
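For anyone unfamiliar with the two, a minimal sketch of them working together (the `Logger` interface and `greet` function are made up for illustration):

```cpp
#include <gmock/gmock.h>  // link with gmock_main
#include <gtest/gtest.h>
#include <string>

// Hypothetical collaborator, replaced by a mock in the test.
class Logger {
public:
    virtual ~Logger() = default;
    virtual void log(const std::string& message) = 0;
};

class MockLogger : public Logger {
public:
    MOCK_METHOD(void, log, (const std::string& message), (override));
};

// Hypothetical unit under test, depending only on the interface.
void greet(Logger& logger) { logger.log("hello"); }

// googletest runs the test; googlemock verifies the interaction
// (EXPECT_CALL with no Times() defaults to exactly one call).
TEST(GreetTest, testLogsGreeting) {
    MockLogger logger;
    EXPECT_CALL(logger, log("hello"));
    greet(logger);
}
```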

The Agile Way: Integration Testing vs Functional Testing or both? [closed]

I work in an office which has been doing Agile for a while now. We use Scrum for project management and mix in the engineering practices of XP. It works well and we are constantly learning lessons and refining our process.
I would like to tell you about our usual practices for testing and get feedback on how this could be improved:
TDD: First Line of Defense
We are quite religious about unit testing and I would say our developers are also experienced enough to write comprehensive tests and always isolate the SUT with mocks.
Integration Tests
For our use, integration tests are basically the same as the unit tests, just without the mocks. This tends to catch a few issues which slipped through the unit tests. These tests tend to be difficult to read, as they usually involve a lot of work in the before_each and after_each sections of the spec framework: the system often has to reach a certain state for the tests to be meaningful.
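(For comparison, here is a minimal sketch of that kind of shared-state setup using a googletest fixture; the spec framework in question is unnamed, and the seeded cart data here is made up.)

```cpp
#include <gtest/gtest.h>  // link with gtest_main
#include <vector>

// SetUp/TearDown play the role of before_each/after_each: every test
// starts from the same known system state.
class CartIntegrationTest : public ::testing::Test {
protected:
    void SetUp() override {
        items_ = {10, 20, 30};  // e.g. seed a real (unmocked) store
    }
    void TearDown() override {
        items_.clear();  // e.g. roll back the seeded data
    }
    std::vector<int> items_;
};

TEST_F(CartIntegrationTest, testSeededCartHasThreeItems) {
    EXPECT_EQ(3u, items_.size());
}
```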
Functional Testing
We usually do this in a structured, but manual fashion. We have played with Selenium and Windmill, which are cool, but for us at least not quite there yet.
I would like to hear how anyone else is doing things. Do you think that if Integration Tests or Functional Testing are being done well enough the other can be disregarded?
Unit, integration and functional testing, though exercising the same code, are attacking it from different perspectives. It's those perspectives that make the difference, if you were to drop one type of testing then something could work its way in from that angle.
Also, unit testing isn't really about testing your code, especially if you are practising TDD. The process of TDD helps you design your code better; you just get the added bonus of a suite of tests at the end of it.
You haven't mentioned whether you have a continuous integration server running. I would strongly recommend setting one up (Hudson is easy to set up). Then you can have your integration and functional tests run against every check in of the code.
We have found that a solid set of Selenium tests actually sums up what the customer expects of quality really well. So, in essence, we've been having this discussion: if writing Selenium tests were as easy as writing unit tests, should we focus less on unit tests?
And if there is a bug somewhere that has no consequence in the application, who really cares? But there are always the issues surrounding real-life complexity: are you sure your functional tests are capturing the correct scenarios? There may be underlying complexities caused by other systems that are not directly visible in the behavior of the application.
In reality, writing Selenium tests (using a proper programming language, not Selenese) does get really simple and fun once you drill through the initial problems. But we're not willing to give up our unit tests quite yet...
Unit testing, integration testing and functional testing all serve different purposes. You should not discard one just because the others are running at a high level of reliability.
I would say (and this is just a matter of opinion) that your functional tests are your true tests, i.e., those tests that actually simulate real-life usage of your application. For this reason, never get rid of them, no matter what.
It sounds like you have a decent system going. Keep it all if you have nothing to lose.
At my current client we don't really distinguish between unit tests and integration tests. The business entities are so dependent on the underlying data layer (using a homegrown ORM framework) that in effect we have little or no true unit tests.
The build server is set up with continuous integration (CI) in Team Build, and to keep it from bogging down too much with slow tests (the full test suite takes over an hour to run on the server), we have separated the tests into "slow" tests that get run twice a day and "fast" tests that get run as part of continuous integration. By setting an attribute on the test method, the build server can tell the difference between the two.
In general, "slow" tests are any that needs to do data-access, use web-services or similar. These would be considered integration tests or functional tests by common convention. Examples are: CRUD-tests, business validation rule tests that need a set of objects to work with, etc.
"Fast" tests are more like unit-tests, where you can reasonably isolate a single object's state and behavior for the test.
I would consider any test that runs in tenths of a second or less to be "fast". Anything else is slow and probably shouldn't be run as part of CI.
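One lightweight way to make that split in a googletest suite (an assumption for illustration, not what the poster's Team Build attributes actually look like) is a naming convention combined with the real `--gtest_filter` flag:

```cpp
#include <gtest/gtest.h>  // link with gtest_main

// "Fast" test: pure in-memory logic, run on every check-in.
TEST(OrderTest, testTotalIsSumOfLineItems) {
    EXPECT_EQ(60, 10 + 20 + 30);
}

// "Slow" test: would touch a database or web service; the suite-name
// suffix lets CI filter it out.
TEST(OrderRepositorySlowTest, testOrderRoundTrip) {
    SUCCEED();  // placeholder for real data access
}
```

The CI build can then run `./tests --gtest_filter=-*SlowTest*` on every check-in and the unfiltered suite twice a day.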
I agree that you should not get too hung up on the "flavor" of test you use as part of development (expressing acceptance criteria as tests is the exception of course). The individual developer should use their judgement in deciding what type of tests best suit their code. Insisting on unit-tests for a business entity might not reveal the faults a CRUD-test would and so on...
I liken unit testing as making sure the words in a paragraph are spelled correctly. Functional testing is like making sure the paragraph makes sense, and flows well within the document it's living within.
I tend not to separate the various flavours of testing in TDD. For me TDD is Test-Driven Development, not Unit-Test-Driven Development. So my TDD practice combines unit tests, integration tests, functional and acceptance tests. This results in some components being covered by certain types of tests and other components being covered by other types of tests, in a very pragmatic fashion.
I have asked a question about the relevance of this practice and the short answer was that in practice the separation is between fast/simple tests run automatically at every build and slow/complex tests run less often.
My company does functional testing but not unit or integration testing. I'm trying to encourage us to adopt them; I see them as encouraging better design and giving a faster indication that all is well. Do you need unit tests if you do functional testing?
I really like Gojko Adzic's concept of a "face saving test" as a way to determine what you test via the UI: http://gojko.net/2007/09/25/effective-user-interface-testing/
You should do all of them, because unit, integration, and functional testing all serve different purposes.
For me, an important point is that the way you write tests is very important, and TDD alone is not enough; BDD (behavior-driven development) offers a good approach...

How are you generating tests from specifications?

I came across a printed article by Bertrand Meyer where he states that tests can be generated from specifications. My development team does nothing like this, but it sounds like a good technique to consider. How are you generating tests from specifications? How would you describe the success you're having in discovering program faults via this method?
This might be a reference to RSpec, which is a really clever way of developing tests as a series of requirements. I'm still getting used to it, but it's been very handy in both defining what I need to do and then ensuring I do it.
@Tim Sullivan: coming from Bertrand Meyer, it can only be related to Eiffel :)
I think he's talking about ESpec. Given the name RSpec from the Ruby folks, I think we can give them the label "heavily inspired".
I would say it depends on your specs. I have yet to work anywhere where the specs were good enough to create full unit tests from specifications - the level of detail just wasn't there. My managers always told us that if we specified to that level they could just ship the specs off to India and get it coded on the cheap ;)
There are all sorts of ways to do it, ranging from what I'd consider an 'art form' (and not necessarily good art) all the way to tests mathematically derived from formal specifications. At the end of the day, your development team needs to decide what they can do based on the schedule they are working with. That being said, being able to test software against specs is a Good Thing.
Only your team can gauge the 'depth' of your tests, and that will probably be a function of how good your specs are. If they say something like, 'the login UI needs to provide a cancel button and a login button, and they need to work', your tests are going to be pretty general. But keep in mind - even very general tests are a Good Thing. Testing is a Good Thing. Too many developers have a bad attitude when it comes to testing, but at the end of the day, you're shipping software which should work, and to me, that means a lot.
The effectiveness your tests will have in finding program faults will depend on the detail you put into them. What is especially nice about having test procedures written to specs is that you can test each build to the same level of detail as the previous build (typically referred to as a regression test).