The Agile Way: Integration Testing vs Functional Testing or both? [closed] - testing

I work in an office which has been doing Agile for a while now. We use Scrum for project management and mix in the engineering practices of XP. It works well and we are constantly learning lessons and refining our process.
I would like to tell you about our usual practices for testing and get feedback on how this could be improved:
TDD: First Line of Defense
We are quite religious about unit testing and I would say our developers are also experienced enough to write comprehensive tests and always isolate the SUT with mocks.
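For anyone newer to this style, here is a minimal sketch of what such an isolated unit test can look like, using Python's standard unittest and unittest.mock; the OrderService and gateway names are hypothetical, purely for illustration:

```python
# A minimal sketch of a unit test that isolates the SUT with a mock.
# OrderService and its payment gateway are hypothetical examples.
import unittest
from unittest.mock import Mock


class OrderService:
    """Hypothetical SUT that depends on an external payment gateway."""

    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        return "confirmed" if self.gateway.charge(amount) else "declined"


class OrderServiceTest(unittest.TestCase):
    def test_order_is_confirmed_when_charge_succeeds(self):
        # The gateway is mocked, so the SUT is exercised in isolation.
        gateway = Mock()
        gateway.charge.return_value = True

        service = OrderService(gateway)

        self.assertEqual(service.place_order(100), "confirmed")
        gateway.charge.assert_called_once_with(100)


if __name__ == "__main__":
    unittest.main()
```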
Integration Tests
For our use, integration tests are basically the same as the unit tests, just without the mocks. These tend to catch a few issues that slipped through the unit tests. The tests tend to be difficult to read, as they usually involve a lot of work in the before_each and after_each sections of the spec framework: the system often has to reach a certain state for the tests to be meaningful.
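As a rough illustration of that setup-heavy style, here is a sketch in Python's unittest, with the before_each/after_each work living in setUp/tearDown; an in-memory SQLite database stands in for whatever real dependency your tests would exercise:

```python
# A sketch of the "same test, no mocks" style: setUp/tearDown do the
# heavy lifting of getting the system into a meaningful state. Uses an
# in-memory SQLite database so the example is self-contained.
import sqlite3
import unittest


class UserRepositoryIntegrationTest(unittest.TestCase):
    def setUp(self):
        # Reach the state the test needs: schema plus seed data.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        self.conn.execute("INSERT INTO users (name) VALUES ('alice')")
        self.conn.commit()

    def tearDown(self):
        self.conn.close()

    def test_finds_existing_user(self):
        row = self.conn.execute(
            "SELECT name FROM users WHERE name = ?", ("alice",)
        ).fetchone()
        self.assertEqual(row[0], "alice")


if __name__ == "__main__":
    unittest.main()
```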
Functional Testing
We usually do this in a structured, but manual fashion. We have played with Selenium and Windmill, which are cool, but for us at least not quite there yet.
I would like to hear how anyone else is doing things. Do you think that if Integration Tests or Functional Testing are being done well enough the other can be disregarded?

Unit, integration and functional testing, though exercising the same code, attack it from different perspectives. It's those perspectives that make the difference; if you were to drop one type of testing, a defect could work its way in from that angle.
Also, unit testing isn't really about testing your code, especially if you are practising TDD. The process of TDD helps you design your code better; you just get the added bonus of a suite of tests at the end of it.
You haven't mentioned whether you have a continuous integration server running. I would strongly recommend setting one up (Hudson is easy to set up). Then you can have your integration and functional tests run against every check in of the code.

We have found that a solid set of Selenium tests actually sums up what the customer expects of quality really well. So, in essence, we've been having this discussion: if writing Selenium tests were as easy as writing unit tests, we should focus less on unit tests.
And if there is a bug somewhere that has no consequence in the application, who really cares? But there are always issues surrounding real-life complexity: are you sure your functional tests are capturing the correct scenarios? There may be underlying complexities caused by other systems that are not directly visible in the behavior of the application.
In reality, writing Selenium tests (using a proper programming language, not Selenese) does get really simple and fun once you drill through the initial problems. But we're not willing to give up our unit tests quite yet...
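For a flavour of what that looks like, here is a minimal sketch using the Selenium WebDriver Python bindings (pip install selenium); the URL and element names are hypothetical placeholders for your own application:

```python
# A sketch of a Selenium test written in a real programming language
# rather than Selenese. The URL and form fields are assumptions.
import unittest

from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginTest(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Firefox()

    def tearDown(self):
        self.driver.quit()

    def test_user_can_log_in(self):
        self.driver.get("http://localhost:8000/login")
        self.driver.find_element(By.NAME, "username").send_keys("alice")
        self.driver.find_element(By.NAME, "password").send_keys("secret")
        self.driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
        self.assertIn("Welcome", self.driver.page_source)


if __name__ == "__main__":
    unittest.main()
```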

Unit testing, integration testing and functional testing all serve different purposes. You should not discard one just because the others are running at a high level of reliability.

I would say (and this is just a matter of opinion) that your functional tests are your true tests, i.e. those tests that actually simulate real-life usage of your application. For this reason, never get rid of them, no matter what.
It sounds like you have a decent system going. Keep it all if you have nothing to lose.

At my current client we don't really distinguish between unit tests and integration tests. The business entities are so dependent on the underlying data layer (using a homegrown ORM framework) that in effect we have little or no true unit tests.
The build server is set up with continuous integration (CI) in Team Build, and to keep this from bogging down too much with slow tests (the full test suite takes over an hour to run on the server) we have separated the tests into "slow" tests that get run twice a day and "fast" tests that get run as part of continuous integration. By setting an attribute on the test method, the build server can tell the difference between the two.
In general, "slow" tests are any that needs to do data-access, use web-services or similar. These would be considered integration tests or functional tests by common convention. Examples are: CRUD-tests, business validation rule tests that need a set of objects to work with, etc.
"Fast" tests are more like unit-tests, where you can reasonably isolate a single object's state and behavior for the test.
I would consider any test that runs in tenths of a second or less to be "fast". Anything else is slow and probably shouldn't be run as part of CI.
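The answer above tags test methods with an attribute for Team Build; as an analogous sketch, pytest markers can draw the same fast/slow line, so CI can run `pytest -m "not slow"` on every check-in and the full suite twice a day (the `slow` marker would need to be registered in pytest.ini):

```python
# An analogous fast/slow split using pytest markers (an assumption,
# not the Team Build mechanism the answer describes).
import pytest


def test_discount_calculation():
    # "Fast": pure in-memory logic, runs in milliseconds.
    assert round(100 * 0.9, 2) == 90.0


@pytest.mark.slow
def test_order_crud_roundtrip():
    # "Slow": would touch the database or a web service in a real suite,
    # so it only runs in the twice-daily full build.
    ...
```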
I agree that you should not get too hung up on the "flavor" of test you use as part of development (expressing acceptance criteria as tests is the exception of course). The individual developer should use their judgement in deciding what type of tests best suit their code. Insisting on unit-tests for a business entity might not reveal the faults a CRUD-test would and so on...

I liken unit testing to making sure the words in a paragraph are spelled correctly. Functional testing is like making sure the paragraph makes sense, and flows well within the document it lives in.

I tend not to separate the various flavours of testing in TDD. For me TDD is Test-Driven Development, not Unit Test-Driven Development. So my TDD practice combines unit tests, integration tests, functional and acceptance tests. This results in some components being covered by certain types of tests and other components being covered by other types of tests, in a very pragmatic fashion.
I have asked a question about the relevance of this practice and the short answer was that in practice the separation is between fast/simple tests run automatically at every build and slow/complex tests run less often.

My company does functional testing but not unit or integration testing. I'm trying to encourage us to adopt them; I see them as encouraging better design and giving a faster indication that all is well. Do you need unit tests if you do functional testing?

I really like Gojko Adzic's concept of a "face saving test" as a way to determine what you test via the UI: http://gojko.net/2007/09/25/effective-user-interface-testing/

You should do all of them, because unit, integration and functional testing all serve different purposes.
For me, an important point is that the way you write tests matters a great deal, and TDD is not enough; BDD (behaviour-driven development) gives a good approach...

Related

Need of Integration testing

We have an Eclipse UI on the frontend and a non-Java-based backend.
We generally write unit tests separately for both the frontend and the backend.
We also write PDE tests, which run the Eclipse UI against a dummy backend.
My question is: do we need integration tests which test end to end?
One reason I can see that these integration tests are useful: when I upgrade my frontend/backend, I can run the end-to-end tests and find defects.
I know these kinds of questions depend on the particular scenario.
But I would like to know the general best practice followed by everyone here.
cheers,
Saurav
As you say, the best approach is dependent on the application. However, in general it is a good idea to have a suite of integration tests that can test your application end-to-end, to pick up any issues that may occur when you upgrade only one layer of the application without taking those changes into account in another layer. This sounds like it would definitely be worthwhile in your case, given that you have system components written in different languages, which naturally creates more chance of issues arising due to the added complexity around the component interfaces.
One thing to be aware of when writing end-to-end integration tests (which some would call system tests) is that they tend to be quite fragile compared to unit tests, for a combination of reasons, including:
They require multiple components to be available for the tests, and for the communication between these components to be configured correctly.
They exercise more code than a unit test, and therefore there are more things that can go wrong that can cause them to fail.
They often involve asynchronous communication, which is more difficult to write tests for than synchronous communication.
They often require complex backend data setup before you can drive tests through the entire application.
Because of this fragility, I would advise writing as few tests as possible that go through the whole stack: the focus should be on covering as much functionality as you can in the fewest tests, with a bias towards your most important functional use-cases. A good strategy to get started would be:
Pick one key use-case (which ideally touches as many components in the application as possible), and work on getting an end-to-end test for it (see the sketch after this list). Focus on making this test as realistic as possible (i.e. use a production-like deployment), as reliable as possible, and as automated as possible (ideally it should run as part of continuous integration). Even just having this single test brings a lot of value.
Build out tests for other use-cases one test at a time, again focusing on your most important use-cases at first.
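To make step 1 concrete, here is a rough sketch of a single end-to-end test driving a deployed application over HTTP with the requests library (pip install requests); the base URL and endpoints are hypothetical placeholders for your own deployment:

```python
# A sketch of one key use-case exercised end to end against a
# production-like deployment. All URLs and payloads are assumptions.
import requests

BASE_URL = "http://staging.example.com"


def test_place_order_end_to_end():
    session = requests.Session()

    # Log in through the real frontend endpoint.
    resp = session.post(
        f"{BASE_URL}/api/login",
        json={"user": "test", "password": "test"},
    )
    assert resp.status_code == 200

    # Exercise the key use-case across frontend, backend and database.
    resp = session.post(f"{BASE_URL}/api/orders", json={"item": 42, "qty": 1})
    assert resp.status_code == 201

    # Verify the order actually made it all the way through the stack.
    order_id = resp.json()["id"]
    resp = session.get(f"{BASE_URL}/api/orders/{order_id}")
    assert resp.json()["item"] == 42
```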
This approach will help ensure that your end-to-end tests are of high quality, which is vital for their long-term health and usefulness. Too many times I have seen people try to introduce a comprehensive suite of such tests to an application, but ultimately fail because the tests are fragile and unreliable; people lose faith in them, stop running or maintaining them, and eventually forget they even had the tests in the first place.
Good luck and have fun!

Can BDD be done "after"?

Unit testing is a practice of writing code tests. TDD is the practice of writing them "before". BDD is the practice of writing behavior/spec-driven tests. Can I write BDD "after", or do I always have to do it "before"?
If you write BDD "after", and it's not BDD, then what is it called?
By the definition of Behaviour-Driven Development, you cannot write the behaviour tests after the code; however, that does not mean that doing so isn't useful. You may get more benefit from writing the spec tests first, but they are still useful as regression system tests for your application. So while you're technically not practicing BDD, writing these tests is a good idea. One of the big perks of BDD is that it guides the development of the particular behaviour, so you lose a lot of value by adding the tests later, but they still serve some use.
This is the same as writing unit tests after the code in TDD. It's technically not TDD, but having the tests is obviously still useful.
Behavior-Driven Development (BDD) is a variation of Test-Driven Development (TDD) and just like with TDD you should write your tests first.
Some people call BDD "TDD done right", or the way it was intended. Also, you could say that BDD is a mix of Domain-Driven Design (DDD) and TDD.
BDD after development is not BDD, and it is a case of validation rather than specification.
However as the other guys mentioned, it does not mean that adding in an acceptance test suite after-the-fact has no value. You will be building a suite of regression acceptance tests that validate behaviour, before proceeding with further development (large refactoring jobs or new features being added).
From experience I would say if you are to do this task, it is best that the key developers who wrote the production code stay well away from writing the acceptance tests (hopefully in the form of Gherkin scripts); and those that are writing them go back to the original requirements documentation (if any) and most definitely collaborate with some of the stakeholders in doing so. This will help make sure that the acceptance tests you write are closer to specification.
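As an illustration of such after-the-fact, Gherkin-style acceptance tests, here is a sketch using Python's behave library (an assumption; Cucumber, SpecFlow and similar tools work the same way); the feature text would normally live in its own .feature file, with the steps below implementing it against the existing application:

```python
# A sketch of after-the-fact acceptance tests in the Gherkin style.
# The scenario and step wording are hypothetical examples.
#
# Feature: Account withdrawal (written from the requirements doc,
#          not from the code)
#   Scenario: Withdrawing less than the balance
#     Given an account with a balance of 100
#     When the user withdraws 30
#     Then the account balance is 70
from behave import given, when, then


@given("an account with a balance of {amount:d}")
def step_account(context, amount):
    context.balance = amount


@when("the user withdraws {amount:d}")
def step_withdraw(context, amount):
    context.balance -= amount  # a real step would call the application


@then("the account balance is {amount:d}")
def step_check(context, amount):
    assert context.balance == amount
```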
I like the observation that BDD-After is simply a case of writing validation. I also appreciate the comments that a developer doing BDD-After misses some of the other benefits of BDD-As-You-Go. One point that seems worth adding is that writing a scenario/test before the implementation and then seeing the test pass is also a type of validation that the test itself is sound. Writing a passing test for a feature that already works (BDD-After) may leave a developer wondering whether their test will "fail appropriately" should the feature get broken.

What test methods do you use for developing websites?

There are a lot of testing methods out there, e.g. black-box, grey-box, unit, functional, regression, etc.
Obviously, a project cannot take on all testing methods. So I asked this question to gain an idea of which test methods to use and why I should use them. You can answer in the following format:
Test Method - what you use it on
e.g.
Unit Testing - I use it for ...(blah, blah)
Regression Testing - I use it for ...(blah, blah)
I was asked to engage in TDD and of course I had to research testing methods. But there is a whole plethora of them and I don't know what to use (because they all sound useful).
1. Unit Testing is used by developers to ensure the unit of code they wrote is correct. This is usually white-box testing, along with some level of black-box testing.
2. Regression Testing is functional testing used by testers to ensure that new changes in the system have not broken any existing functionality.
3. Functional Testing is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. Functional testing falls within the scope of black-box testing and, as such, should require no knowledge of the inner design of the code or logic.
These wiki articles on Test-Driven Development and Feature-Driven Development will be of great help to you.
For TDD you need to follow the following process:
1. Document a feature (or use case) that you need to implement or enhance in your application and that does not currently exist.
2. Write a set of functional test cases that can ensure the feature from step 1 works. You may need to write multiple test cases for the feature to cover all the different possible workflows.
3. Write code to implement the feature from step 1.
4. Test this code using the test cases you wrote earlier in step 2. The actual testing can be manual, but I would recommend creating automated tests if possible.
5. If all test cases pass, you are good to go. If not, update the code (go back to step 3) so as to make the test cases pass.
The point of TDD is to ensure that the functional test cases written before you coded pass, and it does not matter how the code was implemented.
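To make the red/green loop concrete, here is a minimal sketch in Python's unittest; the leap-year feature is purely illustrative, with the tests (step 2) written before the implementation (step 3):

```python
# A minimal sketch of the red/green loop from steps 2-5 above.
# The leap-year feature is a hypothetical example.
import unittest


def is_leap_year(year):
    # Step 3: implementation written only after the tests below existed.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)


class LeapYearTest(unittest.TestCase):
    # Step 2: one test per workflow the feature must support.
    def test_divisible_by_four(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_fourth_century_is_leap(self):
        self.assertTrue(is_leap_year(2000))


if __name__ == "__main__":
    unittest.main()
```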
There is no "right" or "wrong" in testing. Testing is an art and what you should choose and how well it works out for you depends a lot from project to project and your experience.
But as a professional Test Expert my suggestion is that you have a healthy mix of automated and manual testing.
(Examples below are in PHP, but you can easily find the equivalent for whatever language/framework you are using.)
AUTOMATED TESTING
Unit Testing
Use PHPUnit to test your classes, functions and interaction between them.
http://phpunit.sourceforge.net/
Automated Functional Testing
If it's possible you should automate a lot of the functional testing. Some frameworks have functional testing built into them; otherwise you have to use a tool for it. If you are developing web sites/applications you might want to look at Selenium.
http://www.webinade.com/web-development/functional-testing-in-php-using-selenium-ide
Continuous Integration
Use CI to make sure all your automated tests run every time someone in your team makes a commit to the project.
http://martinfowler.com/articles/continuousIntegration.html
MANUAL TESTING
As much as I love automated testing, it is, IMHO, not a substitute for manual testing. The main reason is that an automated test can only do what it is told and only verify what it has been told to treat as pass/fail. A human can use their intelligence to find faults and raise questions that appear while testing something else.
Exploratory Testing
ET is a very low-cost and effective way to find defects in a project. It takes advantage of the intelligence of a human being and teaches the testers/developers more about the project than any other testing technique I know of. Doing an ET session aimed at every feature deployed in the test environment is not only an effective way to find problems fast, but also a good way to learn, and fun!
http://www.satisfice.com/articles/et-article.pdf
This answer is (almost) identical to one that I gave to another question. Check out that question since it had some other good answers that might help you.
How can we decide which testing method can be used?
I usually do the following things:
Page consistency in the case of multi-page web sites.
Testing the database connections.
Testing the functionalities that can be affected by the change I just made.
I test functions with sample input to make sure they work fine (especially those that are algorithm-like).
In some cases I implement features very simply, hard-coding most of the settings, then implement the settings later, testing after implementing every setting.
Most of these apply to applications, too.
Before getting to the answer, I would like to clarify the testing concepts behind the various methods.
There are six main testing types, which cover almost all testing methods:
Black Box Testing
White Box Testing
Grey Box Testing
Functional Testing
Integration Testing
Usability Testing
Almost all testing methods lie under these types, and you can use some testing methods under multiple types; for example, you can use smoke testing in a black-box or white-box approach depending on the resources available to test.
So for testing a web site completely you need to use at least the following testing methods, depending on the resources available. These are the minimum methods that should be used to test a web site, but there may be more important methods depending on the nature of the site.
Requirement Testing
Smoke Testing
System Testing
Integration Testing
Regression Testing
Security Testing
Performance & Load Testing
Deployment Testing
You should use at least all of the above (8) testing methods to test a web site, no matter which testing type you are focusing on. You can automate your tests in some areas and do others manually; it all depends upon resource availability.
There is no hard and fast rule for following any testing type or method. As you know, "testing is an art", and art doesn't have rules or boundaries. It's totally up to you what you use to test, and how...
Hope you got the answer to your question.
Selenium is very good for testing websites.
The answer depends on the Web framework used (if any). Django for example has built-in testing functions.
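For example, here is a minimal sketch of Django's built-in test support, where the bundled test client drives a view without a real browser; the URL and expected text are hypothetical placeholders:

```python
# A sketch of Django's built-in testing support: the test client
# exercises a view end to end. The URL and page text are assumptions.
from django.test import TestCase


class HomePageTest(TestCase):
    def test_home_page_renders(self):
        response = self.client.get("/")
        self.assertEqual(response.status_code, 200)
        self.assertContains(response, "Welcome")
```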
For PHP (or functional web testing), SimpleTest is pretty good and, well... simple. It supports unit testing (PHP only) and web testing. Tests can run in the IDE (Eclipse), or in the browser (meaning on your server).
The other answers posted so far focus on unit/functional/performance/etc. testing, and they are all reasonable.
However, one of the key questions you should ask is, "how effective is my testing?".
This is often answered with test coverage tools, which determine which parts of your application actually get exercised by some set of tests. The ideal test coverage tool lets you test your application by any method you can imagine (including all the standard answers above) and will then report what part and what percentage of your code was exercised. Most importantly, it will tell you what code you did not exercise. You can then inspect that code and decide if more testing is warranted, or if you don't care. If the untested code has to do with "disk full error handling" and you believe that 1TB disks are common, you might decide to ignore that. If the untested code is the input validation logic leading to SQL queries, you might decide that you must test that logic to ensure that no SQL injection attacks can occur.
What test coverage tools let you do is make a rational decision that you have tested adequately, using data about which parts of your code have been exercised. So regardless of how you test, best practice indicates you should also do test coverage analysis.
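As one concrete option, here is a sketch using the coverage.py package (pip install coverage); most people drive it from the command line (`coverage run -m pytest`, then `coverage report`), but the same steps can be scripted through its API:

```python
# A sketch of scripted coverage measurement with coverage.py.
import coverage

cov = coverage.Coverage()
cov.start()

# ... run your test suite here, e.g. via unittest or pytest ...

cov.stop()
cov.save()
cov.report(show_missing=True)  # lists the lines that were never exercised
```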
Test coverage tools can be obtained from a variety of sources. SD provides a family of test coverage tools that handle C, C++, Java, C#, PHP and COBOL, all of which are used to support web site testing in various ways.

Automatic testing for web based projects

Recently I've come up against the question of whether it is worth it at all to spend development time generating automated unit tests for web-based projects. It seems useless at some point, because such projects are oriented around interactions with users/clients, so you cannot anticipate the whole possible set of user actions and thus cannot check the correctness of the content shown. Even regression testing can hardly be done. So I'm very eager to know the opinion of other experienced developers.
Selenium has a good web testing framework:
http://seleniumhq.org/
Telerik are also in the process of developing one for web app testing.
http://www.telerik.com/products/web-ui-test-studio.aspx
"You cannot anticipate the whole possible set of user actions and thus cannot check the correctness of the content shown."
You can't anticipate all the possible data your code is going to be handed, or all the possible race conditions if it's threaded, and yet you still bother unit testing. Why? Because you can narrow it down a hell of a lot. You can anticipate the sorts of pathological things that will happen. You just have to think about it a bit and get some experience.
User interaction is no different. There are certain things users are going to try and do, pathological or not, and you can anticipate them. Users are just inputting particularly imaginative data. You'll find programmers tend to miss the same sorts of conditions over and over again. I keep a checklist. For example: pump Unicode into everything; put the start date after the end date; enter gibberish data; put tags in everything; leave off the trailing newline; try to enter the same data twice; submit a form, go back and submit it again; take a text file, call it foo.jpg and try to upload it as a picture. You can even write a program to flip switches and push buttons at random, a bad monkey, that'll find all sorts of fun bugs.
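One way to institutionalize such a checklist is a parametrized test that pumps the same pathological inputs into every entry point; here is a rough sketch with pytest, where save_comment is a hypothetical stand-in for your own code:

```python
# A sketch of turning the pathological-input checklist into a
# parametrized test. save_comment is a hypothetical SUT.
import pytest

PATHOLOGICAL_INPUTS = [
    "caf\u00e9 \u2603",               # Unicode in everything
    "",                               # empty input
    "<script>alert(1)</script>",      # tags in everything
    "x" * 100_000,                    # absurdly long input
    "no trailing newline",
    "'; DROP TABLE users; --",        # SQL-ish gibberish
]


def save_comment(text):
    # Hypothetical SUT: must accept or cleanly reject any string.
    return {"body": text}


@pytest.mark.parametrize("text", PATHOLOGICAL_INPUTS)
def test_comment_survives_pathological_input(text):
    assert save_comment(text)["body"] == text
```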
It's often as simple as sitting someone down who's unfamiliar with the software and watching them use it. Fight the urge to correct them; just watch them flounder. It's very educational. Steve Krug refers to this as "Advanced Common Sense" and has an excellent book called "Don't Make Me Think" which covers cheap, simple user-interaction testing. I highly recommend it. It's a very short and eye-opening read.
Finally, the clients themselves, if their expectations are properly prepared, can be a fantastic test suite. Be sure they understand it's a work in progress, that it will have bugs, that they're helping to make their product better, and that it absolutely should not be used for production data, and let them tinker with the pre-release versions of your product. They'll do all sorts of things you never thought of! They'll be the best and most realistic testing you ever had, FOR FREE! Give them a very simple way to report bugs, preferably just a one-button box right in the application which automatically submits their environment and history; the feedback box on Hiveminder is an excellent example. Respond to their bugs quickly and politely (even if it's just "thanks for the info") and you'll find they'll be delighted that you're so responsive to their needs!
Yes, it is. I just ran into an issue this week with a web site I am working on. I just recently switched-out the data access layer and set up unit tests for my controllers and repositories, but not the UI interactions.
I got bit by a pretty obvious bug that would have been easily caught if I had integration tests. Only through integration tests and UI functionality tests do you find issues with the way different tiers of the application interact with one another.
It really depends on the structure and architecture of your web application. If it contains an application logic layer, then that layer should be easy to unit test with automating tools such as Visual Studio. Also, using a framework that has been designed to enable unit testing, such as ASP.NET MVC, helps a lot.
If you're writing a lot of Javascript, there have been a lot of JS testing frameworks that have come around the block recently for unit testing your Javascript.
Other than that, testing the web tier using something like Canoo, HtmlUnit, Selenium, etc. is more a functional or integration test than a unit test. These can be hard to maintain if you have the UI change a lot, but they can really come in handy. Recording Selenium tests is easy and something you could probably get other people (testers) to help you create and maintain. Just know that there is a cost associated with maintaining tests, and it needs to be balanced out.
There are other types of testing that are great for the web tier - fuzz testing especially, but a lot of the good options are commercial tools. One that is open source and plugs into Rails is called Tarantula. Having something like that at the web tier is a nice to have run in a continuous integration process and doesn't require much in the form of maintenance.
Unit tests make sense in a TDD process. They do not have much value if you don't do test-first development. However, the acceptance tests are a big thing for the quality of the software. I'd say that acceptance tests are the holy grail of development. Acceptance tests show whether the application satisfies the requirements. How do I know when to stop developing a feature? Only when all my acceptance tests pass. Automation of acceptance testing is a big thing, because I do not have to do it all manually each time I make changes to the application. After months of development there can be hundreds of tests, and it becomes unfeasible (sometimes impossible) to run all the tests manually. Then how do I know if my application still works?
Automation of acceptance tests can be implemented with xUnit test frameworks, which creates some confusion here. If I create an acceptance test using PHPUnit or HttpUnit, is it a unit test? My answer is no. It does not matter what tool I use to create and run the test. An acceptance test is one that shows whether the feature is working in accordance with the requirements. A unit test shows whether a class (or function) satisfies the developer's implementation idea. Unit tests have no value for the client (user). Acceptance tests have a lot of value to the client (and thus to the developer; remember Customer Affinity).
So I strongly recommend creating automated acceptance tests for the web application.
The good frameworks for the acceptance test are:
Sahi (sahi.co.in)
Selenium
SimpleTest (it's a unit-test framework for PHP, but it includes a browser object that can be used for acceptance testing)
However
You have mentioned that a web site is all about user interaction, and thus test automation will not solve the whole problem of usability. For example: the testing framework shows that all tests pass, yet the user cannot see the form or link or other page element due to an accidental style="display:none" on the div. The automated tests pass because the div is present in the document and the test framework can "see" it. But the user cannot, and a manual test would fail.
Thus, all web applications need manual testing. Automated tests can reduce the test workload drastically (80%), but manual tests are just as significant for the quality of the resulting software.
As for unit testing and TDD: they improve code quality. They are beneficial to the developers and to the future of the project (i.e. for projects longer than a couple of months). However, TDD requires skill. If you have the skill, use it. If you don't, consider gaining the skill, but mind the time it will take: it usually takes about 3-6 months to start creating good unit tests and code. If your project will last more than a year, I recommend studying TDD and investing time in a proper development environment.
I've created a web test solution (Docker + Cucumber); it's very basic and simple, so it's easy to understand and modify/improve. It lives in the web directory:
my solution: https://github.com/gyulaweber/hosting_tests

Testing: unit vs. integration vs. others, what is the need for separation? [closed]

To the question Am I unit testing or integration testing? I have answered, a bit provocatively: do your tests and let other people spend time on taxonomy.
For me the distinction between the various levels of testing is technically pointless: often the same tools are used, the same skills are needed, and the same objective is pursued, namely removing software faults. At the same time, I can understand that traditional workflows, which most developers use, need this distinction. I just don't feel at ease with traditional workflows.
So, my question aims at better understanding what appears to me to be a controversy, and at gathering various points of view about whether or not this separation between the various levels of testing is relevant.
Is my opinion wrong? Do other workflows exist which don't emphasize this separation (maybe agile methods)? What is your experience on the subject?
Precision: I am perfectly aware of the definitions (for those who aren't, see this question). I think I don't need a lesson about software testing. But feel free to provide some background if your answer requires it.
Performance is typically the reason I segregate "unit" tests from "functional" tests.
Groups of unit tests ought to execute as fast as possible and be able to be run after every compilation.
Groups of functional tests might take a few minutes to execute and get executed prior to checkin, maybe every day or every other day depending on the feature being implemented.
If all of the tests were grouped together, I'd never run any tests until just before checkin which would slow down my overall pace of development.
I'd have to agree with #Alex B in that you need to differentiate between unit tests and integration tests when writing your tests to make your unit tests run as fast as possible and not have any more dependencies than required to test the code under test. You want unit tests to be run very frequently and the more "integration"-like they are the less they will be run.
In order to make this easier, unit tests usually (or ought to) involve mocking or faking external dependencies. Integration tests intentionally leave these dependencies in, because that is the point of the integration test. Do you need to mock/fake every external dependency? I'd say not necessarily, if the cost of mocking/faking is high and the value returned is low, that is, if using the dependency does not add significantly to the time or complexity of the test(s).
Overall, though, I'd say it's best to be pragmatic rather than dogmatic about it, but recognize the differences and avoid intermixing if your integration tests make it too expensive to run your tests frequently.
Definitions from my world:
Unit test - test the obvious paths of the code and that it delivers the expected results.
Function test - thoroughly examine the definitions of the software and test every path defined, through all allowable ranges. A good time to write regression tests.
System test - test the software in its system environment, relative to itself. Spawn all the processes you can, explore every internal combination, run it a million times overnight, see what falls out.
Integration test - run it on a typical system setup and see if other software causes a conflict with the tested one.
Of course your opinion is wrong, at least regarding complex products.
The main point of automated testing is not to find bugs, but to point out the function or module where the problem is.
If engineers constantly have to spend brain resources troubleshooting test failures, then something is wrong. Of course failures in integration testing may be tricky to deal with, but that shouldn't happen often if all modules have good unit-test coverage.
And if you do get an integration-testing failure, in an ideal world it should be instant to add the corresponding (missing) unit tests for the involved modules (or parts of the system), which will confirm where exactly the problem is.
But here is where the atomic bomb drops: not all systems can be properly covered with unit tests. If the architecture suffers from excessive coupling or complex dependencies, it is almost impossible to properly cover the functionality with unit tests, and integration testing is indeed the only way to go (besides deep refactoring). In such systems there really is no big difference between unit and integration tests.