Is software testing done in the following order?
Unit testing
Integration testing
Functional testing
I want to confirm if Functional testing is done after Integration testing or not.
Thanks.
That is a logical ordering, yes. It is often followed by User Acceptance Testing and then some form of public alpha/beta testing before release, if appropriate.
In a TDD coding environment, the order in which these tests are made to pass generally follows your order; however, they are often WRITTEN in the reverse order.
When a team gets a set of requirements, one of the first things they should do is turn those requirements into one or more automated acceptance tests, which prove that the system meets all functional requirements defined. When this test passes, you're done (IF you wrote it properly). The test, when first written, obviously shouldn't pass. It may not even compile, if it references new object types that you haven't defined yet.
Once the test is written, the team can usually see what is required to make it pass at a high level, and break up development along these lines. Along the way, integration tests (which test the interaction of objects with each other) and unit tests (which test small, atomic pieces of functionality in near-complete isolation from other code) are written. Using refactoring tools like ReSharper, the code of these tests can be used to create the objects and even the logic of the functionality being tested. If you're testing that the output of A+B is C, then assert that A+B == C, then extract a method from that logic in the test fixture, then extract a class containing that method. You now have an object with a method you can call that produces the right answer.
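To make that concrete, here is a minimal sketch in Python (the Adder name and the numbers are invented for illustration; the answer above has C# and ReSharper in mind, but the progression is the same):

import unittest

class Adder:
    # Step 2/3: the logic extracted from the test into a method, then into a class.
    def add(self, a, b):
        return a + b

class AdditionTest(unittest.TestCase):
    def test_sum_inline(self):
        # Step 1: assert the raw expression directly in the test fixture.
        self.assertEqual(5, 2 + 3)

    def test_sum_via_extracted_class(self):
        # Step 3: an object with a method we can call that produces the right answer.
        self.assertEqual(5, Adder().add(2, 3))

if __name__ == "__main__":
    unittest.main()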
Along the way, you have also tested the requirements: if the requirements assert that an answer, given A and B, must be the logical equivalent of 1+2==5, then the requirements have an inconsistency indicating a need for further clarification (e.g. somebody forgot to mention that D=2 should be added to B before A+B == C) or a technical impossibility (e.g. the calculation requires there to be 25 hours in a day or 9 bits in a byte). It may be impossible (and is generally considered infeasible by Agile methodologies) to guarantee without a doubt that you have removed all of these inconsistencies from the requirements before any development begins.
I have several test cases that I want to optimize by a similarity-based test case selection method using the Jaccard matrix. The first step is to choose a pair with the highest similarity index and then keep one as a candidate and remove the other one.
My question is: based on which strategy do you choose which of the two most similar test cases to remove? Size? Test coverage? Or something else? For example, here TC1 and TC10 have the highest similarity. Which one would you remove, and why?
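To make the first step concrete, here is a small sketch in Python with made-up test-case contents (each test case represented as the set of items or steps it touches):

from itertools import combinations

# Made-up representation: each test case as the set of steps/items it touches.
test_cases = {
    "TC1":  {"login", "add_item", "checkout"},
    "TC10": {"login", "add_item", "checkout", "apply_coupon"},
    "TC7":  {"login", "view_profile"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

# Step 1: pick the pair with the highest Jaccard similarity.
best_pair = max(combinations(test_cases, 2),
                key=lambda pair: jaccard(test_cases[pair[0]], test_cases[pair[1]]))
print(best_pair)  # ('TC1', 'TC10') here; one of the two becomes the removal candidate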
It depends on why you're doing this, and a static code metric can only give you suggestions.
If you're trying to make the tests more maintainable, look for repeated test code and extract it into shared code. Two big examples are test setup and expectations.
For example, if you tend to do the same setup over and over again you can extract it into fixtures or test factories. You can share the setup using something like setup/teardown methods and shared contexts.
If you find yourself writing the same sort of code over and over again to test a condition, extract that into something shared, like a test method, a matcher, or a shared example. If possible, replace custom test code with assertions from an existing library.
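As a sketch of that idea (Python/pytest here, with invented Order and Customer classes), the repeated setup becomes a fixture and the repeated expectation becomes a small helper:

import pytest

class Customer:
    def __init__(self, name):
        self.name = name

class Order:
    def __init__(self, customer):
        self.customer = customer
        self.items = []
    def add(self, item):
        self.items.append(item)

@pytest.fixture
def order():
    # Shared setup that used to be copy-pasted into every test.
    return Order(Customer("alice"))

def assert_has_items(order, *items):
    # Shared expectation code instead of repeating the same asserts.
    assert list(items) == order.items

def test_single_item(order):
    order.add("book")
    assert_has_items(order, "book")

def test_two_items(order):
    order.add("book")
    order.add("pen")
    assert_has_items(order, "book", "pen")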
Another metric is to find tests which are testing too much. If unit A calls units B and C, the test might do the setup for and testing of B and C. This can be a lot of extra work, makes the test more complex, and makes the units interdependent.
Since all unit A cares about is whether its particular calls to B and C work, consider replacing the calls to B and C with mocks. This can greatly simplify test setup, improve test performance, and reduce the scope of what you're testing.
However, be careful. If B or C changes, A might break, but the unit test for A won't catch that. You need to add integration tests for A, B, and C together. Fortunately, this integration test can be very simple. It can trust that A, B, and C work individually (they've been unit tested); it only has to test that they work together.
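A sketch of that split in Python, with invented classes A, B and C (unittest.mock stands in for whatever mocking library you use):

from unittest.mock import Mock

class B:
    def price(self, item):
        return {"book": 10}[item]

class C:
    def tax(self, amount):
        return amount * 0.2

class A:
    def __init__(self, b, c):
        self.b = b
        self.c = c
    def total(self, item):
        price = self.b.price(item)
        return price + self.c.tax(price)

def test_a_with_mocks():
    # Unit test of A: B and C are mocked, so only A's own logic and its
    # particular calls to B and C are being tested.
    b, c = Mock(), Mock()
    b.price.return_value = 10
    c.tax.return_value = 2
    assert A(b, c).total("book") == 12
    b.price.assert_called_once_with("book")
    c.tax.assert_called_once_with(10)

def test_a_b_and_c_together():
    # Thin integration test: trusts the unit tests and only checks the wiring.
    assert A(B(), C()).total("book") == 12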
If you're doing it to make the tests faster, first profile the tests to determine why they're slow.
Consider if parallelization would help. Consider if the code itself is too slow. Consider if the code is too interdependent and has to do too much work just to test one thing. Consider if you really have to write everything to a slow resource such as a disk or database, or if doing it in memory sometimes would be ok.
It's deceptive to consider tests redundant because they have similar code. Similar tests might test very, very different branches of the code. For example, something as simple as passing in 1 vs 1.1 to the same function might call totally different classes, one for integers and one for floats.
Instead, find redundancy by looking for similarity in what the tests cover. There's no magic percentage of similarity that determines whether tests are redundant; you'll have to judge that for yourself.
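For example, a sketch with invented coverage data (in practice a coverage tool such as coverage.py would supply the sets): the two tests from the integer/float example above might look almost identical as code, yet overlap only 50% in what they cover:

covered_lines = {
    "test_int_path":   {("calc.py", 10), ("calc.py", 11), ("int_ops.py", 4)},
    "test_float_path": {("calc.py", 10), ("calc.py", 11), ("float_ops.py", 7)},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

# Only 0.5 overlap in covered lines: similar-looking tests, probably not redundant.
print(jaccard(covered_lines["test_int_path"], covered_lines["test_float_path"]))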
That's because a line of code being covered doesn't mean it is tested. For example...
def test_method
  call_a                    # executed, so it counts as covered...
  assert_equal(42, call_b)  # ...but only call_b's result is actually checked
end
Here call_a is covered, but it is not tested. Test coverage only tells you what HAS NOT been tested; it CANNOT tell you what HAS been tested.
Finally, test coverage redundancy is good. Unit tests, integration tests, acceptance tests, and regression tests can all cover the same code, but from different points of view.
All static analysis can offer you is candidates for redundancy. Remove redundant tests only if they're truly redundant, and only with a purpose. Tests which simply overlap might serve as regression tests. Many a time I've been saved when the unit tests passed but some incidental integration test failed. Overlap is good.
If you have nothing, you cannot write a test because there is nothing to test. This seems pretty obvious to me but never seems to be addressed by proponents of TDD.
In order to write a test, you have to first decide what the method or function looks like that you're going to test. You have to know what parameters to pass to it and what you expect to get back. That is what comes first, not the test.
Tests can never come first. The thing that comes first is the design which specifies what classes and methods are going to exist.
It's true that in order to write a test, the test writer must form some conception of how the test code can interact with the System Under Test. In that sense, conceptual design 'comes first'.
Test-driven development (TDD), however, is valuable because it's not (only) a quality assurance methodology. It's first and foremost a fast feedback loop.
While you may have an initial design in mind, once you start to write a test, you may discover that this design doesn't work (or is awkward to use). This often happens, and should cause you to immediately adjust course.
The red-green-refactor cycle suggests a model for thinking about TDD. Each such cycle may take only a minute or two.
Thus, you may start with an initial design in mind, but then adjust it (or completely rethink it) every other minute.
never seems to be addressed by proponents of TDD
I disagree. Plenty of introductions to TDD discuss this. Two good books that discuss this (and much more) are Kent Beck's Test-Driven Development by Example and Steve Freeman and Nat Pryce's Growing Object-Oriented Software, Guided by Tests.
It's the other way round.
If you write a test that calls a function which does not exist, your test suite fails and you get an error forcing you to define that function, just like writing any other test forces you to write the implementation.
Your tests don't need to run to be good tests. But this kind of test is not meant to stay in your test suite. They are sometimes referred to as "staircase tests": you need to write them to get going but they are only instrumental.
What happens generally is that as soon as this test passes, you make it fail again by being more specific. Technically, the test you end up with is the same one you would have written after the fact, and it didn't take more time to write; but during this process you were able to run the test suite one or more times, so you're spending less time in an invalid state, so to speak.
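A sketch of that progression in Python (the parse_price function is invented for the example):

import unittest

# Step 2: the minimal definition that the first failing test forced into existence.
def parse_price(text):
    return float(text.strip("$"))

class ParsePriceTest(unittest.TestCase):
    def test_parses_dollar_amount(self):
        # Step 1 (the "staircase" version) simply called parse_price("$5") and
        # failed with a NameError until the function was defined.
        # Step 3: once that passed, the test was made more specific.
        self.assertEqual(5.0, parse_price("$5"))

if __name__ == "__main__":
    unittest.main()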
I would like to add that there is nothing untrue in your question, but your conclusion doesn't follow from the premise: it is true that what comes first is the specification, but there is nothing inconsistent about formalising this specification in a test before the code is written. The spec, and the tests, force you to write the code. TDD is an incremental way of formalising the spec that ensures the spec always comes first.
To write a test, you have to first decide what the method or function looks like that you're going to test. You have to know what parameters to pass to it and what you expect to get back. THAT is what comes first, NOT the test. Tests can NEVER come first. The thing that comes first is the design which specifies what classes and methods are going to exist.
Not quite right (not entirely wrong either - It's Complicated[tm])
If you look at the first example in Test Driven Development by Example, you'll see that Beck doesn't begin with classes and methods. He doesn't even begin with a test.
The very first thing that he creates is a "to-do" list, where each of the entries in the todo list is a representation of a behavior (my terminology, not his). So we see things like
$5 + 10 CHF = $10 if rate is 2:1
These days, you'd be more likely to see this idea expressed as a Hoare triple (Given/When/Then, Arrange/Act/Assert, etc.). But what we have here is a reminder to the programmer that we want an automated check that measures the result of adding two different currencies together, and confirms that the result matches some specification.
In his exercise, his to-do list includes a "simpler" test, which is the one he attempts first:
$5 * 2 = $10
That same to-do list also includes some other concerns he has about the design, NOT expressed in test form. Also, the list grows as he works through the problem.
In this sense, the test absolutely comes first. We write the test in a language to be consumed by humans. Translating the test into a language understood by the machine comes later.
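For instance, translating the "$5 * 2 = $10" entry for the machine might look something like this (a Python paraphrase, not Beck's original Java, with a Dollar class that only does what this one test demands):

import unittest

class Dollar:
    def __init__(self, amount):
        self.amount = amount
    def times(self, multiplier):
        return Dollar(self.amount * multiplier)

class MoneyTest(unittest.TestCase):
    def test_multiplication(self):
        five = Dollar(5)
        product = five.times(2)
        self.assertEqual(10, product.amount)

if __name__ == "__main__":
    unittest.main()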
In the second step, where we describe the test to the machine, things get messier. It is absolutely the case that, as we are designing the test, we are also designing the communication protocol that allows the test to measure what the production code does. So there's a certain amount of communication design that is happening in parallel with the "test" design.
But even here, the test is not specifying all of the classes that are going to exist, it's only specifying what it needs to perform its measurement. We describe a facade, but we aren't specifying what lies beyond that facade.
It can happen, as we design more of the system, that the facade we specify is used only by tests, as a way of communicating with a different underlying design of production code.
(Note: I say classes here for consistency with the question and with early literature, taken primarily from examples in Smalltalk or Java. Feel free to substitute "functions" for "classes" if that makes you more comfortable.)
Now, the most common case is that the facade is the production code; we don't typically add elements to the design until we have a non-speculative motivation for them.
"Unit testing" puts some strain on these ideas - how can you possibly write a unit test without first designing your unit boundaries?
The real answer is an unfortunate one -- Kent Beck didn't write unit tests. He wrote "programmer tests" (a term that got retconned in later) and called them unit tests.
Using the testing language of the 1990s (which is when all this mess started), a more appropriate term is probably "composite tests".
You've also got "the London School", which was trying to figure out how to TDD a particular design style; writing a test for that style requires a more complicated testing facade "up front" (roles and interfaces and stable substitute implementations and so on).
It can also be worth keeping in mind the setting.
(Disclaimer: this isn't something I witnessed first hand - think "based on a true story" rather than "facts")
TDD (and its parent idea "test first" programming in XP) are pushing back against "up front design" of the sort where you decide what the class hierarchy and relationships should be, and document them, before you actually sit down to write the code.
The core argument being that the design process needs shorter feedback loops; that we don't get deeply committed to a particular design until we've acquired a lot of evidence that it is going to work out OK.
All that said, yes it is absolutely the case that TDD, as a technique, works so much better in the hands of someone who is already good at software design. See Michael Feathers on the Look, Ma, no hands! era.
There is no magic.
There is a class A doing some calculations and a class B, which calls methods from class A.
Unit tests were fine for both classes, but when I used the classes together, I discovered that they did not work. The issue was that the types of the parameters were incorrect. As this was part of a school assignment, I was supposed to say what kind of test this is. I think it is an integration test; is that correct?
I think so, because integration means integrating multiple modules into one system, and I am integrating two classes together here.
Strictly speaking, a unit test would be one module (class) tested in isolation, with any external dependencies stubbed out.
But in reality, so-called unit tests will often break this rule. For example in Rails, unit tests often hit the database (but this can be avoided).
In the situation you describe here, 'integration' is probably the best term to use.
Note that the meaning of these terms can vary a lot depending on the context. I would call what Nathan Hughes describes a system integration test to distinguish it from more granular tests.
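A sketch of the distinction with invented stand-ins for your two classes: the strict unit test for B stubs A out, while the test that would have caught your parameter-type mismatch wires the real classes together:

class A:
    def compute(self, values):
        # Expects a list of numbers.
        return sum(values) / len(values)

class B:
    def __init__(self, a):
        self.a = a
    def report(self, values):
        return "average=" + str(self.a.compute(values))

class StubA:
    def compute(self, values):
        return 42  # canned answer, no real logic

def test_b_in_isolation():
    # Strict unit test: A is stubbed out.
    assert B(StubA()).report([1, 2, 3]) == "average=42"

def test_b_with_real_a():
    # The test that fails if B starts passing the wrong kind of parameters to A.
    assert B(A()).report([2, 4]) == "average=3.0"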
Typically "integration test" means a test that touches some other system, like a database or file system or a web service or whatever. Since in your case it's two classes in the same program, I would categorize it as a unit test.
There's an expectation that a unit test have a small scope, but there isn't any hard and fast rule that it be limited to one method or one class.
Unit tests check that the code within the program is internally consistent, which is what needs doing here.
I've always worked alone, and my method of testing is usually compiling very often, making sure the changes I made work well, and fixing them if they don't. However, I'm starting to feel that that is not enough, and I'm curious about the standard kinds of tests there are.
Can someone please tell me about the basic tests, a simple example of each, and why it is used/what it tests?
Thanks.
Different people have slightly different ideas about what constitutes what kind of test, but here are a few ideas of what I happen to think each term means. Note that this is heavily biased towards server-side coding, as that's what I tend to do :)
Unit test
A unit test should only test one logical unit of code - typically one class for the whole test case, and a small number of methods within each test. Unit tests are (ideally) small and cheap to run. Interactions with dependencies are usually isolated with a test double such as a mock, fake or stub.
Integration test
An integration test will test how different components work together. External services (ones not part of the project scope) may still be faked out to give more control, but all the components within the project itself should be the real thing. An integration test may test the whole system or some subset.
System test
A system test is like an integration test but with real external services as well. If this is automated, typically the system is set up into a known state, and then the test client runs independently, making requests (or whatever) like a real client would, and observing the effects. The external services may be production ones, or ones set up in just a test environment.
Probing test
This is like a system test, but using the production services for everything. These run periodically to keep track of the health of your system.
Acceptance test
This is probably the least well-defined term - at least in my mind; it can vary significantly. It will typically be fairly high level, like a system test or an integration test. Acceptance tests may be specified by an external entity (a standard specification or a customer).
Black box or white box?
Tests can also be "black box" tests, which only ever touch the public API, or "white box" tests which take advantage of some extra knowledge to make testing easier. For example, in a white box test you may know that a particular internal method is used by all the public API methods, but is easier to test. You can test lots of corner cases by calling that method directly, and then do fewer tests with the public API. Of course, if you're designing the public API you should probably design it to be easily testable to start with - but it doesn't always work out that way. Often it's nice to be able to test one small aspect in isolation of the rest of the class.
On the other hand, black box testing is generally less brittle than white box testing: by definition, if you're only testing what the API guarantees in its contracts, then the implementation can change as much as it wants without the tests changing. White box tests, on the other hand, are sensitive to implementation changes: if the internal method changes subtly - or gains an extra parameter, for example - then you'll need to change the tests to reflect that.
It all boils down to balance, in the end - the higher the level of the test, the more likely it is to be black box. Unit tests, on the other hand, may well include an element of white box testing... at least in my experience. There are plenty of people who would refuse to use white box testing at all, only ever testing the public API. That feels more dogmatic than pragmatic to me, but I can see the benefits too.
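As a sketch of that trade-off (with an invented PhoneBook class): one internal helper carries most of the corner cases, so a white box test can hit it directly while a black box test sticks to the public API:

class PhoneBook:
    def __init__(self):
        self._numbers = set()

    def add(self, number):
        self._numbers.add(self._normalize(number))

    def contains(self, number):
        return self._normalize(number) in self._numbers

    @staticmethod
    def _normalize(number):
        # Internal helper used by every public method.
        return "".join(ch for ch in number if ch.isdigit())

def test_normalize_strips_formatting():
    # White box: corner cases exercised directly against the internal method.
    assert PhoneBook._normalize("(01) 234-567") == "01234567"

def test_lookup_ignores_formatting():
    # Black box: fewer tests, through the public API only.
    book = PhoneBook()
    book.add("01234567")
    assert book.contains("(01) 234-567")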
Starting out
Now, as for where you should go next - unit testing is probably the best thing to start with. You may choose to write the tests before you've designed your class (test-driven development) or at roughly the same time, or even months afterwards (not ideal, but there's a lot of code which doesn't have tests but should). You'll find that some of your code is more amenable to testing than others... the two crucial concepts which make testing feasible (IMO) are dependency injection (coding to interfaces and providing dependencies to your class rather than letting them instantiate those dependencies themselves) and test doubles (e.g. mocking frameworks which let you test interaction, or fake implementations which do everything a simple way in memory).
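A sketch of those two concepts together in Python (GreetingService and the store are invented names): the dependency is injected rather than constructed inside the class, so the test can hand it a simple in-memory fake:

class InMemoryUserStore:
    # Fake implementation: does everything a simple way, in memory.
    def __init__(self):
        self._users = {}
    def save(self, user_id, name):
        self._users[user_id] = name
    def load(self, user_id):
        return self._users[user_id]

class GreetingService:
    def __init__(self, store):
        # Dependency injection: the store is provided, not instantiated here.
        self.store = store
    def greet(self, user_id):
        return "Hello, " + self.store.load(user_id) + "!"

def test_greeting_uses_stored_name():
    store = InMemoryUserStore()
    store.save(1, "Ada")
    assert GreetingService(store).greet(1) == "Hello, Ada!"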
I would suggest reading at least one book about this, since the domain is quite large and books tend to synthesize such concepts better.
E.g. a very good basis might be: Software Testing: Testing Across the Entire Software Development Life Cycle (2007).
I think such a book will explain everything better than some out-of-context examples we could post here.
I would like to add on to Jon Skeet's answer.
Based on white box testing (or structural testing) and black box testing (or functional testing), the following are other testing techniques under each respective category:
STRUCTURAL TESTING Techniques
Stress Testing
This is used to test bulk volumes of data on the system, more than what the system normally takes. If a system can stand these volumes, it can surely handle normal volumes well.
E.g.
Maybe you can take system overflow conditions: trying to withdraw more than is available in your bank balance shouldn't work, while withdrawing up to the maximum threshold should work.
Used When
This is mainly used when you're unsure about the volumes your system can handle.
Execution Testing
Done in order to check how proficient a system is.
E.g.
To calculate turnaround time for transactions.
Used when:
Early in the development process, to see whether performance criteria are met.
Recovery Testing
To see if a system can recover to original form after a failure.
E.g.
A very common example in everyday life is the System Restore feature in the Windows OS.
It keeps restore points used for recovery, as one would well know.
Used when:
When an application that is critical to the user at that point in time has stopped working and needs to continue working, so the user performs a recovery.
Other types of testing which you could find use of include:-
Operations Testing
Compliance Testing
Security Testing
FUNCTIONAL TESTING Techniques include:
Requirements Testing
Regression Testing
Error-Handling Testing
Manual-Support Testing
Intersystem testing
Control Testing
Parallel Testing
There is a very good book titled "Effective Methods for Software Testing" by William Perry of the Quality Assurance Institute (QAI), which I would suggest is a must-read if you want to go in depth with respect to software testing.
More on the above mentioned testing types would surely be available in this book.
There are also two other very broad categories of testing, namely:
Manual Testing: This is done for user interfaces.
Automated Testing: Testing which basically involves white box testing, or testing done through software testing tools like LoadRunner, QTP, etc.
Lastly I would like to mention a particular type of testing called
Exhaustive Testing
Here you try to test every possible condition, hence the name. This is, as one would note, pretty much infeasible, as the number of test conditions could be infinite.
Firstly, there are various tests one can perform. The question is how one organizes them. Testing is a vast and enjoyable process.
Start testing with:
1. Smoke testing. Once it passes, go ahead with functionality testing. This is the backbone of testing: if the functionality works fine, then 80% of the testing effort has paid off.
2. Now go on to user interface testing, as at times the user interface attracts the client even more than the functionality; it is the look and feel that a client is drawn to.
3. Now it's time to have a look at cosmetic bugs. Generally these bugs are ignored because of time constraints, but they can play a major role depending on the page where they are found. A spelling mistake turns out to be major when it's found on the splash screen, your landing page, or in the app name itself, so these cannot be overlooked either.
4. Do conduct compatibility testing, i.e. testing on various browsers and browser versions, and maybe on different devices and operating systems for responsive applications.
Happy testing :)
I read the Wikipedia article on scenario testing, but I am sad to say it is very short. I am left wondering: are scenario tests a collection of sequential unit tests? Or, perhaps, like a single multi-step unit test? Do many frameworks support scenario tests, or are they covered by unit testing?
If they have nothing to do with automation, what are they?
I don't think there's any fixed relationship between the number and distribution of tests and scenario tests.
I think the most common code-representation of a scenario is a specific set of business data required to support a specific story (scenario). This is often provided in the form of database data, fake stub data or a combination of both.
The idea is that this dataset has known and well-defined characteristics that will provide well defined results all across a given business process.
For a web application, I could have a single web test (or several, for variations) that clicks through the full scenario. In other cases the scenario is used at a lower level, possibly testing a part of the scenario in a functional test or a unit test. In that case I normally don't group the tests by scenario, but choose the functional grouping I normally use for unit/functional tests. Quite often there's a method within "Subsystem1Test" that is called "testScenario1" or maybe "testScenarioInsufficientCredit". I prefer to give my scenarios names.
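A sketch of what a "testScenarioInsufficientCredit"-style test could look like in Python (the shop domain is invented; the point is that the scenario data is a fixed, well-known dataset that several tests can walk through):

import copy

# The "known, well-defined" scenario dataset.
ACCOUNTS_SCENARIO = {
    "alice": {"balance": 5},    # known to have insufficient credit
    "bob":   {"balance": 100},  # known to have plenty
}

class Shop:
    def __init__(self, accounts):
        self.accounts = accounts
    def purchase(self, user, price):
        if self.accounts[user]["balance"] < price:
            return "declined: insufficient credit"
        self.accounts[user]["balance"] -= price
        return "ok"

def test_scenario_insufficient_credit():
    shop = Shop(copy.deepcopy(ACCOUNTS_SCENARIO))
    assert shop.purchase("alice", 10) == "declined: insufficient credit"

def test_scenario_successful_purchase():
    shop = Shop(copy.deepcopy(ACCOUNTS_SCENARIO))
    assert shop.purchase("bob", 10) == "ok"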
In addition to korsenvoid's response, in my experience scenario-based testing will often be automated, as it will be included in regression testing. Regression testing is regularly automated, since doing it manually does not scale well with regular releases.
In commercial software, good examples of scenario tests are the tutorials included with the user documentation. These obviously must work in each release or be removed from the docs, and hence must be tested.
While you can carry out scenario testing using sequenced unit tests, my guess is that it is more common to use GUI based automation tools. For example, I use TestComplete in this role with a scripting framework to good effect. Scenario tests are typically carried out from a user/client perspective which can be difficult to accurately replicate at a unit level.
IMHO, scenario testing is a testing activity, as opposed to a development activity; hence it's about testing a product, not unit(s) of that product. The test scenarios are end-to-end scenarios, using the natural interfaces of the product. If the product has programmatic interfaces, then you could use a unit test framework, or FitNesse.