Orthogonal and Combinatorial testing techniques

What is the orthogonal testing technique?
What is the combinatorial testing technique?
What is the difference between them?
I went through Wikipedia and other articles and books, but I am still unable to understand them.

Orthogonal testing - Orthogonal array testing is a black-box testing technique. It is used when the number of inputs to the application under test is relatively small, but still too large for exhaustive testing. It is very effective at finding errors.
All-pairs testing - A combinatorial technique that tests every possible pair of input parameter values.
1. Pairwise testing is more efficient and effective than orthogonal arrays for software testing.
2. Orthogonal arrays are more effective in fields such as manufacturing, agriculture, and advertising.
3. A pairwise strategy is better suited to software than an orthogonal array strategy, because orthogonal arrays impose balance requirements (each pair must appear an equal number of times) that software testing does not need.
4. Both techniques reduce the number of test cases compared with exhaustive testing.
5. All-pairs testing almost always requires fewer test cases than orthogonal array testing; occasionally both produce the same number.
Both strategies offer much the same features; which one to choose depends on the requirements.
Please refer to this PDF for more details:
http://www.51testing.com/ddimg/uploadsoft/20090113/OATSEN.pdf
A tutorial on pairwise testing, with examples:
http://www.tutorialspoint.com/software_testing_dictionary/all_pairs_testing.htm

Refer to the link below; I think it will be useful to you.
http://www.softwaretestinghelp.com/combinational-test-technique/

An orthogonal array can be of any strength.
When it covers combinations of any 2 variables, it is called all-pairs.
It can cover combinations of any 3, 4, 5, ... n variables, based on the business need. All-pairs is just one subclass of orthogonal testing. Hope this helps.
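To make the distinction concrete, here is a minimal sketch of a greedy all-pairs (strength-2) test selector in Python; the parameters and values are made-up examples, and real pairwise tools use more sophisticated algorithms:

```python
from itertools import combinations, product

# Hypothetical parameters for a web-app test; any small model works.
parameters = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os":      ["Windows", "macOS", "Linux"],
    "locale":  ["en", "de"],
}
names = list(parameters)

# Every value pair that a strength-2 (all-pairs) suite must cover.
uncovered = {
    ((a, va), (b, vb))
    for a, b in combinations(names, 2)
    for va in parameters[a]
    for vb in parameters[b]
}

def pairs_of(test):
    """All parameter-value pairs exercised by one test case."""
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

suite = []
while uncovered:
    # Greedily pick the full-factorial candidate covering the most
    # still-uncovered pairs (fine for toy models, slow for big ones).
    best = max(
        (dict(zip(names, values)) for values in product(*parameters.values())),
        key=lambda t: len(pairs_of(t) & uncovered),
    )
    suite.append(best)
    uncovered -= pairs_of(best)

print(len(suite), "tests instead of", 3 * 3 * 2)
for test in suite:
    print(test)
```

For this toy model, exhaustive testing needs 18 cases, while the greedy suite typically covers every pair in 9 or 10.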

How do you test an implementation of HyperLogLog?

There are so many HyperLogLog implementations out there, but how do you verify/test a HyperLogLog implementation? To check its "accuracy", its "error"-bound behavior? Just throwing some static test cases at it looks very ineffective.
More concretely: if someone changes the random number routine, how do I know that it is not a disastrous choice, and show that with automated, repeatable tests?
Can anyone point me to any known-good tests on GitHub or elsewhere, and maybe some explanations?
Good question. First, note that while HyperLogLog's theoretical foundation offers some indication of accuracy, it is critical to test the implementation you are using.
Testing should use random datasets (additional static datasets are also possible) and should be applied across varying set cardinalities. If you have a test automation framework in place, that is a natural place to guard against regressions, as you suggested above. Note, however, that measuring accuracy at large cardinalities may make test runtime prohibitive.
You can use the implementation below for reference. It includes unit tests which draw large numbers of random numbers, and check the accuracy at fixed intervals.
https://github.com/Microsoft/CardinalityEstimation
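As a rough illustration of such a unit test, here is a sketch in Python; `make_hll` and its `add`/`count` methods are hypothetical stand-ins for whatever API your implementation exposes, and the 5% bound is deliberately loose compared with typical HyperLogLog error:

```python
import random

def test_hll_relative_error(make_hll,
                            cardinalities=(1_000, 100_000, 1_000_000),
                            rel_error_bound=0.05, seed=42):
    rng = random.Random(seed)          # fixed seed => repeatable test
    for n in cardinalities:
        hll = make_hll()               # hypothetical factory for your HLL
        seen = set()
        while len(seen) < n:           # feed n distinct random values
            x = rng.getrandbits(64)
            seen.add(x)
            hll.add(x)
        estimate = hll.count()
        rel_error = abs(estimate - n) / n
        assert rel_error < rel_error_bound, (
            f"cardinality {n}: estimate {estimate}, "
            f"relative error {rel_error:.3%} exceeds {rel_error_bound:.0%}")

# Usage (adapt to your implementation's constructor):
# test_hll_relative_error(lambda: HyperLogLog(precision=14))
```

Because the seed is fixed, a change to the hash or random routine that degrades accuracy will fail this test reproducibly.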

What are coverage metrics for specification or requirement based testing?

What are the different metrics we can use to assess the quality of test suites written based only on requirements and specifications (black box)?
Simply put: given a set of requirements and a test suite built on those requirements, what are the different metrics that quantify the quality of specification/requirement-based testing (the test suite)?
I read through the following articles on specification-based testing and metrics for it, but these topics are too abstract to digest.
http://link.springer.com/chapter/10.1007%2F978-3-642-21768-5_13#page-1
http://www.worldscientific.com/doi/abs/10.1142/S0218539301000530
Can you please explain in simple terms?
Thanks!
The simplest way to evaluate specification-based testing is to trace each specification to a test (whether manual or automated), count which specifications are tested and which are not, and calculate percent coverage.
The confusion related to the articles you linked to is due to confusion between "specification" used to refer to a human-written, structured but relatively informal document, and "specification" meaning a formal computer-readable specification from which tests can be automatically derived.
It's also possible to measure code coverage during specification-based testing. However, it's very difficult to improve coverage without looking inside the black box. Also, specification-based tests are slow, even when automated, so it's painful to achieve code coverage using only specification-based tests. A better approach is to combine black-box specification-based tests and white-box unit tests and consider overall coverage.
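As a toy illustration of the traceability approach, here is a sketch in Python; the requirement IDs and test names are invented:

```python
# Compute requirements coverage from a traceability mapping.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

# Which requirement(s) each test traces to (hypothetical tests).
trace = {
    "test_login_success": {"REQ-1"},
    "test_login_lockout": {"REQ-1", "REQ-2"},
    "test_export_csv":    {"REQ-3"},
}

covered = set().union(*trace.values())
uncovered = requirements - covered
coverage = 100.0 * len(covered) / len(requirements)

print(f"Requirement coverage: {coverage:.0f}%")    # 75%
print(f"Untested requirements: {sorted(uncovered)}")  # ['REQ-4']
```

The same mapping also answers the reverse question (which tests a requirement change invalidates), which is why traceability matrices are worth maintaining.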

Automatically create test cases for a web page?

If someone has a webpage, the usual way of testing the website for user-interaction bugs is to create each test case by hand and run it with Selenium.
Is there a tool to create these test cases automatically, so that if a webpage is altered, new test cases are created automatically?
You can look at a paid product. That type of technology is not being developed as open source and will probably cost a bit. Some of the major test tools come closer to this, but I have not heard of anything fully automatic.
If such a tool existed, the roles of QA Engineer, and especially Automation Engineer, would be far less important, and those jobs would decline quickly. I would imagine that if such a tool were out there, it would be breaking news to the entire industry, worldwide.
If you go down the artificial intelligence path, this is possible in theory and concept; however, artificial intelligence development efforts usually cost more than the app that needs the testing, so that's not going to happen.
The best you can do at this point is to separate as much of the maintenance as possible into a single place, so that you limit the maintenance headache when modifying and keep a core that stays the same. I usually make control manipulation generic, and let the workflow, the page-specific maps, and the data change. That allows the framework to function against any website, but you still have to write/update the tests and maintain the maps; a sketch of that separation follows.
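Here is a minimal sketch of that layering in Python with Selenium; the locators, URL, and page details are hypothetical:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Map layer: the only part that changes when the page changes.
LOGIN_MAP = {
    "user":   (By.ID, "username"),
    "pass":   (By.ID, "password"),
    "submit": (By.CSS_SELECTOR, "button[type=submit]"),
}

# Generic control layer: knows how to drive any control, knows no pages.
def fill(driver, locator, text):
    driver.find_element(*locator).send_keys(text)

def click(driver, locator):
    driver.find_element(*locator).click()

# Workflow layer: business steps expressed via the map and the controls.
def login(driver, user, password):
    fill(driver, LOGIN_MAP["user"], user)
    fill(driver, LOGIN_MAP["pass"], password)
    click(driver, LOGIN_MAP["submit"])

if __name__ == "__main__":
    driver = webdriver.Chrome()                 # requires a local ChromeDriver
    driver.get("https://example.test/login")    # hypothetical URL
    login(driver, "alice", "secret")
    driver.quit()
```

When the page changes, only the map layer needs updating; the control and workflow layers stay stable.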
I think Growing Test Cases Automatically is more of what you're asking about. To be more specific, I'll introduce the basics, and if you're interested, take a closer look at Evolutionary Testing.
Usually there is a standard set of constraints we meet, such as changing functionality of the system under test (SUT), a limited timeframe, a lack of appropriate test tools, and so on. Yet there is another type of challenge which arises as technological solutions progress further: the increase in system complexity.
While the typical constraints are solvable through different technical and management approaches, in the case of system complexity we face the limit of our capability to define a straightforward analytical method for assessing and validating system behavior. Complex systems consist of multiple, often heterogeneous components which, when working together, amplify each other's statistical and behavioral deviations, resulting in a system which acts in ways that were not part of its initial design. To make matters worse, the same mechanism makes complex systems more sensitive to their environment.
Options for testing complex systems
How can we test a system which behaves differently each time we run a test scenario? How can we reproduce a problem which costs days and millions to recover from, but happens only from time to time, under conditions which are known only approximately?
One possible solution, which I want to focus on, is to embrace our lack of knowledge and work with whatever we have by using evolutionary testing. In this context, evolutionary testing can be viewed as a variant of black-box testing, because we feed input into and evaluate output from an SUT without focusing on its internal structure. The fine line here is that we organize this process of automatic test case generation and execution, on a massive scale, as an iterative optimization process which mimics natural evolution.
Evolutionary testing
Elements:
• Population – the set of test case executions participating in the optimization process
• Generation – the part of the Population involved in a given iteration
• Individual – a single test case execution and its results; an element of the Population
• Genome – the unified definition of all test cases; a model describing the Population
• Genotype – a single test case instance; a model describing an Individual; an instance of the Genome
• Recombination – transformation of one or more Genotypes into a new Genotype
• Mutation – a random change in a Genotype
• Fitness Function – a formalized criterion expressing the suitability of an Individual against the goal of the optimization
How do we create these elements?
• Definition of the experiment goal (selection criteria) – sets the direction of the optimization process and is related to the behavior of the SUT. It involves certain characteristics of the SUT's state or environment during the performed test case experiments. Examples:
o "The SUT should complete the test case execution with an error code"
o "The test case should drive the SUT through the largest number of branches in the SUT's logical structure"
o "The ambient temperature in the room where the SUT is situated should not exceed 40 ºC during test case execution"
o "CPU utilization on the system where the SUT runs should exceed 80% during test case execution"
Any measurable parameters of the SUT and its environment can be used in a goal statement. Knowledge of the relation between the test input and the goal itself is not required. This makes it possible to cover goals which are derived directly from requirements, rather than based on some late requirement derivative like a business, architectural or technical model.
• Definition of the relevant inputs and outputs of the tested system – identification of the SUT inputs and outputs, as well as the environment parameters, relevant to the experiment goal.
• Formal definition of the experiment genome – encoding the summarized set of test cases into a parameterized model (usually a data structure) expressing the relevant SUT input data, environment parameters and action sequences. This definition also needs to support the two major operations applied over genome instances: recombination and mutation. The mechanism for these two operations can be predefined for the type of data or action present in the genome, or have custom definitions.
• Formal definition of the selection criteria (fitness function) – an evaluation mechanism which takes the SUT output or environment parameters resulting from a test case execution (an Individual) and calculates a number (the Fitness), signifying how close this particular Individual is to the experiment goal.
How does the process work?
1. We use the Genome to create a Generation of random Genotypes (test case instances).
2. We execute the test cases (Genotypes), generating results (Individuals).
3. We evaluate each execution result (Individual) against our goal using the Fitness Function.
4. We select only those Individuals from the given Generation which have a Fitness above a given threshold (the top 10%, above the average, etc.).
5. We use the selected Individuals to produce a new, full Generation by applying Recombination and Mutation.
6. We repeat the process, returning to step 2.
The iteration usually stops by setting a condition on the evaluated Fitness of a Generation. For example:
• If the top Fitness hasn't changed by more than 0.1% since the last iteration
• If the difference between the top and the bottom Fitness in a Generation is less than 0.3%
then it is probably time to stop.
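To make the loop concrete, here is a minimal self-contained sketch in Python; the genome encoding (a fixed-length list of integers), the stand-in `execute` step, the toy fitness goal, and the simple stopping rule are all assumptions made purely for illustration:

```python
import random

GENOME_LENGTH = 8     # a Genotype here is a fixed-length list of ints
POP_SIZE = 50
MUTATION_RATE = 0.1

def random_genotype():
    return [random.randint(0, 100) for _ in range(GENOME_LENGTH)]

def execute(genotype):
    # Placeholder for running the SUT with this test case; a real
    # implementation would return observed outputs/environment data.
    return genotype

def fitness(individual):
    # Toy selection criterion: prefer test inputs whose sum nears 400.
    return -abs(sum(individual) - 400)

def recombine(a, b):
    cut = random.randrange(1, GENOME_LENGTH)   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genotype):
    return [random.randint(0, 100) if random.random() < MUTATION_RATE else g
            for g in genotype]

# Step 1: create a Generation of random Genotypes.
generation = [random_genotype() for _ in range(POP_SIZE)]
best_prev = None
for _ in range(1000):                          # hard cap on iterations
    # Steps 2-3: execute test cases and evaluate each Individual.
    ranked = sorted((execute(g) for g in generation),
                    key=fitness, reverse=True)
    best = fitness(ranked[0])
    if best_prev is not None and best <= best_prev:
        break                    # top Fitness stopped improving (toy rule)
    best_prev = best
    # Step 4: select the top 10% as parents.
    parents = ranked[:max(2, POP_SIZE // 10)]
    # Steps 5-6: Recombination + Mutation produce the next Generation.
    generation = [mutate(recombine(*random.sample(parents, 2)))
                  for _ in range(POP_SIZE)]

print("best test case:", ranked[0], "fitness:", best)
```

In a real setup, `execute` would drive the SUT and the fitness function would read SUT outputs or environment measurements, as described in the element definitions above.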
Upsides and downsides
Upsides:
• We can work with limited knowledge of the SUT and goal-oriented test definitions.
• We use a test case model (the Genome) which allows us to mass-produce a large number of test cases (Genotypes) with little effort.
• We can "seed" test cases (Genotypes) in the first iteration, instead of generating them at random, in order to speed up the optimization process.
• We can run test cases in parallel to speed up the process.
• We can find multiple solutions which meet our test goal.
• If the optimization process is convergent, we have a guarantee that each following Generation is a better approximate solution to our test goal. This means that even if we need to stop before reaching optimal Fitness, we will still have better test cases than the ones we started with.
• We can replay very complex, hard-to-reproduce test scenarios which mimic real life and which are far beyond the reach of any other automated or manual testing technique.
Downsides:
• The process of defining the necessary elements for an evolutionary test implementation is non-trivial and requires specific knowledge.
• Implementing such an automation approach is time- and resource-consuming and should be employed only when it is justifiable.
• The convergence of the optimization process depends on the smoothness of the Fitness Function. If its definition results in zones of discontinuity or small/no gradient, we can expect slow or no convergence.
Update:
I also recommend that you look at Genetic algorithms; this article about Test data generation can give you approaches and guidelines.
I happen to develop ecFeed - an open-source tool that may assist in test design. It's in the pre-release phase, and we are going to add better integration with Selenium, but you can have a look at the current snapshot: https://github.com/testify-no/ecFeed/wiki . The next version should arrive in October and will have major improvements in usability. Anyway, I am looking forward to constructive criticism.
In the Microsoft development world there is Visual Studio's Coded UI Test framework. It will record your actions in a web browser and generate test cases to replicate that use case. It won't update the test cases with any changes to the code, though; you would need to update them manually or re-generate them.

Orthogonal Array Testing

I'm a newbie to software testing. Can anyone please help me understand
"Orthogonal Array Testing"?
I went through some articles, but they just mention that it's a kind of black-box testing technique. I need more info on it; please provide that.
Orthogonal Array Testing Strategy (or "OATS") is a test case selection approach that selects a highly varied set of test scenarios in order to find as many bugs as possible in as few tests as possible. It is a powerful test design approach that is gaining in popularity because it has proved to increase the efficiency and effectiveness of testing in many different types of testing contexts. (Disclaimer: I created Hexawise, a tool that generates orthogonal array-like sets of software tests, so I may be biased about the benefits of this test design approach.)
Using OATS, testers can strategically identify a manageable number of high-priority tests in situations where there might be thousands, millions, billions, or gazillions of possible permutations to choose from. OATS is based on the knowledge that the vast majority of defects in production today can be detected by testing every possible 2-way (or pairwise) combination of test inputs, and that defects that can only be triggered by interactions involving 3 or more specific inputs are quite rare. (Google reports by Dr. Rick Kuhn for specific data supporting this; he has been involved in many studies, several of which are summarized in the articles below.)
Here are some clear introductory materials about OATS (and the extremely closely related topic of pairwise test design):
[Pairwise Testing](http://www.developsense.com/pairwiseTesting.html) by Michael Bolton describes the concepts quite clearly. Mid-way through the article, he correctly and clearly draws a distinction between the very closely related topics of orthogonal arrays vs. all-pairs (AKA "pairwise") testing that most articles gloss over.
[Combinatorial Software Testing](https://hexawise.com/Combinatorial-Software-Testing-Case-Studies-IEEE-Computer-Kuhn-Kacker-Lei-Hunter.pdf) by Rick Kuhn (NIST), Raghu Kacker (NIST), Yu Lei (UTexas at Arlington) and Justin Hunter (Hexawise).
A fun, image-rich presentation on the subject is [Combinatorial Software Test Design - Beyond Pairwise Testing](http://www.slideshare.net/JustinHunter/combinatorial-software-testdesignbeyondpairwisetesting).
You might also find this related Stack Exchange question to be of interest. In my answer to it, I explain why pairwise (AKA AllPairs) solutions are usually superior to orthogonal array-based solutions for software testers. When you use a pairwise test generator, you will be able to generate more efficient sets of tests that meet your coverage goal with fewer tests: https://sqa.stackexchange.com/questions/775/systematic-approaches-to-selection-of-test-data/780#780
The above materials will give you a relatively thorough understanding of the basic principles. Unfortunately, not enough has been written about how to apply these techniques in different testing contexts; that's where things get interesting and valuable. Applying this test design technique well takes analytical skill, the development of some new techniques and strategies, and practice. For anyone wanting a deeper dive into the topic, I'd suggest the articles and presentations at pairwisetesting.com, as well as help.hexawise.com and training.hexawise.com.

How to test numerical analysis routines?

Are there any good online resources on how to create, maintain, and think about writing test routines for numerical analysis code?
One of the limitations I can see with something like testing matrix multiplication is that the obvious tests (like having one matrix be the identity) may not fully exercise the functionality of the code.
Also, you are usually dealing with large data structures. Does anyone have good ideas about ways to approach this, or pointers to good places to look?
It sounds as if you need to think about testing in at least two different ways:
Some numerical methods allow for some meta-thinking. For example, invertible operations allow you to set up test cases to check whether the result is within acceptable error bounds of the original. For example, the matrix inverse M^-1 times the matrix M times a random vector V should result in V again, to within some acceptable measure of error.
Obviously, this example exercises matrix inversion, matrix multiplication and matrix-vector multiplication. I like chains like these because you can generate quite a lot of random test cases and get statistical coverage that would be a slog to write by hand. They don't exercise single operations in isolation, though. A sketch of such a chained test follows.
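Here is that chained round-trip test as a sketch using NumPy; the tolerances and the condition-number cutoff are assumptions that you would tune for your domain:

```python
import numpy as np

rng = np.random.default_rng(0)           # fixed seed => repeatable

for _ in range(1000):
    M = rng.standard_normal((5, 5))
    if np.linalg.cond(M) > 1e6:          # skip near-singular matrices
        continue
    v = rng.standard_normal(5)
    w = np.linalg.inv(M) @ (M @ v)       # M^-1 (M v) should recover v
    assert np.allclose(w, v, rtol=1e-8, atol=1e-10)
```

One round trip exercises inversion, matrix-matrix and matrix-vector products together, which is exactly why a failure localizes poorly but catches a wide class of bugs cheaply.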
Some numerical methods have a closed-form expression of their error. If you can set up a situation with a known solution, you can then compare the difference between the solution and the calculated result, looking for a difference that exceeds these known bounds.
Fundamentally, this question illustrates the problem that testing complex methods well requires quite a lot of domain knowledge. Specific references would require a little more specific information about what you're testing. I'd definitely recommend that you at least have Steve Yegge's recommended book list on hand.
If you're going to be doing matrix calculations, use LAPACK. This is very well-tested code. Very smart people have been working on it for decades. They've thought deeply about issues that the uninitiated would never think about.
In general, I'd recommend two kinds of testing: systematic and random. By systematic I mean exploring edge cases, etc. It helps if you can read the source code. Algorithms often have branch points: calculate this way for numbers in this range, another way for numbers in another range, and so on. Test values close to the branch points on either side, because that's where approximation error is often greatest; see the sketch below.
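For instance, here is a sketch of probing a branch point in Python; the `sinc` implementation and its 1e-4 series threshold are made up, but the switch between a Taylor series and direct evaluation is typical:

```python
import math

def sinc(x):
    if abs(x) < 1e-4:                    # series branch for tiny |x|
        return 1.0 - x * x / 6.0
    return math.sin(x) / x               # direct branch otherwise

# Probe both sides of the 1e-4 branch point, where error often peaks.
for x in (9.999e-5, 1e-4, 1.0001e-4, -9.999e-5, -1.0001e-4):
    reference = math.sin(x) / x          # direct evaluation as reference
    assert math.isclose(sinc(x), reference, rel_tol=1e-12)
```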
Random input values are important too. If you pick all the test cases rationally, you may systematically avoid something that you don't realize is a problem. Sometimes you can make good use of random input values even when you don't have exact values to test against. For example, if you have code to calculate a function and its inverse, you can generate 1000 random values and check whether applying the function and then its inverse puts you back close to where you started.
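A sketch of that round-trip idea for a scalar function pair (exp/log here, purely as an illustration; the tolerances are assumptions):

```python
import math, random

rng = random.Random(1)                   # fixed seed => repeatable
for _ in range(1000):
    x = rng.uniform(-50.0, 50.0)
    roundtrip = math.log(math.exp(x))    # log is exp's inverse
    assert math.isclose(roundtrip, x, rel_tol=1e-12, abs_tol=1e-12)
```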
Check out the book The Science of Programming by David Gries. It's about proving the correctness of programs. If you want to be sure that your programs are correct (to the point of proving their correctness), this book is a good place to start.
It's probably not exactly what you're looking for, but it's the computer science answer to a software engineering question.