I'm a newbie to software testing. Can anyone please help me understand
"Orthogonal Array Testing"?
I went through some articles, but they only mention that it's a kind of black-box testing technique. I need more information on it. Please provide that.
Orthogonal Array Testing Strategy (or "OATS") is a test case selection approach that selects a highly varied set of test scenarios in order to find as many bugs as possible in as few tests as possible. It is a powerful test design approach that is gaining in popularity because it has been shown to increase the efficiency and effectiveness of testing in many different testing contexts. (Disclaimer: I created Hexawise, a tool that generates orthogonal array-like sets of software tests, so I may be biased about the benefits of this test design approach.)
Using OATS, testers can strategically identify a manageable number of high-priority tests in situations where there might be thousands, millions, billions, or gazillions of possible permutations to choose from. OATS is based on the knowledge that the vast majority of defects in production today can be detected by testing every possible 2-way (or pairwise) combination of test inputs, and that defects that can only be triggered by interactions involving 3 or more specific inputs are quite rare. (Search for reports by Dr. Rick Kuhn for specific data supporting this; he has been involved in many studies, and several of them are summarized in the articles below.)
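To make the pairwise idea concrete, here is a toy Python sketch (my own illustration, not how any particular tool works internally) that greedily adds test cases until every 2-way combination of parameter values appears in at least one test. The parameter names and values are invented for the example:

```python
from itertools import combinations, product

def pairwise_tests(parameters):
    """Greedily pick test cases until every 2-way combination of
    parameter values is covered by at least one test."""
    names = list(parameters)
    # Every value pair that must appear together in some test.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in parameters[a]
        for vb in parameters[b]
    }
    tests = []
    while uncovered:
        best, best_gain = None, -1
        # Exhaustively scan candidate tests; fine for small examples.
        for values in product(*(parameters[n] for n in names)):
            candidate = dict(zip(names, values))
            gain = sum(
                1 for pair in uncovered
                if all(candidate[k] == v for k, v in pair)
            )
            if gain > best_gain:
                best, best_gain = candidate, gain
        tests.append(best)
        uncovered = {
            pair for pair in uncovered
            if not all(best[k] == v for k, v in pair)
        }
    return tests

# Hypothetical parameters: 3 x 3 x 2 = 18 exhaustive combinations,
# but all pairs are typically covered in roughly 9-10 tests.
params = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os": ["Windows", "macOS", "Linux"],
    "account": ["free", "paid"],
}
for t in pairwise_tests(params):
    print(t)
```

Note that this greedy sketch only aims for 2-way coverage; real tools use more sophisticated algorithms and can also target higher-order (3-way and above) coverage.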
Here are some clear introductory materials about OATS (and the extremely closely related topic of pairwise test design):
[Pairwise Testing](http://www.developsense.com/pairwiseTesting.html) by Michael Bolton describes the concepts quite clearly. Mid-way through the article, he correctly and clearly draws a distinction between the very closely related topics of orthogonal arrays vs. all-pairs (AKA "pairwise") testing that most articles gloss over.
[Combinatorial Software Testing](https://hexawise.com/Combinatorial-Software-Testing-Case-Studies-IEEE-Computer-Kuhn-Kacker-Lei-Hunter.pdf) by Rick Kuhn (NIST), Raghu Kacker (NIST), Yu Lei (UTexas at Arlington), and Justin Hunter (Hexawise).
A fun, image-rich presentation on the subject is [Combinatorial Software Test Design - Beyond Pairwise Testing](http://www.slideshare.net/JustinHunter/combinatorial-software-testdesignbeyondpairwisetesting).
You might also find this related StackExchange question to be of interest. In my answer to it, I explain why pairwise (AKA AllPairs) solutions are usually superior to orthogonal array-based solutions for software testers: when you use a pairwise test generator, you can generate more efficient sets of tests that meet your coverage goal with fewer tests: https://sqa.stackexchange.com/questions/775/systematic-approaches-to-selection-of-test-data/780#780
The above materials will give you a relatively thorough understanding of the basic principles. Unfortunately, not enough has been written about how to apply these techniques in different testing contexts; that's where things get interesting and valuable. Applying this test design technique well takes analytical skill, the development of some new techniques and strategies, and practice. For anyone wanting a deeper dive into the topic, I'd suggest the articles and presentations at pairwisetesting.com as well as help.hexawise.com and training.hexawise.com.
Does anyone know of relevant statistics about the positive impact of using test/behavior-driven development in real projects? I know statistics can be very misleading, but it would be nice to see something like:
"When we started using TDD, we increased productivity and reduced the number of bugs introduced by XY%..."
It would be really nice to show these numbers to managers/customers when explaining the need to write tests (there are still some people who think we don't have time for this...).
Thanks
I have collected the following resources so far:
Realizing quality improvement through test driven development: results and experiences of four industrial teams (Microsoft Research):
http://research.microsoft.com/en-us/groups/ese/nagappan_tdd.pdf
also available at:
http://www.springerlink.com/content/q91566748q234325/?p=7fd98b01480f49e2925f36393c999a72&pi=3
Test driven development: empirical body of evidence (ITEA):
http://www.agile-itea.org/public/deliverables/ITEA-AGILE-D2.7_v1.0.pdf
A Longitudinal Study of the Use of a Test-Driven Development Practice in Industry (IBM):
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.104.6319&rep=rep1&type=pdf
Evaluating Pair Programming with Respect to System Complexity and Programmer Expertise (IEEE):
http://simula.no/research/se/publications/Arisholm.2006.2/simula_pdf_file
There is a discussion on InfoQ:
http://www.infoq.com/news/2009/03/TDD-Improves-Quality
Also check out this question:
Evidence based studies on the topic of best programming practices?
I am interested in studies and papers detailing trials that explore the evidence for different development practices in object-oriented languages. I am particularly keen on studies that measure productivity or consider the influence of modern IDEs. Can you recommend any good resources for this? Has much work been done in this area of late?
For better or worse, empirically-driven productivity metrics are synonymous with Agile these days.
One that looks interesting, from (shockingly) the agile research paper list:
http://www.agilealliance.org/index.php/download_file/view/18/
It appears as though this is an ongoing research area.
Today I read this blog entry by Roger Alsing about how to paint a replica of the Mona Lisa using only 50 semi-transparent polygons.
I'm fascinated by the results for that particular case, so I was wondering (and this is my question): how does genetic programming work, and what other problems could be solved by genetic programming?
There is some debate as to whether Roger's Mona Lisa program is Genetic Programming at all. It seems to be closer to a (1 + 1) Evolution Strategy. Both techniques are examples of the broader field of Evolutionary Computation, which also includes Genetic Algorithms.
Genetic Programming (GP) is the process of evolving computer programs (usually in the form of trees - often Lisp programs). If you are asking specifically about GP, John Koza is widely regarded as the leading expert. His website includes lots of links to more information. GP is typically very computationally intensive (for non-trivial problems it often involves a large grid of machines).
If you are asking more generally, evolutionary algorithms (EAs) are typically used to provide good approximate solutions to problems that cannot be solved easily using other techniques (such as NP-hard problems). Many optimisation problems fall into this category. It may be too computationally-intensive to find an exact solution but sometimes a near-optimal solution is sufficient. In these situations evolutionary techniques can be effective. Due to their random nature, evolutionary algorithms are never guaranteed to find an optimal solution for any problem, but they will often find a good solution if one exists.
Evolutionary algorithms can also be used to tackle problems that humans don't really know how to solve. An EA, free of any human preconceptions or biases, can generate surprising solutions that are comparable to, or better than, the best human-generated efforts. It is merely necessary that we can recognise a good solution if it were presented to us, even if we don't know how to create a good solution. In other words, we need to be able to formulate an effective fitness function.
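To make that idea concrete, here is a minimal (1 + 1) evolution strategy in Python, the technique mentioned above: one parent, one mutated child per step, and a fitness function that lets us recognise the better of the two. The objective function, mutation size, and step count are all assumptions made purely for illustration:

```python
import random

def fitness(x):
    """Toy objective (assumed for this example): squared distance from a
    point we want the search to find.  Lower is better."""
    target = [3.0, -1.5, 0.5]
    return sum((xi - ti) ** 2 for xi, ti in zip(x, target))

def mutate(x, sigma=0.1):
    """Add a small Gaussian perturbation to every coordinate."""
    return [xi + random.gauss(0, sigma) for xi in x]

# (1 + 1) evolution strategy: keep one parent, create one mutated child,
# and retain whichever of the two scores better on the fitness function.
parent = [random.uniform(-5, 5) for _ in range(3)]
for _ in range(20000):
    child = mutate(parent)
    if fitness(child) <= fitness(parent):
        parent = child

print("best point:", [round(v, 3) for v in parent],
      "fitness:", round(fitness(parent), 6))
```

A genetic algorithm adds a whole population and crossover on top of this basic mutate-and-select loop; the pseudocode further down this page shows that generational structure.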
Some Examples
Travelling Salesman
Sudoku
EDIT: The freely-available book, A Field Guide to Genetic Programming, contains examples of where GP has produced human-competitive results.
Interestingly enough, the company behind the dynamic character animation used in games like Grand Theft Auto IV and the latest Star Wars game (The Force Unleashed) used genetic programming to develop movement algorithms. The company's website is here and the videos are very impressive:
http://www.naturalmotion.com/euphoria.htm
I believe they simulated the nervous system of the character, then randomised the connections to some extent. They then combined the 'genes' of the models that walked furthest to create more and more able 'children' in successive generations. Really fascinating simulation work.
I've also seen genetic algorithms used in path finding automata, with food-seeking ants being the classic example.
Genetic algorithms can be used to solve almost any optimization problem. However, in a lot of cases there are better, more direct methods to solve them. GAs belong to the class of metaheuristic algorithms, which means they can adapt to pretty much anything you can throw at them, provided you can come up with a way of encoding a potential solution, combining/mutating solutions, and deciding which solutions are better than others. GAs have an advantage over a pure hill-climbing algorithm in that, like simulated annealing, they can cope better with local maxima.
I used genetic programming in my thesis to simulate evolution of species based on terrain, but that is of course the A-life application of genetic algorithms.
The problems GAs are good at are hill-climbing problems. The trouble is that it is normally easier to solve most of these problems by hand, unless the factors that define the problem are unknown and you can't obtain that knowledge any other way (say, things related to societies and communities), or unless you have a good algorithm but need to fine-tune its parameters; in those situations GAs are very useful.
One fine-tuning situation I've worked on was tuning several Othello AI players based on the same algorithms, giving each a different play style and thus making each opponent unique, with its own quirks. I then had them compete to cull the top 16 AIs, which I used in my game. The advantage was that they were all very good players of roughly equal skill, so it was interesting for the human opponent because they couldn't read the AI as easily.
http://en.wikipedia.org/wiki/Genetic_algorithm#Problem_domains
You should ask yourself: "Can I (a priori) define a function to determine how good a particular solution is relative to other solutions?"
In the Mona Lisa example, you can easily determine whether the new painting looks more like the source image than the previous painting, so genetic programming can be "easily" applied.
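As a sketch of what such a fitness function might look like for the Mona Lisa case (assuming the candidate polygons have already been rendered to an image of the same size as the source, and using NumPy arrays purely for illustration):

```python
import numpy as np

def image_fitness(candidate_pixels, target_pixels):
    """Lower is better: sum of squared per-channel differences between the
    rendered candidate and the source image (both H x W x 3 uint8 arrays)."""
    diff = candidate_pixels.astype(np.int64) - target_pixels.astype(np.int64)
    return int((diff ** 2).sum())
```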
I have some projects that use genetic algorithms. GAs are ideal for optimization problems where you cannot develop a fully sequential, exact algorithm to solve the problem. For example: what's the best combination of a car's characteristics to make it both faster and more economical?
At the moment I'm developing a simple GA to build playlists. My GA has to find the best combinations of albums/songs that are similar (this similarity will be "calculated" with the help of last.fm) and suggest playlists to me.
There's an emerging field in robotics called Evolutionary Robotics (w:Evolutionary Robotics), which uses genetic algorithms (GA) heavily.
See w:Genetic Algorithm:
Simple generational genetic algorithm pseudocode:

    Choose initial population
    Evaluate the fitness of each individual in the population
    Repeat until termination (time limit or sufficient fitness achieved):
        Select best-ranking individuals to reproduce
        Breed new generation through crossover and/or mutation (genetic operations) and give birth to offspring
        Evaluate the individual fitnesses of the offspring
        Replace worst-ranked part of population with offspring
The key is the reproduction part, which can happen sexually or asexually, using the genetic operators crossover and mutation.
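As a rough illustration of that pseudocode (not the method from the blog post), here is a small generational GA in Python that evolves a string toward a target phrase. The target, population size, elite count, and mutation rate are arbitrary choices for the example, and single-point crossover plus per-character mutation stand in for the genetic operators:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"   # assumed toy target phrase
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
POP_SIZE, ELITE, MUTATION_RATE = 100, 20, 0.02

def fitness(individual):
    """Number of characters already matching the target (higher is better)."""
    return sum(c == t for c, t in zip(individual, TARGET))

def crossover(a, b):
    """Single-point crossover of two parent strings."""
    cut = random.randrange(1, len(TARGET))
    return a[:cut] + b[cut:]

def mutate(individual):
    """Replace each character with a random one, with probability MUTATION_RATE."""
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in individual
    )

# Choose initial population
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(POP_SIZE)]
generation = 0

# Repeat until termination (sufficient fitness achieved)
while max(fitness(p) for p in population) < len(TARGET):
    # Select best-ranking individuals to reproduce
    population.sort(key=fitness, reverse=True)
    parents = population[:ELITE]
    # Breed new generation through crossover and mutation, replacing the
    # worst-ranked part of the population with the offspring
    offspring = [mutate(crossover(*random.sample(parents, 2)))
                 for _ in range(POP_SIZE - ELITE)]
    population = parents + offspring
    generation += 1

print(f"Reached the target in {generation} generations")
```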
I'm interested in knowing how many developers use each of the major languages/platforms, but I haven't been able to find any good recent surveys. If you know of any good data, please provide a link along with a brief synopsis of the survey's methodology (who they surveyed and how etc.) and its conclusions (16% of developers use Java, 12% use RoR etc.).
I have no affiliation with the Tiobe Index, but it is cited often for these kinds of questions. Its accuracy and methodology are sometimes questioned, since metrics like these must be very difficult to gather reliably.
See this Dr Dobb's article for more...
Probably the nearest to anything objective would be to aggregate the revenues of vendors of development platforms, to the extent that it is possible.
Job ads can be indicative of what the industry is after; here are some stats for the UK.
Though not directly what you're after, it might be interesting.