Branch testing

if (condition1)
    dosomething1();
if (condition2)
    dosomething2();
if (condition3)
    dosomething3();
Is it full branch testing if I have two test cases in this example:
condition1 = condition2 = condition3 = true;
condition1 = condition2 = condition3 = false;
Or have I misunderstood it?
I'm trying to figure out the difference between branch and path testing. I get path testing, so I hope this is correct.

Branch Testing:
Testing in which all branches in the program source code are tested at least once.
Yes; you are performing correct branch testing, since all your branches are hit. In fact you can remove your second test case, since case 1 executes all the branches.
Obviously branch testing is less encompassing than path testing, since its likelihood of hitting dependencies is low, and as such it ought not to be your only form of testing.

As per my understanding, Branch coverage is also known as Decision coverage, and it covers both the true and false outcomes, unlike statement coverage. With an IF statement, the exit can either be TRUE or FALSE, depending on the value of the logical condition that comes after IF.
Let us take one example to explain Branch coverage:
IF "A > B"
PRINT A is greater than B
ENDIF
So the Test Set for 100% branch coverage will be:
Test Case 1: A=5, B=2 which will return true.
Test Case 2: A=2, B=5 which will return false.
So in your case, both test cases 1 and 2 are required for Branch coverage.
With only test case 1, you get statement coverage.

I disagree with the chosen answer's claim that you can remove the second test case!
Wikipedia's definition of Branch testing states:
"Branch coverage – Has each branch (also called DD-path) of each control structure (such as in if and case statements) been executed? For example, given an if statement, have both the true and false branches been executed? Another way of saying this is, has every edge in the program been executed?"Link here: https://en.wikipedia.org/wiki/Code_coverage
Also check out this video lecture from Georgia Tech's Computer Science program on branch testing, where this requirement is demonstrated in action.
Link here: https://www.youtube.com/watch?v=JkJFxPy08rk

To achieve 100% basis path coverage, you need to define your basis set. The cyclomatic complexity of this method is four (one plus the number of decisions), so you need to define four linearly independent paths. To do this, you pick an arbitrary first path as a baseline, and then flip decisions one at a time until you have your basis set.
Path 1: Any path will do for your baseline, so pick true for the decisions' outcomes (represented as TTT). This is the first path in your basis set.
Path 2: To find the next basis path, flip the first decision (only) in your baseline, giving you FTT for your desired decision outcomes.
Path 3: You flip the second decision in your baseline path, giving you TFT for your third basis path. In this case, the first baseline decision remains fixed with the true outcome.
Path 4: Finally, you flip the third decision in your baseline path, giving you TTF for your fourth basis path. In this case, the first baseline decision remains fixed with the true outcome.
So, your four basis paths are TTT, FTT, TFT, and TTF. Now, make up your tests and see what happens.
Remember, the goal of basis path testing is to test all decision outcomes independently of one another
(Extract from http://www.codign.com/pathbranchcode.html)
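To make this concrete, here is a sketch of how the four basis paths above could be driven; the dosomething bodies and the run() driver are illustrative stand-ins for the question's snippet, not part of the extract:
#include <iostream>

// Illustrative stand-ins for the question's calls; they only trace execution.
void dosomething1() { std::cout << "1 "; }
void dosomething2() { std::cout << "2 "; }
void dosomething3() { std::cout << "3 "; }

void run(bool condition1, bool condition2, bool condition3) {
    if (condition1) dosomething1();
    if (condition2) dosomething2();
    if (condition3) dosomething3();
    std::cout << '\n';
}

int main() {
    run(true,  true,  true);   // Path 1: TTT (baseline)
    run(false, true,  true);   // Path 2: FTT (first decision flipped)
    run(true,  false, true);   // Path 3: TFT (second decision flipped)
    run(true,  true,  false);  // Path 4: TTF (third decision flipped)
    return 0;
}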

If I understand what you are asking, then you may need eight test cases to completely cover the alternatives in the given code. For example, what if dosomething2() relies on some other state set up by dosomething1()? Your test cases would not catch that requirement.
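As a hedged illustration of that point (the shared pointer and the helper bodies are invented for this sketch), both of the original test cases can pass while a hidden dependency between dosomething1() and dosomething2() goes untested:
#include <iostream>
#include <string>

std::string *shared = nullptr;                        // hypothetical state set up by dosomething1()

void dosomething1() { shared = new std::string("ready"); }
void dosomething2() { std::cout << *shared << '\n'; } // silently assumes dosomething1() ran first
void dosomething3() {}

void run(bool c1, bool c2, bool c3) {
    if (c1) dosomething1();
    if (c2) dosomething2();
    if (c3) dosomething3();
}

int main() {
    run(true, true, true);       // original test case 1: passes
    run(false, false, false);    // original test case 2: passes
    // run(false, true, false);  // untested combination: in a fresh process this
                                 // dereferences a null pointer
    return 0;
}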

Yes, you understand correctly. Branch testing is just "all branches are executed."

Related

gcov/lcov + googletest create an artificially low branch coverage report

First, I am well aware of the "hidden branch" problem caused by throws/exceptions. This is not that.
What I am observing is:
My test framework (googletest) has testing macros (EXPECT_TRUE for example).
I write passing tests using the macros.
Measuring branch coverage now asymptotes at 50% because I have not evaluated that test in both a passing and a failing condition...
Consider the following:
TEST(MyTests, ContrivedTest)
{
    EXPECT_TRUE(function_that_always_returns_true());
}
Now, assuming that I have every line and every branch perfectly covered in function_that_always_returns_true(), this branch coverage report will asymptote at 50% (because gcov does not observe line 3 evaluating in a failing condition, intentionally).
The only idea that I've had around this issue is that I could exclude the evaluation macros with something like LCOV_EXCL_BR_LINE, but this feels both un-ergonomic and hacky.
TEST(MyTests, ContrivedTest)
{
    bool my_value = function_that_always_returns_true();
    EXPECT_TRUE(my_value); // LCOV_EXCL_BR_LINE
}
This cannot be a niche problem, and I have to believe that people successfully use googletest with lcov/gcov. What do people do to get around this limitation?
After looking for far too long, I realized that all the testing calls I want to filter out are of the pattern EXPECT_*. So simply adding:
lcov_excl_br_line=LCOV_EXCL_BR_LINE|EXPECT_*
to my lcovrc solved my problem.

Unconditional branching and code coverage

So I have learned that branch coverage differs from decision coverage in that branch coverage typically also includes unconditional branches, e.g. method calls, use of throw, break, and other keywords in C#.
But I wonder, is this kind of branch coverage actually used in code analyzers? I suspect they use decision coverage, making sure that all decision outcomes (i.e. resulting branches) are covered.
I mean, the following code has 2 conditional branches, but 5 unconditional ones:
if (A)
{
    B();
    C();
    D();
    E();
}
else
{
    X();
}
And I believe that if I write a test that evaluates A to just false, the code analyzers will tell me that the branch coverage is 50%. But from the unconditional-branch perspective, far more branches will not have been executed.
Is that correct?
Branch coverage doesn't tell you if a decision has been tested as both true and false.
Example:
if (c) {
    x = ...
}
y = ...
If c evaluates to TRUE, the block containing x=... is executed, and branch coverage will detect that. It will also detect that the code starting at y has been executed. So you'll get 100% coverage if c is true, without having any idea what happens if c is false.
With decision coverage, you would know that c has been evaluated and has produced both TRUE and FALSE, if you had 100% coverage.
If your conditional if has a then block and an else block, then branch coverage and decision coverage will give you the same information.
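A small sketch of that last point, assuming a simple then/else shape: to reach 100% branch coverage here, c must have been observed both true and false, so branch and decision coverage report the same thing.
int f(bool c) {
    int x = 0, z = 0;
    if (c) {
        x = 1;   // "then" edge: requires a test with c == true
    } else {
        z = 1;   // "else" edge: requires a test with c == false
    }
    return x + z;
}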

Coverage Criteria: what are independent conditions exactly?

When speaking of coverage criteria such as MCDC (Modified Condition/Decision Criteria)...
It is stated that "Every point of entry and exit in the program has been invoked at least once, every condition in a decision in the program has taken all possible outcomes at least once, and each condition has been shown to affect that decision outcome independently. A condition is shown to affect a decision's outcome independently by varying just that condition while holding fixed all other possible conditions. [...]"
- https://en.wikipedia.org/wiki/Modified_condition/decision_coverage
This description is rather vague about what constitutes an independent condition... So, what are they? Examples are helpful in any language (C-family/Python/Haskell preferred).
The Wikipedia definition is an informal statement; a more precise definition of MC/DC is:
For each condition c, in each decision d:
There is a test such that c == true.
There is a test such that c == false.
If the outcome of d when c == true is x, then the outcome of d when c == false must be !x.
All other conditions in d evaluate identically in both test cases.
If it is possible to create a test set which meets these criteria, then this shows that each condition is not redundant: each condition influences the control of the program in at least some situation (as there is a test case that demonstrates this). This is what is meant by "independently influences the outcome".
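As a concrete, hedged example in C-family code (the decision a && b and the names below are illustrative, not from the definition above), the three tests shown form one minimal MC/DC set:
#include <cassert>

bool decision(bool a, bool b) { return a && b; }

int main() {
    // 'a' shown to independently affect the outcome (b held fixed at true):
    assert(decision(true,  true)  == true);   // test TT
    assert(decision(false, true)  == false);  // test FT
    // 'b' shown to independently affect the outcome (a held fixed at true):
    assert(decision(true,  false) == false);  // test TF (pairs with TT above)
    return 0;
}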

In any program, doesn't 100% statement coverage imply 100% branch coverage?

While solving MCQs for a practice test I came across this statement - "In any program 100% statement coverage implies 100% branch coverage" - and it is termed incorrect. I think it's a correct statement, because if we cover all the statements then it means we also cover all the paths and hence all the branches. Could someone please shed more light on this one?
Consider this code:
...
if (SomeCondition) DoSomething();
...
If SomeCondition is always true, you can have 100% statement coverage (SomeCondition and DoSomething() will be covered), but you never exercise the case when the condition is false, when you skip DoSomething().
In the example below, a = true will cover 100% of the statements but fails to test the branch where a division-by-zero fault is possible.
int fun(bool a) {
    int x = 0;
    if (a) x = 1;
    return 100 / x;
}
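A sketch of a driver for the two test values (this main() is purely illustrative): the first call alone reaches every statement, while the second is what branch coverage demands and what exposes the fault (left commented out so the sketch runs cleanly).
#include <cstdio>

int fun(bool a) {          // same function as above
    int x = 0;
    if (a) x = 1;
    return 100 / x;
}

int main() {
    std::printf("%d\n", fun(true));  // 100% statement coverage on its own
    // fun(false);                   // needed for 100% branch coverage; x stays 0
                                     // and 100 / x faults
    return 0;
}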
For a test set to achieve 100% branch coverage, every branching point in the code must have been taken in each direction, at least once.
The archetypical example, showing that 100% statement coverage does not imply 100% branch coverage, was already given by Alexey Frunze. It is a consequence of the fact that (at least in the majority of programming languages) it is possible to have branches that do not involve statements (such a branch basically skips the statements in the other branch).
The reason for wanting 100% branch coverage, rather than just 100% statement coverage, is that your tests must also show that skipping some statements works as expected.
My main reason for providing this answer is to point out that the converse, viz. "100% branch coverage implies 100% statement coverage" is correct.
Just because you cover every statement doesn't mean that you covered every branch the program could have taken.
You have to look at every possible branch, not just the statements inside every branch.

What is the difference between an IF, CASE, and WHILE statement

I just want to know what the difference is between all the conditional statements in Objective-C and which one is faster and lighter.
One piece of advice: stop worrying about which language constructs are microscopically faster or slower than which others, and instead focus on which ones let you express yourself best.
If and case statements described
While statement described
Since these statements do different things, it is unproductive to debate which is faster.
It's like asking whether a hammer is faster than a screwdriver.
The language-agnostic version (mostly; obviously this doesn't apply to declarative languages or other weird ones):
When I was taught programming (quite a while ago, I'll freely admit), a language consisted of three ways of executing instructions:
sequence (doing things in order).
selection (doing one of many things).
iteration (doing something zero or more times).
The if and case statements are both variants on selection. If is used to select one of two different options based on a condition (using pseudo-code):
if condition:
    do option 1
else:
    do option 2
keeping in mind that the else may not be needed, in which case it's effectively "else do nothing". Also remember that option 1 or option 2 may itself consist of any of the statement types, including more if statements (called nesting).
Case is slightly different - it's generally meant for more than two choices like when you want to do different things based on a character:
select ch:
    case 'a','e','i','o','u':
        print "is a vowel"
    case 'y':
        print "never quite sure"
    default:
        print "is a consonant"
Note that you can use case for two options (or even one) but it's a bit like killing a fly with a thermonuclear warhead.
While is not a selection variant but an iteration one. It belongs with the likes of for, repeat, until and a host of other possibilities.
As to which is fastest, it doesn't matter in the vast majority of cases. The compiler writers know far more than we mortal folk about how to get the last bit of performance out of their code. You either trust them to do their job right or you hand-code it in assembly yourself (I'd prefer the former).
You'll get far more performance by concentrating on the macro view rather than the minor things. That includes selection of appropriate algorithms, profiling, and targeting of hot spots. It does little good to find something that takes five minutes each month and get that running in two minutes. Better to get a smaller improvement in something happening every minute.
The language constructs like if, while, case and so on will already be as fast as they can be since they're used heavily and are relatively simple. You should be first writing your code for readability and only worrying about performance when it becomes an issue (see YAGNI).
Even if you found that using if/goto combinations instead of case allowed you to run a bit faster, the resulting morass of source code would be harder to maintain down the track.
while isn't a conditional; it is a loop. The difference being that the body of a while loop can be executed many times, whereas the body of a conditional will only be executed once or not at all.
The difference between if and switch is that if accepts an arbitrary expression as the condition and switch just takes values to compare against. Basically if you have a construct like if(x==0) {} else if(x==1) {} else if(x==2) ..., it can be written much more concisely (and effectively) by using switch.
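A minimal sketch of that rewrite (the values and printed strings are illustrative):
#include <cstdio>

void handle_if(int x) {
    if (x == 0)      { std::puts("zero"); }
    else if (x == 1) { std::puts("one"); }
    else if (x == 2) { std::puts("two"); }
    else             { std::puts("other"); }
}

void handle_switch(int x) {            // same behaviour expressed as a switch
    switch (x) {
        case 0:  std::puts("zero");  break;
        case 1:  std::puts("one");   break;
        case 2:  std::puts("two");   break;
        default: std::puts("other"); break;
    }
}

int main() {
    for (int x = 0; x < 4; ++x) {
        handle_if(x);
        handle_switch(x);
    }
    return 0;
}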
A case statement could be written as
if (a)
{
    // Do something
}
else if (b)
{
    // Do something else
}
But the case is much more efficient, since it only evaluates the conditional once and then branches.
while is only useful if you want a condition to be evaluated, and the associated code block executed, multiple times. If you expect a condition to only occur once, then it's equivalent to if. A more apt comparison is that while is a more generalized for.
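A small sketch of that "generalized for" point (purely illustrative): the two loops below do the same thing, with the for loop's init/test/step written out explicitly around a while.
#include <cstdio>

int main() {
    // A for loop...
    for (int i = 0; i < 3; ++i) {
        std::printf("for:   %d\n", i);
    }

    // ...is the same iteration spelled out with while: init, test, body, step.
    int j = 0;                          // init
    while (j < 3) {                     // test
        std::printf("while: %d\n", j);  // body
        ++j;                            // step
    }
    return 0;
}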
Each condition statement serves a different purpose and you won't use the same one in every situation. Learn which ones are appropriate for which situation and then write your code. If you profile your code and find there's a bottleneck, then you go ahead and address it. Don't worry about optimizing before there's actually a problem.
Are you asking whether an if structure will execute faster than a switch statement inside of a large loop? If so, I put together a quick test; this code was put into the viewDidLoad method of a new view-based project I just created in the latest Xcode and iPhone SDK:
NSLog(#"Begin loop");
NSDate *loopBegin = [NSDate date];
int ctr0, ctr1, ctr2, ctr3, moddedNumber;
ctr0 = 0;
ctr1 = 0;
ctr2 = 0;
ctr3 = 0;
for (int i = 0; i < 10000000; i++) {
moddedNumber = i % 4;
// 3.34, 1.23s in simulator
if (moddedNumber == 0)
{
ctr0++;
}
else if (moddedNumber == 1)
{
ctr1++;
}
else if (moddedNumber == 2)
{
ctr2++;
}
else if (moddedNumber == 3)
{
ctr3++;
}
// 4.11, 1.34s on iPod Touch
/*switch (moddedNumber)
{
case 0:
ctr0++;
break;
case 1:
ctr1++;
break;
case 2:
ctr2++;
break;
case 3:
ctr3++;
break;
}*/
}
NSTimeInterval elapsed = [[NSDate date] timeIntervalSinceDate:loopBegin];
NSLog(#"End loop: %f seconds", elapsed );
This code sample is by no means complete, because as pointed out earlier if you have a situation that comes up more times than the others, you would of course want to put that one up front to reduce the total number of comparisons. It does show that the if structure would execute a bit faster in a situation where the decisions are more or less equally divided among the branches.
Also, keep in mind that the results of this little test varied widely in performance between running it on a device vs. running it in the emulator. The times cited in the code comments are running on an actual device. (The first time shown is the time to run the loop the first time the code was run, and the second number was the time when running the same code again without rebuilding.)
There are conditional statements and conditional loops. (If Wikipedia is to be trusted, then simply referring to "a conditional" in programming doesn't cover conditional loops. But this is a minor terminology issue.)
Shmoopty said "Since these statements do different things, it is nonsensical to debate which is faster."
Well... it may be time poorly spent, but it's not nonsensical. For instance, let's say you have an if statement:
if (cond) {
    code
}
You can transform that into a loop that executes at most one time:
while (cond) {
    code
    break;
}
The latter will be slower in pretty much any language (or the same speed, because the optimizer turned it back into the original if behind the scenes!). Still, there are occasions in computer programming where (due to bizarre circumstances) the convoluted thing runs faster.
But those incidents are few and far between. The focus should be on your code: what makes it clearest, and what captures your intent.
Loops and branches are hard to explain briefly; getting the best code out of a construct in any C-style language depends on the processor used and the local context of the code. The main objective is to reduce the breaking of the execution pipeline, primarily by reducing branch mispredictions.
I suggest you go here for all your optimization needs. The manuals are written for the C-style programmer and are relatively easy to understand if you know some assembly. These manuals should explain to you the subtleties in modern processors, the strategies used by top compilers, and the best way to structure code to get the most out of it.
I just remembered the most important thing about conditionals and branching code. Order your code as follows:
if (x == 1)      { /* ... */ }  // 80% of the time
else if (x == 2) { /* ... */ }  // 10% of the time
else if (x == 3) { /* ... */ }  //  6% of the time
else             { /* ... */ }  // everything else
You must use an else sequence, and in this case the prediction logic in your CPU will predict correctly for x == 1 and avoid breaking your pipeline for 80% of all executions.
More information from Intel. Particularly:
In order to effectively write your code to take advantage of these rules, when writing if-else or switch statements, check the most common cases first and work progressively down to the least common. Loops do not necessarily require any special ordering of code for static branch prediction, as only the condition of the loop iterator is normally used.
By following this rule you are flat-out giving the CPU hints about how to bias its prediction logic towards your chained conditionals.