How to measure static test coverage?

So, DLang (effectively) comes with code coverage built in. That's cool.
My issue is, I do a lot of metaprogramming, and I tend to test my templates with static asserts:
template CompileTimeFoo(size_t i)
{
    enum CompileTimeFoo = i + 3;
}

unittest
{
    static assert(CompileTimeFoo!5 == 8);
}
Unfortunately (and obviously), when running tests with coverage, the body of CompileTimeFoo will not be counted as "hit" or even as "hittable" lines.
Now, let's consider a slightly more complicated template:
enum IdentifierLength(alias X) = __traits(identifier, X).length;
void foo(){}
static assert(IdentifierLength!foo == 3);
In this case there are still no "hits", but there is one hittable line (the one where foo is defined). Because of that, my coverage falls to 0% (in this example). If you look at this submodule of my pet project and at its coverage on Codecov, you'll see this exact case: I've prepared a not-that-bad test suite, yet it looks like the whole module is a wildland, because coverage is 0% (or close to it).
Disclaimer: I want to keep my tests in a different source set. This is a matter of taste, but I dislike mixing tests with production code. I don't know exactly what would happen if I wrapped foo in version(unittest){...}, but I expect that (since this code is still fed to the compiler) it wouldn't change much.
Again: I DO understand why this happens. I was wondering whether there is some trick to work around it. Specifically: is there a way to calculate coverage for things that get called ONLY at compile time?
I can hack around this for coverage's sake when I use mixins and just test the code-generating parts by comparing strings at runtime, but: 1. that would be ugly, and 2. it doesn't cover the case above.

Related

<!WHATEVER!> syntax in Kotlin? (Angle brackets wrapping exclamation points)

I saw this syntax I'm not familiar with in the Kotlin compiler test suite.
// !DIAGNOSTICS: +UNUSED_LAMBDA_EXPRESSION, +UNUSED_VARIABLE
fun unusedLiteral(){
    <!UNUSED_LAMBDA_EXPRESSION!>{ ->
        val <!UNUSED_VARIABLE!>i<!> = 1
    }<!>
}
What does <!UNUSED_LAMBDA_EXPRESSION!>...<!> mean?
Found in unusedLiteral.kt
The term UNUSED_LAMBDA_EXPRESSION is declared in Errors.kt to be:
DiagnosticFactory0<KtLambdaExpression> UNUSED_LAMBDA_EXPRESSION = DiagnosticFactory0.create(WARNING);
This syntax is not valid Kotlin. It is only used in the test data files of Kotlin's test pipeline; that is, only the test runners recognise this syntax, not the Kotlin compiler itself. Specifically, the <!DIAGNOSTIC_NAME!>foo<!> syntax denotes a handler. Handlers perform checks on things, or output information to a file. In this case, the syntax checks that the specified diagnostic is indeed emitted at that point in the file.
Also note that the // !DIAGNOSTICS comment at the top is not just a comment. It denotes a directive. Directives are like the options for running the test.
I highly recommend you read compiler/testData/diagnostics/ReadMe.md, which explains how diagnostic tests work specifically, and if you're really interested in this stuff, check out compiler/test-infrastructure/ReadMe.md too, which tells you all about how the whole test pipeline works in general.

Gmock - InSequence vs RetiresOnSaturation

I don't understand the following gmock example:
{
    InSequence s;
    for (int i = 1; i <= n; i++) {
        EXPECT_CALL(turtle, GetX())
            .WillOnce(Return(10*i))
            .RetiresOnSaturation();
    }
}
When I remove .RetiresOnSaturation() the above code works the same way: GetX returns 10, 20 and so on. What is the reason to use .RetiresOnSaturation() when we also use an InSequence object? Could you explain that?
In the exact example given, RetiresOnSaturation() doesn't change anything. Once the final expectation in the sequence is saturated, that expectation remains active but saturated; a further call would cause the test to fail.
RetiresOnSaturation() is generally used when overlaying expectations. For example:
class Turtle {
public:
    virtual int GetX() = 0;
};

class MockTurtle : public Turtle {
public:
    MOCK_METHOD0(GetX, int());
};

TEST(GmockStackoverflow, QuestionA)
{
    MockTurtle turtle;
    // General expectation - Perhaps set on the fixture class?
    EXPECT_CALL(turtle, GetX()).WillOnce(Return(0));
    // Extra expectation overlaying the general one
    EXPECT_CALL(turtle, GetX()).WillOnce(Return(10)).RetiresOnSaturation();
    turtle.GetX(); // matches the newer, overlaying expectation -> returns 10
    turtle.GetX(); // the overlaying expectation has retired -> returns 0
}
This property can be used in combination with InSequence when the sequence of expected events overlays another expectation. In this scenario the last expectation in the sequence must be marked RetiresOnSaturation(). Note that only the last expectation needs to be marked, because when an expectation in a sequence is saturated it retires its prerequisite expectations.
The example below demonstrates how this might work out in practice. Removing RetiresOnSaturation() causes the test to fail.
TEST(GmockStackoverflow, QuestionB)
{
    MockTurtle turtle;
    EXPECT_CALL(turtle, GetX()).WillOnce(Return(0));
    {
        InSequence s;
        EXPECT_CALL(turtle, GetX()).WillOnce(Return(10));
        EXPECT_CALL(turtle, GetX()).WillOnce(Return(10)).RetiresOnSaturation();
    }
    turtle.GetX(); // first expectation in the sequence -> returns 10
    turtle.GetX(); // second expectation in the sequence -> returns 10, now saturated and retired
    turtle.GetX(); // falls back to the general expectation -> returns 0
}
From my experience, some (ok, possibly many) developers have a problem, like a gtest error message, discover that RetiresOnSaturation() makes the problem go away, and then get into the habit of liberally sprinkling RetiresOnSaturation() throughout their unit tests – because it solves problems. This is apparently easier than reasoning about what the test case is supposed to accomplish. On the other hand, I like to think in terms of what has to happen (according to the documented API contract) in what order – which can be a partial order if you use After() or don't have everything in the same sequence – and that makes more expressive constructs like InSequence or After() come naturally to my mind.
So, as Adam Casey stated, there is no technical reason, but IMO there could be an issue of magical thinking or insufficient training.
I recommend avoiding RetiresOnSaturation(). There are some general issues with it (like causing confusing warning messages, see example below), but mostly it is too low level when compared to the alternatives, and is almost never needed if you have clean contracts, and use the previously mentioned alternatives correctly. You could say it's the goto of gtest expectations…
Addendum A: Example of when a gratuitous RetiresOnSaturation() makes for worse messages, and yes, I have seen such code:
EXPECT_CALL(x, foo()).WillOnce(Return(42)).RetiresOnSaturation();
If x.foo() is called more than once, let's say twice, then, without RetiresOnSaturation(), you would have received an error message like "No matching expectation for foo() … Expected: to be called once … Actual: called twice (oversaturated)", which is about as specific as possible. But because of RetiresOnSaturation(), you will only get an "Unexpected function call foo()" warning, which is confusing and meaningless.
Addendum B: In your example, it is also possible that a refactoring to use InSequence was made after the fact, and the person doing the refactoring didn't realize that RetiresOnSaturation() was now redundant. You could do a "blame" in your version control system to check.

Continuous improvement: Is it possible to specify the tests in advance?

I am used to "old fashioned" waterfall development cycles.
For a new project, continuous integration seems to better fit our needs.
In waterfall, you have to specify the tests you will implement in advance.
My questions:
What is the usual way with continuous integration development cycles regarding test specification?
If you don't specify the tests, can you imagine a way to specify them in advance?
Many thanks for your help.
At university we were taught that "test driven development" makes sense, especially if there is a proper coding specification.
If you're not able to write tests before coding, the coding spec should be more specific or it has issues.
I usually write unit tests based on the coding spec for my Java classes, which will afterwards be integrated and executed on our Jenkins continuous integration server.
Forgive me if I am wrong, but that's what I learned...
It always depends on the complexity of the required Java classes; trivial "domain" classes do not need much specification info.
In most cases we try to specify how the classes or methods should work (in words) and also write down some example values.
Let's say you should write a method that checks whether a value is in a specific range:
// Example specification:
// The method 'checkIfItsInRange' should return true when the input lies within
// the range and is divisible by the distance value.
// Let's say the range goes from -30.00 to +30.00 with a distance of 0.25.
// Valid values: 30, -30, 15.25, 15.50, 17.75 etc. -> return true
// Invalid values: -31, -30.01, +30.08, 0.4 etc. -> return false
// MissingParameterException when one of the parameters is null
public boolean checkIfItsInRange(BigDecimal from, BigDecimal to, BigDecimal distance, BigDecimal input) throws MissingParameterException {
    // TODO implement depending on spec.
}
In this case you can already write some unit tests before you start to implement the method itself.
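For example, a first JUnit 4 test written straight from that spec, before the method is implemented, might look roughly like this (the surrounding class name RangeChecker and the concrete test values are only assumptions for illustration):
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.math.BigDecimal;

import org.junit.Test;

public class RangeCheckerTest {

    // RangeChecker is a hypothetical class holding the checkIfItsInRange method from the spec
    private final RangeChecker checker = new RangeChecker();

    @Test
    public void valueInsideRangeAndOnDistanceGridReturnsTrue() throws MissingParameterException {
        assertTrue(checker.checkIfItsInRange(
                new BigDecimal("-30.00"), new BigDecimal("30.00"),
                new BigDecimal("0.25"), new BigDecimal("15.25")));
    }

    @Test
    public void valueOutsideRangeReturnsFalse() throws MissingParameterException {
        assertFalse(checker.checkIfItsInRange(
                new BigDecimal("-30.00"), new BigDecimal("30.00"),
                new BigDecimal("0.25"), new BigDecimal("-30.01")));
    }

    @Test(expected = MissingParameterException.class)
    public void nullParameterThrowsMissingParameterException() throws MissingParameterException {
        checker.checkIfItsInRange(null, new BigDecimal("30.00"),
                new BigDecimal("0.25"), new BigDecimal("1.00"));
    }
}
Such tests fail (or don't even compile) until checkIfItsInRange is implemented, which is exactly the red-green cycle of test driven development.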
I hope that makes things a bit clearer.

How to execute all the methods sequentially in TestNG

I have many methods in my class. When I run the code, the methods are called in random order, but in my class every method depends on its predecessor, i.e. the 2nd method depends on the 1st method, the 3rd method depends on the 2nd method and so on. I want to execute all the methods sequentially.
I have tried the methods below and tested the code, but the methods are still called in random order.
// using sequential
@Test(sequential = true)
public void Method1() {
}

@Test(sequential = true)
public void Method2() {
}

// using singleThreaded
@Test(singleThreaded = true)
public void Method1() {
}

@Test(singleThreaded = true)
public void Method2() {
}
I have passed the following parameters in the testng.xml as well:
<test name="Test" preserve-order="true" annotations="JDK">
    <classes>
        <class name="com.test">
            <methods>
                <include name="method1"/>
                <include name="method2"/>
                <include name="method3"/>...
            </methods>
        </class>
    </classes>
</test>
</suite>
When I tested it with @Test(dependsOnMethod=""), instead of executing the methods sequentially, the methods get skipped.
How do I execute the tests sequentially in TestNG?
If you want to run all your test methods in a specific order, just add a priority to your @Test annotation. See the following:
@Test(priority = 0)
public void function1() {
}

@Test(priority = 1)
public void function2() {
}

@Test(priority = 5)
public void function3() {
}

@Test(priority = 3)
public void function4() {
}

@Test(priority = 2)
public void function5() {
}
In this case function1 is called first, then function2; after that function5 is called instead of function3, because of its lower priority value (the full order is function1, function2, function5, function4, function3).
dependsOnMethods will mark your tests as skipped if the method they depend on fails. This is logically correct: if testB depends on testA and testA fails, then there is no use in running testB. If all you need is that testB runs after testA, and testB doesn't depend on the result of testA, then add alwaysRun = true to your @Test annotation. These are known as soft dependencies. Refer here.
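A minimal sketch of the difference, assuming TestNG and made-up method names:
import static org.testng.Assert.fail;

import org.testng.annotations.Test;

public class SoftDependencyTest {

    @Test
    public void testA() {
        fail("assume testA fails");
    }

    // Hard dependency: skipped, because testA failed.
    @Test(dependsOnMethods = {"testA"})
    public void testB() {
    }

    // Soft dependency: still runs after testA, even though testA failed.
    @Test(dependsOnMethods = {"testA"}, alwaysRun = true)
    public void testC() {
    }
}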
First off, you don't need the sequential or singleThreaded parameters. Listing the individual methods in the suite is all you need. Your problem probably lies somewhere else. Make sure you're starting the test using your suite, not the class itself.
In case you don't want to use suites every time (because it's tedious, error-prone and inflexible), here are a few solutions to this problem.
Put dependent methods into one method and make extensive use of Reporter.log(String, boolean). This is similar to System.out.println(String), but also saves the string to the TestNG reports. For the second argument you always want to pass true; it tells whether the message should also be printed to STDOUT. When doing this before every step, the test output alone should be enough to identify problematic (read: failing) test steps.
Additionally, when doing this you have the possibility of using soft assertions. This basically means that you don't have to abort and fail a whole test only because one optional step doesn't work. You can continue until the next critical point and abort then, or at the end. The errors will still be saved and you can decide whether you want to mark a test run as failed or unstable.
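A rough sketch of how those two ideas might be combined; the step messages and assertion values here are invented for illustration:
import org.testng.Reporter;
import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class CombinedStepsTest {

    @Test
    public void wholeScenarioAsOneMethod() {
        SoftAssert softly = new SoftAssert();

        Reporter.log("Step 1: prepare the data", true);
        softly.assertTrue(2 + 2 == 4, "optional sanity check");

        Reporter.log("Step 2: check an optional detail", true);
        // A failing soft assertion is only recorded here; the test keeps running.
        softly.assertEquals("actualValue", "expectedValue", "optional detail mismatch");

        Reporter.log("Step 3: finish the scenario", true);
        // assertAll() fails the test now, reporting every recorded soft assertion failure.
        softly.assertAll();
    }
}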
Use @Test(priority = X) where X is a number. Mark all your test methods with a priority and they'll be executed in the order of the priority number, from lowest to highest. The upside of this is that you can put methods in between steps and the single annotations are independent. The downside, however, is that this doesn't enforce any hard dependencies, only the order. I.e. if method testA with priority = 1 fails, method testB with priority = 2 will be executed nonetheless.
You could probably work around this using a listener, though. I haven't tried this yet.
Use @Test(dependsOnMethods = {"testA"}). Note that the argument here is not a string, but a list of strings (you have that wrong in your post). The upside is hard dependencies, meaning that when testB depends on testA, a failure of testA marks testB as skipped. The downside of this annotation is that you have to put all your methods in a very strict chain where every method depends on one other method. If you break this chain, e.g. by having multiple methods not depend on anything, or having some methods depend on the same methods, you'll get into hell's kitchen...
Using both priority and dependsOnMethods together doesn't get you where you'd expect it to unfortunately. Priority is mostly ignored when using hard dependencies.
You can control the execution using @Priority with listeners. See this link - http://beust.com/weblog/2008/03/29/test-method-priorities-in-testng/
Try using the dependsOnMethods dependency from the TestNG framework.
I assume, based on your annotations, that you're using TestNG. I agree with others here that 'sequential' is not the way to go.
Option 1: dependsOnMethods
If your intent is that downstream methods are not even attempted when an upstream dependency fails, try dependsOnMethods instead. With this setup, if an earlier test fails, further tests are skipped (rather than failed).
Like this:
// using dependsOnMethods
@Test
public void method1() {
    // this one passes
}

@Test(dependsOnMethods = {"method1"})
public void method2() {
    fail("assume this one fails");
}

@Test(dependsOnMethods = {"method1"})
public void method3() {
    // this one runs, since it depends on method1 (which passed)
}

@Test(dependsOnMethods = {"method2"})
public void method4() {
    // this one is skipped, since it depends on method2 (which failed)
}
Option 2: priority
If your intent is that all tests are executed, regardless of whether upstream test pass, you can use priority.
Like this:
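(A minimal sketch; the method names are purely illustrative.)
import static org.testng.Assert.fail;

import org.testng.annotations.Test;

public class PriorityOrderTest {

    @Test(priority = 1)
    public void stepOne() {
        fail("even though this one fails...");
    }

    @Test(priority = 2)
    public void stepTwo() {
        // ...this still runs, because priority only orders tests;
        // it does not create a dependency between them.
    }

    @Test(priority = 3)
    public void stepThree() {
        // runs last
    }
}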
Other notes
I agree wholeheartedly with sircapsalot, that tests should be small and self-contained. If you're using a lot of dependencies, you probably have a problem with your overall test framework.
That said, there are instances where some tests need to run first, others need to run last, etc. And there are instances where tests should be skipped if others fail. For example, if you have tests "updateOneRecord" and "updateManyRecords", you could logically skip multiple if one didn't work.
(And as a final note, your method names should begin with lowercase characters.)
This might not be the best design. I'd advise you to try to move away from this design.
Good test architecture requires that each test method be SELF-SUFFICIENT and not depend on other tests to complete before continuing. Why? Say Test 2 depends on Test 1, and Test 1 fails. Now Test 2 will fail too; eventually you'll have Tests 1, 2, 3, 4 and 5 failing, and you won't even know what the cause was.
My recommendation to you would be to create self-sufficient, maintainable, and short tests.
This is a great read which will help you in your endeavours:
http://www.lw-tech.com/q1/ug_concepts.htm

Static testing for Scala

There are some nice libraries for testing in Scala (Specs, ScalaTest, ScalaCheck). However, with Scala's powerful type system, important parts of an API being developed in Scala are expressed statically, usually in the form of some undesirable or disallowed behavior being prevented by the compiler.
So, what is the best way to test whether something is prevented by the compiler when designing a library or other API? It is unsatisfying to comment out code that is supposed to be uncompilable and then uncomment it to verify.
A contrived example testing List:
val list: List[Int] = List(1, 2, 3)
// should not compile
// list.add("Chicka-Chicka-Boom-Boom")
Does one of the existing testing libraries handle cases like this? Is there an approach that people use that works?
The approach I was considering was to embed code in a triple-quoted string or an XML element and call the compiler in my test. The calling code would look something like this:
should {
    notCompile(<code>
        val list: List[Int] = List(1, 2, 3)
        list.add("Chicka-Chicka-Boom-Boom")
    </code>)
}
Or, something along the lines of an expect-type script called on the interpreter.
I have created some specs executing some code snippets and checking the results of the interpreter.
You can have a look at the Snippets trait. The idea is to store in some org.specs.util.Property[Snippet] the code to execute:
val it: Property[Snippet] = Property(Snippet(""))
"import scala.collection.List" prelude it // will be prepended to any code in the it snippet
"val list: List[Int] = List(1, 2, 3)" snip it // snip some code (keeping the prelude)
"list.add(\"Chicka-Chicka-Boom-Boom\")" add it // add some code to the previously snipped code. A new snip would remove the previous code (except the prelude)
execute(it) must include("error: value add is not a member of List[Int]") // check the interpreter output
The main drawback I found with this approach was the slowness of the interpreter. I don't know yet how this could be sped up.
Eric.