How to run tests in a test case one by one with gtest - googletest

I have three tests. How can I configure them to run in this order:
a.setup() -> a -> a.teardown() -> b.setup() -> b -> b.teardown() -> c.setup() -> c -> c.teardown()?

The order SetUp() -> TestBody() -> TearDown() is always guaranteed for each test. So your question is really whether you can enforce the order A, B, C.
First of all, if your tests rely on execution order, something's fishy. A test should always be self-contained.
To answer your question: GTest allows you to shuffle tests using --gtest_shuffle. This is false by default, i.e. your test execution order will be deterministic. The order is determined by how the tests are registered, i.e. the order in which they are written in the code.
If you have them distributed over multiple files, it depends on the compilation order.

How to measure static test coverage?

So, DLang (effectively) comes with code coverage built in. That's cool.
My issue is, I do a lot of metaprogramming. I tend to test my templates with static asserts:
template CompileTimeFoo(size_t i)
{
    enum CompileTimeFoo = i + 3;
}
unittest
{
    static assert(CompileTimeFoo!5 == 8);
}
Unfortunately (and obviously), when running tests with coverage, the body of CompileTimeFoo is not counted as "hit" lines, nor even as "hittable" lines.
Now, let's consider a slightly more complicated template:
enum IdentifierLength(alias X) = __traits(identifier, X).length;
void foo(){}
static assert(IdentifierLength!foo == 3);
In this case there still are no "hits", but there is one hittable line (where foo is defined). Because of that, my coverage falls to 0% (in this example). If you look at this submodule of my pet project and at its coverage on codecov, you'll see this exact case - I've prepared a not-that-bad test suite, yet it looks like the whole module is a wildland, because coverage is 0% (or close to it).
Disclaimer: I want to keep my tests in a different source set. This is a matter of taste, but I dislike mixing tests with production code. I don't know exactly what would happen if I wrapped foo in version(unittest){...}, but I expect that (since this code is still pushed through the compiler) it wouldn't change much.
Again: I DO understand why this happens. I was wondering if there is some trick to work around it. Specifically: is there a way to calculate coverage for things that get called ONLY at compile time?
I could hack testing for coverage's sake when I do mixins and just test code-generating things by comparing strings at runtime, but: 1. this would be ugly, and 2. it doesn't cover the case above.

Seed random number for unit tests in golang

I have functions that use math/rand to "randomly" sample, one from a Poisson distribution and another from a binomial distribution. They are often used by other functions that also return random values, like h(g(f())), where f(), g() and h() are random functions.
I placed a rand.Seed(n) call in main() to pick a different seed every time the program is run, and it works fine.
My question is about unit tests for these PRNG functions and the functions that use them, using the built-in testing package. I would like to remove the randomness so that I have a predictable value to compare against.
Where is the best place to put my constant seed to get deterministic output? In the init() of the test file, inside every test function, or somewhere else?
You should certainly not put it in the test file's init() function. Why? Because the execution order of test functions (or even whether they are run at all) is non-deterministic. For details, see How to run golang tests sequentially?
What does this mean?
If you have 2 test functions (e.g. TestA() and TestB()), both of which test functions that call into math/rand, you have no guarantee whether TestA() runs before TestB(), or even whether either of them will be called at all. So the random data returned by math/rand will depend on this order.
A better option would be to put the seeding into TestA() and TestB() themselves, but this may still be insufficient, as tests may run in parallel, so the random data returned by math/rand may again be non-deterministic.
To get truly deterministic test results, functions that need random data should receive a *rand.Rand value and use it explicitly. In your tests you can then create separate, distinct *rand.Rand values that are not used by other tests, seed them with constant values, and pass them to the functions under test. Only then will you have deterministic results that do not depend on how and in which order the test functions are called.
As an alternative to passing in a *rand.Rand, you could monkey patch if you don't want dependency injection to be part of your package's API, e.g.: https://play.golang.org/p/cIGxhO0wSbo
To stay compatible with parallel test execution, create your own *rand.Rand.
My example includes Check() from the standard package testing/quick, which is often used to execute tests on a hundred pseudo-random arguments. (Similar to the OP's case, it makes your tests depend heavily on how the RNG is seeded.)
package main

import (
    "math/rand"
    "testing"
    "testing/quick"
)

func TestRandomly(t *testing.T) {
    r := rand.New(rand.NewSource(0))
    config := &quick.Config{Rand: r}
    assertion := func(num uint8) bool {
        // fail test when argument is 254
        return num != 254
    }
    if err := quick.Check(assertion, config); err != nil {
        t.Error("failed checks", err)
    }
}

How do I mark tests as Passed/Skipped/Ignored in serenity?

I keep getting a PhantomJS error - UnreachableBrowserException.
I want to mark the test as skipped or passed in the catch block of this managed exception. How do I do that?
You can use JUnit's Assume class at the beginning of your test to mark the test as ignored/skipped based on a condition.
For example:
@Test
public void myTest() {
    Assume.assumeTrue(yourBooleanCondition);
    // continue your test steps...
}
You can read about its different applications here.
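For the specific situation in the question - skipping when the browser cannot be reached - one option is to catch the exception and turn it into a skip with Assume.assumeNoException(). Below is a minimal sketch assuming JUnit 4; the step method is a placeholder, and Serenity's own reporting may still record the result differently:
import org.junit.Assume;
import org.junit.Test;
import org.openqa.selenium.remote.UnreachableBrowserException;

public class BrowserDependentTest {
    @Test
    public void myTest() {
        try {
            runBrowserSteps(); // placeholder for the real Serenity/WebDriver steps
        } catch (UnreachableBrowserException e) {
            // turns the failure into a skipped (ignored) test instead of a failed one
            Assume.assumeNoException(e);
        }
    }
    private void runBrowserSteps() {
        // hypothetical helper - replace with the actual test steps
    }
}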
However, being able to mark a failed test as passed goes against the testing mantra and defeats the purpose of writing such tests in the first place. I don't know of any framework that would allow you to do that. If you absolutely have to, my guess is that you would have to fiddle with the results.

How to execute all the methods sequentially in testng

I have many methods in my class. When I run the code, the methods are called in random order, but in my class every method depends on its predecessor, i.e. the 2nd method depends on the 1st method, the 3rd method depends on the 2nd method, and so on. I want to execute all the methods sequentially.
I have tried the approaches below and tested the code, but the methods are still called in random order.
// using sequential
@Test(sequential = true)
public void Method1() {
}
@Test(sequential = true)
public void Method2() {
}
// using singleThreaded
@Test(singleThreaded = true)
public void Method1() {
}
@Test(singleThreaded = true)
public void Method2() {
}
I have also set the following in the testng.xml:
<test name="Test" preserve-order="true" annotations="JDK">
  <classes>
    <class name="com.test">
      <methods>
        <include name="method1"/>
        <include name="method2"/>
        <include name="method3"/>...
      </methods>
    </class>
  </classes>
</test>
</suite>
When I tested it with @Test(dependsOnMethod=""), instead of executing the methods sequentially, the methods get skipped.
How do I execute the tests sequentially in TestNG?
If you want to run all your test methods in a specific order, just add a priority to your @Test annotation, like the following:
@Test(priority = 0)
public void function1() {
}
@Test(priority = 1)
public void function2() {
}
@Test(priority = 5)
public void function3() {
}
@Test(priority = 3)
public void function4() {
}
@Test(priority = 2)
public void function5() {
}
In this case function1 is called first, then function2; after that, function5 is called instead of function3 because of its lower priority value.
dependsOnMethods will mark your tests as skipped if the method they depend on fails. This is logically correct: if testB depends on testA and testA fails, then there's no use in running testB. If all you need is that testB runs after testA, and testB doesn't depend on the result of testA, then add alwaysRun = true to your @Test annotation. These are known as soft dependencies. Refer here.
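For example, a soft dependency that only enforces ordering might look like this (a minimal sketch; method names are illustrative):
@Test
public void testA() {
}
// runs after testA, and still runs even if testA fails
@Test(dependsOnMethods = {"testA"}, alwaysRun = true)
public void testB() {
}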
First off, you don't need the sequential or singleThreaded parameters. Listing the individual methods in the suite is all you need. Your problem probably lies somewhere else. Make sure you're starting the test run from your suite file, not from the class itself.
In case you don't want to use suites every time (because it's tedious, error-prone and inflexible), here are a few solutions to this problem.
Put dependent steps into one test method and make extensive use of Reporter.log(String, boolean). This is similar to System.out.println(String), but it also saves the string to the TestNG reports. For the second argument you always want to pass true - it tells TestNG whether the message should also be printed to STDOUT. If you do this before every step, the test output alone should be enough to identify the problematic (read: failing) test steps.
Additionally, when doing this you have the possibility of using soft assertions. This basically means that you don't have to abort and fail a whole test only because one optional step doesn't work. You can continue until the next critical point, or until the end, and abort then. The errors will still be recorded, and you can decide whether to mark a test run as failed or unstable.
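A minimal sketch of that pattern (the step contents and assertion conditions are placeholders):
import org.testng.Reporter;
import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class CombinedStepsTest {
    @Test
    public void createUpdateDeleteFlow() {
        SoftAssert softly = new SoftAssert();
        Reporter.log("Step 1: create the record", true);
        // ... perform step 1 ...
        softly.assertTrue(true, "record should have been created"); // placeholder assertion
        Reporter.log("Step 2: update the record", true);
        // ... perform step 2 ...
        softly.assertTrue(true, "record should have been updated"); // placeholder assertion
        // fail the test now if any of the soft assertions above failed
        softly.assertAll();
    }
}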
Use @Test(priority = X), where X is a number. Mark all your test methods with a priority and they'll be executed in order of the priority number, from lowest to highest. The upside of this is that you can insert methods between existing steps, and the individual annotations stay independent. The downside, however, is that this doesn't enforce any hard dependencies, only the order; i.e. if method testA with priority = 1 fails, method testB with priority = 2 will be executed nonetheless.
You could probably work around this with a listener, though. I haven't tried that yet.
Use @Test(dependsOnMethods = {"testA"}). Note that the argument here is not a string but a list of strings (you have that wrong in your post). The upside is hard dependencies: when testB depends on testA, a failure of testA marks testB as skipped. The downside is that you have to put all your methods into a very strict chain where every method depends on one other method. If you break this chain, e.g. by having multiple methods that depend on nothing, or several methods that depend on the same method, you'll get into hell's kitchen...
Using both priority and dependsOnMethods together unfortunately doesn't get you where you'd expect: priority is mostly ignored when hard dependencies are used.
You can control the execution order using @Priority with listeners. See this link - http://beust.com/weblog/2008/03/29/test-method-priorities-in-testng/
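The linked post describes a custom @Priority annotation combined with a method interceptor; a rough sketch of the same idea, using TestNG's built-in priority value instead (the class name here is illustrative):
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import org.testng.IMethodInstance;
import org.testng.IMethodInterceptor;
import org.testng.ITestContext;

// reorders the test methods by their @Test(priority = ...) value before the run starts
public class PriorityInterceptor implements IMethodInterceptor {
    @Override
    public List<IMethodInstance> intercept(List<IMethodInstance> methods, ITestContext context) {
        List<IMethodInstance> ordered = new ArrayList<>(methods);
        ordered.sort(Comparator.comparingInt((IMethodInstance m) -> m.getMethod().getPriority()));
        return ordered;
    }
}
The interceptor can then be attached with @Listeners(PriorityInterceptor.class) on the test class, or as a <listener> entry in testng.xml.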
Try using the dependsOnMethods dependency from the TestNG framework.
I assume, based on your annotations, that you're using TestNG. I agree with others here that 'sequential' is not the way to go.
Option 1: dependsOnMethods
If your intent is that downstream methods are not even attempted when an upstream dependency fails, try dependsOnMethods instead. With this setup, if an earlier test fails, the later tests are skipped (rather than failed).
Like this:
// using dependsOnMethods
@Test
public void method1() {
    // this one passes
}
@Test(dependsOnMethods = {"method1"})
public void method2() {
    fail("assume this one fails");
}
@Test(dependsOnMethods = {"method1"})
public void method3() {
    // this one runs, since it depends on method1
}
@Test(dependsOnMethods = {"method2"})
public void method4() {
    // this one is skipped, since it depends on method2
}
Option 2: priority
If your intent is that all tests are executed regardless of whether upstream tests pass, you can use priority.
Like this (a sketch along the same lines as above, with placeholder method bodies):
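// using priority
@Test(priority = 1)
public void method1() {
    // runs first
}
@Test(priority = 2)
public void method2() {
    fail("assume this one fails");
}
@Test(priority = 3)
public void method3() {
    // still runs, even though method2 failed, because priority only controls ordering
}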
Other notes
I agree wholeheartedly with sircapsalot that tests should be small and self-contained. If you're using a lot of dependencies, you probably have a problem with your overall test design.
That said, there are instances where some tests need to run first, others need to run last, etc. And there are instances where tests should be skipped if others fail. For example, if you have tests "updateOneRecord" and "updateManyRecords", you could logically skip updateManyRecords if updateOneRecord didn't work.
(And as a final note, your method names should begin with lowercase characters.)
This might not be the best design. I'd advise you to try to move away from it.
Good test architecture requires that each test method be SELF-SUFFICIENT and not depend on other tests completing first. Why? Say Test 2 depends on Test 1, and Test 1 fails. Now Test 2 will fail too. Eventually you'll have Tests 1, 2, 3, 4 and 5 all failing, and you won't even know what the original cause was.
My recommendation to you would be to create self-sufficient, maintainable, and short tests.
This is a great read which will help you in your endeavours:
http://www.lw-tech.com/q1/ug_concepts.htm

Unit Testing in Visual Studio

I want to create a load of unit tests to make sure my stored procedures are working, but I'm failing (I'm new to testing in Visual Studio).
Basically I want to do the following:
<TestClass()>
Public Class StoredProcedureTests
    Dim myGlobalVariable As Integer
    <TestMethod()>
    Public Sub Test()
        ' use a stored procedure to insert a record
        ' myGlobalVariable = result from the sp
    End Sub
    <TestMethod()>
    Public Sub Test2()
        ' use a stored procedure to modify the record we just added
    End Sub
    <TestMethod()>
    Public Sub Test3()
        ' use a stored procedure to delete the record we just added
    End Sub
End Class
The problem is that because the tests don't run sequentially, tests 2 and 3 fail because the global variable isn't set.
Advice? :'(
The key word here is 'unit'.
A unit test should be self-contained, i.e. it should comprise the code to perform the test, and it should not rely on other tests being executed first or affect the operation of other tests.
See the list of TDD anti-patterns here for things that you should avoid when writing tests.
http://blog.james-carr.org/2006/11/03/tdd-anti-patterns/
Check out the TestInitializeAttribute. You would place this on a method that should run before every test to allocate the appropriate resources.
One side note, since it looks like you're misinterpreting how these should work: unit tests should not require artifacts from other tests. If you're testing modifications, the initialize/setup method(s) should create the record that is to be modified.
Check out Why TestInitialize gets fired for every test in my Visual Studio unit tests?
I think that will point you in the right direction. Instead of running it as a test, you could run it as a TestInitialize.
There are also 'Ordered Tests' in Visual Studio, but they break the idea of each test running independently.
First off, the test you describe doesn't sound like a unit test, but more like an integration test. A unit test typically tests a single unit of functionality in your code, isolated from the rest of the system, and runs in memory. An integration test aims at verifying that the components of the system, when assembled together, work as intended.
Then, without going into the details of the system, it looks to me like I would approach it as one single test calling multiple methods - something along the lines of:
[Test]
public void Verify_CreateUpdateDelete()
{
    CreateEntity();
    // assert that the entity exists
    UpdateEntity();
    // assert that the entity has been updated
    DeleteEntity();
    // assert that the entity has been deleted
}