Pruning tests in GoogleTest

When there are two tests with an internal dependency, i.e. modules A and B where B uses A, how can you avoid testing B when A has failed? (There is no point in such a test.)

By design, tests should not depend on each other's execution, so there is no mechanism in GoogleTest that would let you define such a dependency.
There is, however, the possibility to stop test execution at the first failure; see How to stop GTest test-case execution, when first test failed.
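If stopping at the first failure is enough, newer GoogleTest releases (1.11 and later) expose this directly via the --gtest_fail_fast flag (or the GTEST_FAIL_FAST environment variable). If you specifically want "skip B's tests when A's failed" inside one binary, one workable pattern is to record A's outcome yourself and skip B's tests with GTEST_SKIP(). A minimal sketch, assuming both modules are tested in the same binary and A's tests run before B's (the test names are invented):

    #include <gtest/gtest.h>

    // Set to true by module A's tests when any of them fails.
    static bool a_failed = false;

    TEST(ModuleA, Basics) {
      EXPECT_EQ(2 + 2, 4);
      // Remember the outcome so dependent tests can check it later.
      if (::testing::Test::HasFailure()) a_failed = true;
    }

    TEST(ModuleB, UsesA) {
      if (a_failed) GTEST_SKIP() << "Module A failed; skipping.";
      // ... actual test of B ...
    }

This stays within the framework (skipped tests are reported as skipped, not failed), but it relies on test ordering, which is exactly the kind of coupling GoogleTest discourages.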

Related

How to make CTest run a few tests within one executable

We have over 10 Google Test executables which, in sum, provide over 5000 test cases.
So far, our CTest configuration used the add_test command, which resulted in each test executable being a single test from CTest's point of view.
Recently, we used the GoogleTest module to replace the add_test command with the gtest_discover_tests command. This way, each of those over 5000 individual test cases is now visible to CTest as a separate test.
This worked quite nicely, improving parallel runs (a strong machine runs far more test cases in parallel than we have test executables) and allowing us to use the CTest command-line interface to filter test cases etc., abstracting away the testing framework used.
However, we hit a major blocker for Valgrind runs! Each individual test case is now run separately, causing the Valgrind machinery to be set up and torn down over 5000 times. The time for the full suite shot up from around 10 minutes to almost 2 hours, which is unacceptable.
Now I'm wondering whether there is any way to make CTest run tests in batches from the same executable by invoking the executable only once. We would do this for Valgrind runs but not for ordinary runs. I'm afraid there is no way, especially since it would probably require support in the GoogleTest module. But maybe someone has already hit a similar issue and solved it somehow?
I know a workaround would be to skip CTest for Valgrind runs. Just take the test executables and run them "manually" under Valgrind. Doable, probably also in an automated way (so the list of test executables is somehow "queried", perhaps with the --show-only argument to ctest, rather than hardcoded). But it makes the interface (command line, output, etc.) less consistent.
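One pragmatic approach, a sketch rather than anything the GoogleTest module supports out of the box, is to make the registration style itself configurable: a Valgrind configuration registers each executable as a single CTest test, while ordinary configurations keep per-case discovery. Assuming a cache option named BATCH_TESTS_FOR_MEMCHECK and a target named api_tests (and that enable_testing()/include(CTest) is done elsewhere):

    include(GoogleTest)
    find_package(GTest REQUIRED)

    option(BATCH_TESTS_FOR_MEMCHECK
           "Register whole test executables instead of individual test cases" OFF)

    add_executable(api_tests api_tests.cpp)
    target_link_libraries(api_tests PRIVATE GTest::gtest_main)

    if(BATCH_TESTS_FOR_MEMCHECK)
      # One CTest test per binary: Valgrind is set up and torn down once per executable.
      add_test(NAME api_tests COMMAND api_tests)
    else()
      # One CTest test per test case: maximal parallelism and fine-grained filtering.
      gtest_discover_tests(api_tests)
    endif()

For the "query the executables" variant, ctest --show-only=json-v1 (CMake 3.14+) prints every registered test's command line in machine-readable JSON, which can be reduced to the set of unique executables.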

Two tests dependent on the same test

I have two tests that depend on the same test (using dependsOn). Will these dependent test cases be executed one after the other?
I am using TestNG version 6.10.
Whether the dependent test cases run one after the other or together depends on your parallel execution strategy.
If you have set parallel='methods' in your suite file, then both the dependent methods will run together.
If you have disabled parallel execution (by setting parallel=false in your suite file), then the dependent methods will run one after the other. Which runs first and which runs next is not determined (since TestNG relies on reflection to query methods from a class).
But in either case, they will be executed only after the master test (the one on which both your tests depend) runs successfully.
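For reference, a minimal sketch of the setup being described (class and method names are invented):

    import org.testng.annotations.Test;

    public class DependentTests {

        // The "master" test that both dependent tests rely on.
        @Test
        public void masterTest() {
            // ... exercise the shared prerequisite ...
        }

        // Runs only if masterTest passed; otherwise TestNG skips it.
        @Test(dependsOnMethods = "masterTest")
        public void dependentA() {
            // ...
        }

        @Test(dependsOnMethods = "masterTest")
        public void dependentB() {
            // ...
        }
    }

Whether dependentA and dependentB then run concurrently or sequentially is controlled by the parallel and thread-count attributes of the <suite> element in your testng.xml.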

How to limit the number of test threads in Cargo.toml?

I have tests which share a common resource and can't be executed concurrently. These tests fail with cargo test, but work with RUST_TEST_THREADS=1 cargo test.
I can modify the tests to wait on a global mutex, but I don't want to clutter them if there is a simpler way to force cargo to set this environment variable for me.
As of Rust 1.18, there is no such thing. In fact, there is not even a simpler option to disable parallel testing.
However, what might help you is cargo test -- --test-threads=1, which is the recommended way of doing what you are doing, in preference to the RUST_TEST_THREADS environment variable. Keep in mind that this only sets the number of threads used for running tests, in addition to the main thread.
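If only a handful of tests share the resource, the global-mutex approach mentioned in the question is also quite small and leaves the rest of the suite parallel. A sketch on a recent toolchain (newer than the Rust 1.18 this answer refers to), where Mutex::new can be used in a static:

    use std::sync::Mutex;

    // Lock shared by every test that touches the common resource.
    static RESOURCE_LOCK: Mutex<()> = Mutex::new(());

    #[test]
    fn uses_shared_resource() {
        // Hold the lock for the whole test; recover it even if another
        // test panicked while holding it (a poisoned mutex).
        let _guard = RESOURCE_LOCK.lock().unwrap_or_else(|e| e.into_inner());
        // ... test body that uses the shared resource ...
    }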

Serial execution of package tests

I have implemented several packages for a web API, each with their own test cases. When each package is tested using go test ./api/pkgname, the tests pass. If I want to run all tests at once with go test ./api/..., the test cases always fail.
In each test case, I recreate the entire schema using DROP SCHEMA public CASCADE followed by CREATE SCHEMA public and apply all migrations. The test suite reports errors back at random, saying a relation/table does not exist, so I guess the test suites (one per package) are somehow run in parallel, messing up the DB state.
I tried to pass along some test flags like go test -cpu 1 -parallel 0 ./src/api/... with no success.
Could the problem here be tests running in parallel, and if yes, how can I force serial execution?
Update:
Currently I use this workaround to run the tests, but I still wonder if there's a better solution
find <dir> -type d -exec go test {} \;
As others have pointed out, -parallel doesn't do the job (it only works within packages). However, you can use the flag -p=1 to run through the package tests in series. This is documented here:
http://golang.org/src/cmd/go/testflag.go
but (afaict) not on the command line, go help, etc. I'm not sure it is meant to stick around (although I'd argue that if it is removed, -parallel should be fixed.)
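For example, with the path pattern from the question:

    go test -p 1 ./api/...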
The go tool is provided to make running unit tests easier, using the convention that *_test.go files contain the unit tests. Because it assumes they are unit tests, it also assumes they are hermetic. It sounds like your tests either aren't unit tests, or they are but violate the assumptions that a unit test should fulfill.
If you mean for these tests to be unit tests, then you probably need a mock database for them. A mock of your database, preferably in memory, will ensure that each unit test is hermetic and can't be interfered with by other unit tests.
If you mean for these tests to be integration tests, you are probably better off not using the go tool for them. What you probably want is to create a separate test binary whose execution you can control, and write your integration test scripts in there.
The good news is that creating a mock in Go is insanely easy. Change your code to take an interface with the methods you care about for the database, then write an in-memory implementation of that interface for testing purposes and pass it into the application code that you want to test.
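A minimal sketch of that pattern (interface, type, and method names here are invented for illustration):

    package api

    import "fmt"

    // Store is the narrow interface the application code depends on.
    type Store interface {
        GetUser(id int) (string, error)
    }

    // memStore is an in-memory implementation used only by the tests.
    type memStore struct {
        users map[int]string
    }

    func (m *memStore) GetUser(id int) (string, error) {
        name, ok := m.users[id]
        if !ok {
            return "", fmt.Errorf("user %d not found", id)
        }
        return name, nil
    }

Production code receives the real database-backed implementation, while each unit test constructs its own memStore, so tests never touch the shared schema and can safely run in parallel.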
Just to clarify, @Jeremy's answer is still the accepted one:
Since my integration tests were only run on one package (api), I removed the separate test binary in the end and created a naming pattern to separate the test types:
Unit tests use the normal TestX name
Integration tests use Test_X
I created shell scripts (utest.sh/itest.sh) to run either of those.
For unit tests: go test -run="^(Test|Benchmark)[^_](.*)"
For integration tests: go test -run="^(Test|Benchmark)_(.*)"
Run both using the normal go test
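The two scripts can then be one-liners; a sketch (assuming they are run from the repository root, against the api packages from the question):

    # utest.sh -- unit tests only (no underscore after Test/Benchmark)
    go test -run="^(Test|Benchmark)[^_](.*)" ./api/...

    # itest.sh -- integration tests only
    go test -run="^(Test|Benchmark)_(.*)" ./api/...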

How to stop further execution of Tests within a TestFixture if one of them fails in NUnit?

I want to stop further execution of Tests within a TestFixture if one of them fails in NUnit.
Of course, the common and advised practice is to make tests independent of each other. However, the case I would like to use NUnit for requires that all tests and test fixtures following the one that failed are not executed. In other words, a test failure causes the whole NUnit run to stop (or proceed with the next [TestFixture]; both scenarios should be configurable).
The simple, yet not acceptable solution, would be to force NUnit termination by sending a signal of some kind to the NUnit process.
Is there a way to do this in an elegant way?
I believe you can use NAnt to do this. Specifically, the nunit or nunit2 tasks have a haltonfailure parameter that allows the test run to stop if a test fails.
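A sketch of what that might look like in a NAnt build file, using the nunit2 task (the assembly path is a placeholder):

    <nunit2 haltonfailure="true">
      <formatter type="Plain" />
      <test assemblyname="build\MyProject.Tests.dll" />
    </nunit2>

With haltonfailure set to true, NAnt stops the test run as soon as a test fails instead of continuing with the remaining fixtures.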