We have over 10 Google Test executables which together provide over 5000 test cases.
So far, our CTest configuration used the add_test command, which made each test executable a single test from CTest's point of view.
Recently, we used the GoogleTest module to replace the add_test command with gtest_discover_tests. This way all individual test cases are visible to CTest: each of those 5000+ test cases is now a separate test in CTest.
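In CMake terms, the switch looks roughly like this (a sketch; my_tests stands in for one of our real executables, and GTest::gtest_main assumes the usual imported GTest targets):
include(GoogleTest)
add_executable(my_tests my_tests.cpp)
target_link_libraries(my_tests GTest::gtest_main)
# before: one CTest test per executable
# add_test(NAME my_tests COMMAND my_tests)
# after: one CTest test per Google Test case
gtest_discover_tests(my_tests)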
This worked quite nicely, improving parallel runs (a strong machine runs far more test cases than we have test executables) and letting us use the CTest command-line interface to filter test cases etc., abstracting away the testing framework used.
However, we hit a major blocker for Valgrind runs! Now each individual test case is run separately, causing the Valgrind machinery to be set up and torn down over 5000 times. The time jumped from around 10 minutes for the full suite to almost 2 hours, which is unacceptable.
Now, I'm wondering whether there is any way to make CTest run tests in batches from the same executable by invoking the executable only once. We would do this for Valgrind runs but not the ordinary runs. I'm afraid there is no way, especially since it would probably require support from the GoogleTest module. But maybe someone has already had a similar issue and solved it somehow?
I know a workaround would be to skip CTest for Valgrind runs: just take the test executables and run them "manually" under Valgrind. Doable, probably also in an automated way (so that the list of test executables is "queried" somehow, perhaps with ctest's --show-only argument, rather than hardcoded), as sketched below. But it makes the interface (command line, output, etc.) less consistent.
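A rough sketch of that workaround (the --show-only=json-v1 output needs CMake 3.14 or newer, and jq is used here purely for illustration):
# list the commands CTest would run and deduplicate the executables
ctest --show-only=json-v1 | jq -r '.tests[].command[0]' | sort -u > test_exes.txt
# run each executable once under Valgrind
while read -r exe; do
    valgrind --error-exitcode=1 "$exe" || exit 1
done < test_exes.txt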
Related
Because of resource exhaustion there is a need to run test cases serially, without threads (these are integration tests for CUDA code). I went through the source code (e.g. tweaking GetThreadCount()) and tried to find other ways to influence the gmock/gtest framework to run tests serially, but found no way out.
At first I did not find any command-line arguments that could influence it either. It feels like the only way out is to create many binaries, or to create a script that utilizes --gtest_filter, as sketched below. I would not like to mess with secret synchronization primitives between test cases.
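A sketch of that --gtest_filter script (cuda_tests is a made-up binary name; the loop runs every test case in its own process, one after another):
# enumerate suites ("Suite.") and cases ("  Case") and rebuild qualified names
./cuda_tests --gtest_list_tests |
awk '/^[^ ]/ { suite = $1 } /^  / { print suite $1 }' |
while read -r test; do
    ./cuda_tests --gtest_filter="$test" || exit 1
done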
I wonder if it's possible to configure the number of threads for testing in a lit.cfg file.
lit offers a command-line flag for specifying the number of threads:
llvm/utils/lit/lit.py -j1 <test directory>
However, I'm not sure how to do this in a lit.cfg file. I want to force all tests in a subdirectory to be run with -j1 - not sure if this is possible.
Edit: for reference, I'm working on the Swift codebase which has a large test suite (4000+ tests) with multiple test subdirectories.
I want to run just one subdirectory with -j1 and the rest with the default number of threads (-j12 for my machine).
I was wondering about that too a while back, but I don't think there is one because of this line here. Usually, the main project compilation times dwarf the lit tests execution time.
It is easy to change, but I'd suggest using your build configuration to do this (e.g. make or cmake). So, make test could execute something like lit -j $(nproc) underneath.
Edit (after OP update):
I haven't worked with the swift repo, but maybe you could hack your way around it. One thing I can see is that you could influence the LIT_ARGS cmake variable by appending the options you want to it.
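For example, something along these lines when configuring the build (the exact variable name is a guess on my part; plain LLVM spells it LLVM_LIT_ARGS, and the Swift build may differ):
cmake <source dir> -DLLVM_LIT_ARGS="-sv -j1"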
Now to force a single process execution for a specific directory, you may add a lit.local.cfg that sets the singleProcess flag. This seems to override multi-thread execution:
config.singleProcess = True
I have tests which share a common resource and can't be executed concurrently. These tests fail with cargo test, but work with RUST_TEST_THREADS=1 cargo test.
I can modify the tests to wait on a global mutex, but I don't want to clutter them if there is a simpler way to force cargo to set this environment variable for me.
As of Rust 1.18, there is no such thing. In fact, there is not even a simpler option to disable parallel testing.
Source
However, what might help you is cargo test -- --test-threads=1, which is the recommended way of doing what you are doing, rather than the RUST_TEST_THREADS env var. Keep in mind that this only sets the number of threads used for running tests, in addition to the main thread.
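For completeness, the global-mutex approach mentioned in the question can also be kept fairly unobtrusive; a minimal sketch (DB_LOCK is an illustrative name, and the const Mutex::new in a static needs a much newer Rust than 1.18):
use std::sync::Mutex;

// one shared lock guarding the common resource
static DB_LOCK: Mutex<()> = Mutex::new(());

#[test]
fn writes_to_shared_resource() {
    // taking the guard serializes this test against the others that take it
    let _guard = DB_LOCK.lock().unwrap();
    // ... test body ...
}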
I am running the GCC testsuite and I want to know the time elapsed for each individual test case. GCC uses DejaGnu for its test suite, and I know that time can be used in scripts to get the time of a test case. I am wondering if there is any flag that I can pass to runtest that forces timing for all test cases (without changing the test scripts).
I don't know of a generic way.
DejaGNU does not really have a built-in notion of the boundaries of a test. For example, it's reasonably common for a single conceptual test to call "pass" or "fail" several times. E.g., in GCC, a compilation test may check for several warnings from a given source file -- but each separate warning, and also the check for excess warnings, would be a separate pass or fail. However, these would all arise from a single invocation of GCC.
I think there are two approaches that you can take.
You can hack the .exp files you care about and use knowledge of what they are doing to track the times you are interested in.
You can run a single .exp file in isolation and time how long it takes. This is less useful in general, but it is what I did when making the GDB test suite more fully parallelizable.
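For the second approach, wrapping a single .exp run in time is usually enough; for GCC that might look like this (dg.exp is just an example, and check-gcc/RUNTESTFLAGS are the usual GCC testsuite hooks):
time make -C gcc check-gcc RUNTESTFLAGS="dg.exp"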
I have implemented several packages for a web API, each with their own test cases. When each package is tested using go test ./api/pkgname the tests pass. If I want to run all tests at once with go test ./api/... test cases always fail.
In each test case, I recreate the entire schema using DROP SCHEMA public CASCADE followed by CREATE SCHEMA public and apply all migrations. The test suite reports errors back at random, saying a relation/table does not exist, so I guess each test suite (per package) is run in parallel somehow, thus messing up the DB state.
I tried to pass along some test flags like go test -cpu 1 -parallel 0 ./src/api/... with no success.
Could the problem here be tests running in parallel, and if yes, how can I force serial execution?
Update:
Currently I use this workaround to run the tests, but I still wonder if there's a better solution
find <dir> -type d -exec go test {} \;
As others have pointed out, -parallel doesn't do the job (it only works within packages). However, you can use the flag -p=1 to run through the package tests in series. This is documented here:
http://golang.org/src/cmd/go/testflag.go
but (afaict) not on the command line, go help, etc. I'm not sure it is meant to stick around (although I'd argue that if it is removed, -parallel should be fixed.)
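In this case that would be something like:
go test -p=1 ./api/...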
The go tool is provided to make running unit tests easier, using the convention that *_test.go files contain the unit tests. Because it assumes they are unit tests, it also assumes they are hermetic. It sounds like your tests either aren't unit tests or they are but violate the assumptions that a unit test should fulfill.
In the case that you mean for these tests to be unit tests, you probably need a mock database for them. A mock, preferably in memory, of your database will ensure that each unit test is hermetic and can't be interfered with by other unit tests.
In the case that you mean for these tests to be integration tests, you are probably better off not using the go tool for them. What you probably want is to create a separate test binary whose running you can control, and write your integration test scripts in there.
The good news is that creating a mock in Go is insanely easy: change your code to take an interface with the methods you care about for the database, then write an in-memory implementation of that interface for testing purposes and pass it into the application code you want to test (see the sketch below).
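A minimal sketch of that idea (UserStore and its method are made-up names):
package api

import "fmt"

// the interface your application code depends on
type UserStore interface {
	GetUser(id int) (string, error)
}

// in-memory fake used only by the tests
type fakeUserStore struct {
	users map[int]string
}

func (f *fakeUserStore) GetUser(id int) (string, error) {
	name, ok := f.users[id]
	if !ok {
		return "", fmt.Errorf("user %d not found", id)
	}
	return name, nil
}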
Just to clarify, @Jeremy's answer is still the accepted one:
Since my integration tests were only run on one package (api), I removed the separate test binary in the end and created a naming pattern to separate the test types:
Unit tests use the normal TestX name
Integration tests use Test_X
I created shell scripts (utest.sh/itest.sh) to run either of those.
For unit tests: go test -run="^(Test|Benchmark)[^_](.*)"
For integration tests: go test -run="^(Test|Benchmark)_(.*)"
Run both using the normal go test
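For example, the two scripts can be as small as this (the ./... package pattern is an assumption about the layout):
# utest.sh - unit tests only
go test -run="^(Test|Benchmark)[^_](.*)" ./...
# itest.sh - integration tests only
go test -run="^(Test|Benchmark)_(.*)" ./...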