I'm using CMake (and ctest) 3.11 with Google Test.
Is there any way to benefit from GTEST_SHUFFLE and GTEST_REPEAT while using ctest? ctest runs one unit test per process, so shuffling the tests is pretty much pointless currently. Am I missing anything?
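For reference, the environment variables do still take effect when a whole test binary is registered as a single CTest test, since Google Test reads GTEST_SHUFFLE and GTEST_REPEAT from the environment. A minimal sketch of forwarding them through CTest (the target name my_tests is made up):

    # Register the whole binary as one CTest test; Google Test then
    # shuffles/repeats the cases inside that single process.
    add_test(NAME my_tests COMMAND my_tests)

    # Google Test picks up GTEST_SHUFFLE / GTEST_REPEAT from the environment,
    # so CTest can forward them via the ENVIRONMENT test property.
    set_tests_properties(my_tests PROPERTIES
        ENVIRONMENT "GTEST_SHUFFLE=1;GTEST_REPEAT=3")

With per-test-case registration, though, each process only ever runs a single test, so there is nothing left to shuffle within it.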
Thanks
Related
Because of resource exhaustion, I need to run test cases serially, without threads (these are integration tests for CUDA code). I went through the source code (e.g. tweaking GetThreadCount()) and tried other ways to make the gmock/gtest framework run tests serially, but found no way out.
At first I could not find any command-line arguments that would influence this. It feels like the only way out is to create many binaries or to write a script that uses --gtest_filter. I would rather not mess with hidden synchronization primitives between test cases.
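If the suite is driven through CTest, one knob that may help (just a sketch; the test names are made up) is the RUN_SERIAL test property, which keeps CTest from scheduling these tests in parallel with anything else even under ctest -j:

    # CTest will never run these tests concurrently with other tests,
    # so the GPU is not shared while they execute.
    set_tests_properties(cuda_integration_smoke cuda_integration_stress
        PROPERTIES RUN_SERIAL TRUE)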
We have over 10 Google Test executables which together provide over 5000 test cases.
So far, our CTest configuration used the add_test command, which made each test executable a single test from CTest's point of view.
Recently, we used the GoogleTest module to replace the add_test command with the gtest_discover_tests command. This makes all individual test cases visible to CTest, so each of those over 5000 test cases is now a separate CTest test.
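Roughly, the change in CMake looked like this (the target name is made up):

    # Before: the whole executable was one CTest test.
    add_test(NAME my_gtest_exe COMMAND my_gtest_exe)

    # After: every TEST()/TEST_F() case becomes its own CTest test.
    include(GoogleTest)
    gtest_discover_tests(my_gtest_exe)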
This worked quite nicely: it improved parallel runs (a strong machine can run far more test cases in parallel than we have test executables) and allowed us to use the CTest command-line interface to filter test cases and so on, abstracting away the testing framework used.
However, we hit a major blocker for Valgrind runs! Now each individual test case is run separately, causing the Valgrind machinery to be set up and torn down over 5000 times. The run time ballooned from around 10 minutes for the full suite to almost 2 hours, which is unacceptable.
Now I'm wondering whether there is any way to make CTest run tests in batches from the same executable, invoking the executable only once. We would do this for Valgrind runs but not the ordinary runs. I'm afraid there is no way, especially since it would probably require support from the GoogleTest module. But maybe someone already had a similar issue and solved it somehow?
I know a workaround would be to skip CTest for Valgrind runs. Just take the test executables and run them "manually" under Valgrind. Doable, probably also in an automated way (so the list of test executables is somehow "queried", perhaps with the --show-only argument to ctest, rather than hardcoded). But it makes the interface (command line, output, etc.) less consistent.
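To make the "batches for Valgrind runs only" idea concrete, this is the kind of switch I have in mind (sketch only; the VALGRIND_RUN option and the target name are made up):

    option(VALGRIND_RUN "Register one CTest test per executable" OFF)

    if(VALGRIND_RUN)
        # One Valgrind setup/teardown per executable instead of per test case.
        add_test(NAME my_gtest_exe
                 COMMAND valgrind --error-exitcode=1 $<TARGET_FILE:my_gtest_exe>)
    else()
        # Ordinary runs: per-test-case discovery for parallelism and filtering.
        include(GoogleTest)
        gtest_discover_tests(my_gtest_exe)
    endif()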
We're configuring build steps in TeamCity. Since we have huge problems with test coverage reports (they were there and then inexplicably vanished), we're trying to find a workaround (asking, and putting a bounty on, a question directly related to our issue yielded a very cold response).
Please note that I'm not looking for an opinion but rather technical knowledge to support (or kill) our choice. And yes, I've checked the build logs - they are posted in the other thread. This question is about the (in?)sanity of trying an alternative approach. :)
Is it recommended to run a build step for test and then another build step for test coverage?
Does it even make sense to run these in separate build steps?!
What are the advantages and disadvantages of running coverage bundled with, versus separately from, the tests themselves?
Test coverage reports are generated during unit test runs. Unless your problem is with reading the generated reports, it doesn't make sense to "run them in separate build steps". Test coverage tells you what parts of your code were run WHILE the tests were running; I don't see how they could be independent.
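For example, with a gcov-based toolchain (purely an illustration, assuming GCC/Clang and CMake/CTest), the coverage counters are written by the instrumented test processes as they execute; the "coverage step" only post-processes what the test step already produced:

    # Build with coverage instrumentation (GCC/Clang).
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} --coverage")
    set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} --coverage")

    # include(CTest) enables the dashboard steps, so a single invocation
    #   ctest -T Test -T Coverage
    # runs the tests (which write the .gcda counters) and then summarizes them.
    include(CTest)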
It may make more sense to ask for help with the test coverage reports no longer being generated...
I've got about 27 tests in a unit testing project, one of which uses System.Fakes to fake a System.DateTime call. It seems that the unit test project is recreating the System.Fakes extensions with every build, meaning that nCrunch is VERY slow to show unit test results. I've not experienced this when using rhinomocks for mocking interfaces in tests, and I was wondering whether anyone was aware of a way to improve this performance when using Microsoft.Fakes.
I allocated an additional processor to nCrunch and set the fastlane thread option and this made a huge difference. Tests were as responsive as I've experienced with rhinomocks. I'm happy.
I just started working on an existing Grails project where a lot of code has already been written and not much is covered by tests. The project is using Hudson with the Cobertura plugin, which is nice. As I'm going through things, I'm noticing that even though there are no specific test classes written for some code, it is still being covered. Is there an easy way to see which tests are covering the code? It would save me a bit of time to know that.
Thanks
What you want to do is collect test coverage data per test. Then when some block of code isn't exercised by a test, you can trace it back to the test.
You need a test coverage tool which will do that; AFAIK, this is straightforward to organize. Just run one test and collect test coverage data.
However, most folks also want to know what the coverage of the application is, given all the tests. You could run the tests twice: once to get what-does-this-test-cover information, and then the whole batch to get what-does-the-batch-cover. Some tools (ours included) will let you combine the coverage from the individual tests to produce coverage for the set, so you don't have to run them twice.
Our tools have one nice extra: if you collect test-specific coverage, then when you modify the code, the tool can tell you which individual tests need to be re-run. You need a bit of straightforward scripting for this, to compare the instrumentation results for the changed sources against the results for each test.