In CMake, one can use add_subdirectory(foo EXCLUDE_FROM_ALL) to exclude the targets defined under foo from being built by default unless they are needed by some other target, like this:
add_subdirectory(foo EXCLUDE_FROM_ALL)
add_custom_target(wanted_foo_targets ALL DEPENDS
bar_tool # only build bar_tool from foo
)
Suppose foo defines two targets, bar_tool and foo_tool, and tests for each:
add_test(NAME test_foo_tool COMMAND foo_tool)
add_test(NAME test_bar_tool COMMAND bar_tool)
When you build the overall project, foo_tool is not built, because it is not a dependency of anything, but bar_tool is built because it is a dependency of wanted_foo_targets.
But CTest still registers the test_foo_tool test, which now fails because the foo_tool executable doesn't exist.
Can I remove/hide/skip tests that refer to targets that weren't built? I know I could make the test command something like [ -f foo_tool ] && foo_tool, but that would erroneously report the test as passed rather than skipped.
Bonus points if I can do this from the parent directory, without modifying how the tests are set up inside foo. In my case, foo is a project provided by a third party, and I don't want to have to maintain a patch of their code as they write new tests.
This can be done by using the SKIP_RETURN_CODE test property in combination with a small test driver script (which you have already considered using).
In the source code directory add a test driver script named test_driver.sh with the following contents:
#!/bin/sh
if [ -x "$1" ] ; then exec "$1" ; else exit 127 ; fi
The script checks whether the test executable passed as the first argument exists and is executable, and exits with code 127 (the conventional "command not found" status) if it does not.
In your CMakeLists.txt add the tests in the following way:
add_test(NAME test_foo_tool COMMAND "${CMAKE_SOURCE_DIR}/test_driver.sh" $<TARGET_FILE:foo_tool>)
set_tests_properties(test_foo_tool PROPERTIES SKIP_RETURN_CODE 127)
When you run the tests with the foo_tool executable missing, ctest reports test_foo_tool as a skipped test rather than a failed one.
$ ctest
...
Start 1: test_foo_tool
1/2 Test #1: test_foo_tool ....................***Skipped 0.01 sec
Start 2: test_bar_tool
2/2 Test #2: test_bar_tool .................... Passed 0.01 sec
100% tests passed, 0 tests failed out of 2
Total Test time (real) = 0.03 sec
The following tests did not run:
1 - test_foo_tool (Skipped)
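If foo defined many such tests, the same driver pattern could be wrapped in a small helper function. The following is only a sketch, not part of the original answer; add_skippable_test is a hypothetical name:
# Register a test that runs <target> through the driver script and treats
# exit code 127 as "skipped" rather than "failed".
function(add_skippable_test name target)
  add_test(NAME ${name}
           COMMAND "${CMAKE_SOURCE_DIR}/test_driver.sh" $<TARGET_FILE:${target}>)
  set_tests_properties(${name} PROPERTIES SKIP_RETURN_CODE 127)
endfunction()

add_skippable_test(test_foo_tool foo_tool)
add_skippable_test(test_bar_tool bar_tool)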
Tests can be disabled by setting the DISABLED property of the test in question.
You can also get a list of tests defined in a sub-directory by checking the TESTS property.
So you should be able to use get_property to get a list of all the tests for that specific directory and then disable all of them. If only a subset of the tests needs to be disabled, filter the list down with list(FILTER) to just those that need to be disabled; this is only convenient if the tests follow a naming convention.
Unfortunately, set_property can only modify the property of a test defined in the same directory, not in a sub-directory. This means that the sub-directory's CMakeLists.txt file would need to be updated.
Alternatively, if modifying the CMakeLists.txt is too much of an issue, just don't run those tests. That can be done with the ctest -E <regex> option, which excludes tests whose names match the regular expression.
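A minimal sketch of the get_property/list(FILTER)/DISABLED approach described above, assuming it is added to foo's CMakeLists.txt after the add_test() calls and that the tests to disable all contain "foo_tool" in their names:
get_property(all_tests DIRECTORY PROPERTY TESTS)
list(FILTER all_tests INCLUDE REGEX "foo_tool")
if(all_tests)
  set_tests_properties(${all_tests} PROPERTIES DISABLED TRUE)
endif()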
Related
I'm trying to build some tests with googleTest which should test some MPI parallel code. Ideally I want our CI server to execute them through ctest.
My naive approach was to simply call ctest with MPI:
mpirun -n 3 ctest
However, this results in even trivial tests failing as long as at least two are executed.
Example of trivial tests:
#include <gtest/gtest.h>

TEST(TestSuite, DummyTest1) {
EXPECT_TRUE(true);
}
TEST(TestSuite, DummyTest2) {
EXPECT_TRUE(true);
}
Am I supposed to launch ctest in some other way? Or do I have to approach this completely differently?
Additional info:
The tests are added via gtest_discover_tests().
Launching the test executable directly with MPI (mpirun -n 3 testExe) yields successful tests but I would prefer using ctest.
Version info:
googletest: 1.11.0
MPI: OpenMPI 4.0.3 AND MPICH 3.3.2
According to this CMake Forum post, which links to a merge request and an issue in the CMake GitLab repository, gtest_discover_tests() currently simply doesn't support MPI.
A possible workaround is to abuse the CROSSCOMPILING_EMULATOR property to inject the wrapper into the test command. However, this changes the whole target and not only the tests so your mileage may vary.
set_property(TARGET TheExe PROPERTY CROSSCOMPILING_EMULATOR "${MPIEXEC_EXECUTABLE};${MPIEXEC_NUMPROC_FLAG};3")
gtest_discover_tests(TheExe)
See the forum post for a full minimal example.
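For completeness, here is a rough sketch of how the pieces might fit together. It assumes the test executable is called TheExe, that test_main.cpp is a hypothetical source file containing the GoogleTest cases, and that FindMPI provides the MPIEXEC_* variables used above:
find_package(MPI REQUIRED)    # provides MPIEXEC_EXECUTABLE and MPIEXEC_NUMPROC_FLAG
find_package(GTest REQUIRED)
include(GoogleTest)

add_executable(TheExe test_main.cpp)
target_link_libraries(TheExe PRIVATE GTest::gtest_main MPI::MPI_CXX)

# Every invocation of TheExe -- including the discovery run -- now goes through mpirun.
set_property(TARGET TheExe PROPERTY
  CROSSCOMPILING_EMULATOR "${MPIEXEC_EXECUTABLE};${MPIEXEC_NUMPROC_FLAG};3")
gtest_discover_tests(TheExe)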
How can I make ctest run each of my tests in a separate transient/temporary directory each time I run $ make test (or $ ctest)?
Let's say I have a test executable built from mytest.cpp that does two things: 1) it asserts that a file called "foo.txt" does not exist in the current working directory, and then 2) creates a file called "foo.txt". Now I want to be able to run make test multiple times without mytest failing.
I want to achieve this by asking cmake/ctest to run every test (in this example, one test) in its own temporary directory.
I've searched for solutions online and read through the CTest documentation, in particular the add_test docs. I can provide a WORKING_DIRECTORY to add_test, which will run my test in that directory. However, any changes made to this folder persist across multiple make test runs, so the second time I run make test the test fails.
Here's a minimal, reproducible way of triggering the failure. One source file mytest.cpp that defines the test executable and a CMakeLists.txt file to build the code.
# CMakeLists.txt
cmake_minimum_required (VERSION 2.8)
project (CMakeHelloWorld)
enable_testing()
add_executable (mytest mytest.cpp)
add_test( testname mytest)
and
// mytest.cpp
#include <sys/stat.h>
#include <unistd.h>
#include <cassert>
#include <string>
#include <fstream>
inline bool exists (const std::string& name) {
std::ifstream f(name.c_str());
return f.good();
}
int main() {
assert(exists("foo.txt") == false);
std::ofstream outfile ("foo.txt");
outfile.close();
}
Series of commands that generate the failure:
$ mkdir build
$ cd build
$ cmake ..
$ make
$ make test
$ make test
This will give
Running tests...
Test project /path/to/project
Start 1: testname
1/1 Test #1: testname .........................***Exception: Other 0.25 sec
0% tests passed, 1 tests failed out of 1
Total Test time (real) = 0.26 sec
The following tests FAILED:
1 - testname (OTHER_FAULT)
Errors while running CTest
make: *** [test] Error 8
Typically, a testing framework provides some kind of pre-test (setup) and post-test (cleanup) tasks. And so does CTest.
Adding the following CTestCustom.ctest file to the build directory of your example makes the test succeed every time:
# CTestCustom.ctest
set(CTEST_CUSTOM_POST_TEST "rm foo.txt")
For more complex tasks, you may want to call a custom script instead, but that's the way to hook it in.
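For example, a sketch of delegating the cleanup to a separate CMake script (cleanup.cmake is a hypothetical file placed next to CTestCustom.ctest in the build directory; relative paths work here because ctest's working directory is the build directory, just as in the rm example above):
# CTestCustom.ctest
set(CTEST_CUSTOM_POST_TEST "cmake -P cleanup.cmake")

# cleanup.cmake
file(REMOVE foo.txt)    # remove whatever the tests left behind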
Suppose in CMakeLists.txt I have
add_executable(mytarget main.cpp)
enable_testing()
add_test(mytarget_test0 mytarget -option0)
Is there any easy way to run mytarget in GDB with all the command-line options from some particular CTest test? (Other than searching for the test in CMakeLists.txt and then copy-pasting the add_test parameters to the command line manually?)
Real life scenario: I run all tests using ctest, one fails, I want to open it in debugger quickly.
In other build systems there are command-line parameters for running under gdb, for example meson test --gdb testname in Meson, or bazel --run_under=gdbserver in Bazel. I did not find anything similar for CTest.
It is possible to get the test command with its arguments:
ctest -R $regex_matching_test -V -N
As output you will get something like:
Test project ../cmake-build-debug-gcc7
Constructing a list of tests
Done constructing a list of tests
1: Test command: ../cmake-build-debug-gcc7/my_tool "-v" "test0"
Test #1: my_tool_test0
Total Tests: 1
Then, using a regexp (or simply copying the "Test command:" line), it is possible to grab the command line and arguments for gdb.
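For instance, handing the command printed above to gdb (the path and arguments are just the ones from the sample output):
gdb --args ../cmake-build-debug-gcc7/my_tool -v test0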
I use the following procedure:
make clean
make # Build target and all unit tests
make test # Run all unit tests
You will get a result list and some output, depending on the unit test framework. Fix the issue and re-run the failed test within gdb until it succeeds, then restart the whole procedure.
In case you are using an IDE (e.g. QtCreator) it is easy to debug through the failed test case.
How do I explicitly tell my go test command to run only the tests for the main package and not the other packages in my source directory?
At the moment it's working with $ go test -v. But I am using goconvey as well, and it seems to run recursively. According to this page https://github.com/smartystreets/goconvey/wiki/Profiles I can have a file where I pass arguments into the go test command. I know you can use go test -v ./... for a recursive run, or go test -c packagename/..., but how do I do it just for main?
Profiles are one way to accomplish this, but you can also specify a 'depth' for the runner:
$ goconvey -depth=0
A value of 0 limits the runner to the working directory.
Run goconvey -help for details.
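If you are invoking go test directly rather than through the goconvey runner, the same restriction is just a matter of which package paths you pass: no argument (or an explicit ".") tests only the package in the current directory, while ./... recurses. For example:
go test -v .      # only the package in the current directory
go test -v ./...  # the current package and everything below it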
I have an autotools based project. When the "make check" target fails, I see something like this:
============================================================================
Testsuite summary for
============================================================================
# TOTAL: 1
# PASS: 0
# SKIP: 0
# XFAIL: 0
# FAIL: 1
# XPASS: 0
# ERROR: 0
============================================================================
See tests/python/test-suite.log
============================================================================
make[5]: *** [test-suite.log] Error 1
Which is fine, as long as it does not happen on a restricted builder (in this case a Launchpad buildd), where I can see only the build log.
The Makefile.am in the affected directory looks like this:
EXTRA_DIST = test_inetdomain.py test_zone.py test_matcher.py test_dispatch.py test_nat.py test_log.py test_session.py test_stacking.py
noinst_SCRIPTS = runtest.sh
TESTS = runalltests.sh
.PHONY: mkzorp
mkzorp:
make -C ../../zorp
runtest.sh: mkzorp
What should I write in Makefile.am, what parameters should I give to autoreconf/autoconf, or what environment variables should I set, to see the test output on stdout/stderr?
The "check" target is a bit rigid in automake, since it depends on its behavior for other things.
The easiest solution seems to customize the test driver, since it's the one responsible for redirecting the output to log files. I'm assuming you are using the default test driver, so open it up around line 111 or so, and you should see something like this:
# Report outcome to console.
echo "${col}${res}${std}: $test_name"
Just append the following lines somewhere after that (for example at the end of the script):
if test $estatus != 0
then
sed -e "s/^/$log_file:\t/" $log_file # but a cat is fine too
fi
Remember to add this customized test driver to your source control.
For the record, here's something that won't work: defining a check-local rule that runs cat $(TEST_LOGS) from the makefile itself. Automake stops as soon as the tests fail, and the only hook you can attach to, check-local, is run only if check-TESTS succeeds.
I experienced the same issue with the Fedora Koji package builder, and the solution was simply to print out the file after a failed test run.
In the case of a Fedora RPM it was easy:
%check
make check || cat ./test-suite.log
Since then the contents of the test log have been part of the build log whenever the tests failed. So easy and so useful.
I expect the procedure with Launchpad and other builders will be very similar.
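For a Debian/Launchpad package the analogous trick would go into debian/rules. This is only a sketch, assuming a debhelper-based package; the log path is the one from the question, and the trailing false keeps the target failing so the test failure is still reported:
override_dh_auto_test:
	dh_auto_test || { cat tests/python/test-suite.log; false; }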
Older automake versions used to send the test output to stdout. I guess this changed when parallel tests were introduced, so I ran into the same problem as Árpád did.
You can enable the old behavior using the serial test harness by adding the following to Makefile.am:
AUTOMAKE_OPTIONS = serial-tests