I'm working on an embedded project. As part of it, I have unit tests that use gcc and GTest. The question is: what is the best approach to incorporate these unit tests? My current implementation uses a separate build type called unittest: I branch on CMAKE_BUILD_TYPE and decide which sources to use and which targets to create. I can see this is not a good design, and it breaks multi-configuration generators. What would be an elegant solution for this?
Thanks in advance for answering.
Create separate executables for testing and use ctest:
Use add_test() to register a combination of executable and command-line parameters as a test for ctest to run, and call enable_testing() in the top-level CMakeLists.txt.
This allows you to simply run ctest in the build directory, and you can pass the configuration to test via the -C command line option.
# build the test executable and link it against the library under test
add_executable(MyTest test.cpp test_helper.cpp test_helper.h)
target_include_directories(MyTest PRIVATE .)
target_link_libraries(MyTest PRIVATE theLibToTest)

# register the executable (plus command line options) as a test for ctest
add_test(NAME NameOfTest COMMAND MyTest --gtest_repeat=1000)

# must be called in the top-level CMakeLists.txt so ctest picks up the tests
enable_testing()
Running ctest from the build directory runs a test named NameOfTest. For multi-configuration generators you simply specify the configuration to test with the -C command line option:
ctest -C Release
Of course you can use add_test multiple times to add different test executables or the same test executable with different command line options.
Furthermore, I recommend figuring out a way to store the test results in a file (e.g. via the command-line parameters you pass in add_test), since ctest's own output probably won't do the trick. I'm not familiar enough with gtest to give advice on this.
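For what it's worth, GoogleTest itself can write its results to an XML file via its --gtest_output flag, so one option (a sketch, not something I've verified in this setup) is to pass that flag in add_test:
# write the GoogleTest results as an XML report into the build directory
add_test(NAME NameOfTestWithReport
         COMMAND MyTest --gtest_output=xml:${CMAKE_BINARY_DIR}/MyTest-report.xml)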
Btw: ctest treats exit code 0 as success and any other exit code as failure, but I guess gtest produces executables that satisfy this property.
If you don't necessarily want to build the tests at the same time as the rest, you can exclude them from the all target. You could also add a custom target that depends on all of the unit tests (and is itself excluded from all) so you can still build all of them at once, as sketched below.
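A minimal sketch of that idea; the target names unit_tests, MyTest and MyOtherTest and the source files are placeholders:
# test executables are excluded from "all"...
add_executable(MyTest EXCLUDE_FROM_ALL test.cpp)
add_executable(MyOtherTest EXCLUDE_FROM_ALL other_test.cpp)

# ...but an aggregate target (custom targets are not part of "all" by default)
# still lets you build all of them at once with one command
add_custom_target(unit_tests)
add_dependencies(unit_tests MyTest MyOtherTest)

add_test(NAME MyTest COMMAND MyTest)
add_test(NAME MyOtherTest COMMAND MyOtherTest)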
You could also use a cache variable to toggle testing on and off:
# enabled via -D TEST_MY_PROJECT:BOOL=1 parameter for cmake
set(TEST_MY_PROJECT 0 CACHE BOOL "enable tests for my project")
if (TEST_MY_PROJECT)
# testing setup goes here
endif()
I have a cmake based (C++) project which includes tests created via add_test().
The build process is basically:
cmake
make all testsuite
make test
This has worked well for some time.
I am now trying to get my build to generate a Test.xml to be submitted as part of a pipeline build. It is not working for this project.
Q1 Can I tell cmake that I want a Test.xml to be generated when it (make test) runs the tests?
Currently I believe that a Test.xml can only be created by running ctest.
Presumably ctest aggregates the test results whereas cmake just runs them blindly?
Can someone confirm or refute this?
So I am currently trying to run ctest using:
ctest -VV --debug --output-on-failure -T test
As this does create the Test.xml file I need.
I had to add include(CTest) & include(Dart) to fix "could not find DartConfiguration.tcl".
However, even if I have previously run "make test", which guarantees the build is up to date, ctest still tries to build and I get:
Error: could not load cache
This happens for tests that run a compiled executable (script-based tests work fine).
The command it is executing is:
/opt/cmake-3.18.1/bin/cmake "--build" "<projectDir>/test/<TestName>" --target "<test_exe>"
The CMakeCache.txt & CMakeLists.txt actually live in "<projectDir>/cmake" which is why the cache cannot be loaded.
The command should be:
/opt/cmake-3.18.1/bin/cmake "--build" "<projectDir>/cmake" --target "<test_exe>"
Q2 Is there some way to tell ctest it does not need to run cmake?
My suspicion is no. Does ctest have to invoke cmake to run the tests? Does it just monitor and aggregate the output?
If I look in DartConfiguration.tcl I have:
SourceDirectory: <projectDir>/cmake
BuildDirectory: <projectDir>/cmake
ConfigureCommand: "/opt/cmake-3.18.1/bin/cmake" "<projectDir>/cmake"
MakeCommand: /opt/cmake-3.18.1/bin/cmake --build . --config "${CTEST_CONFIGURATION_TYPE}" -- -i
I am running ctest -T test rather than ctest -T build-and-test, so why is ctest trying to build at all?
I've tried setting CTEST_BUILD_COMMAND but it seems to have no effect on the generated DartConfiguration.tcl.
Some clarifications:
ctest works for me on other projects, which use out-of-source builds.
This one cannot yet use an out-of-source build without massive refactoring.
The source root is actually "<projectDir>"
The CMakeLists.txt lives in "<projectDir>/cmake"
Program source code lives in "<projectDir>/src"
Test scripts and code live in "<projectDir>/test"
My build worked using:
enable_testing()
I never previously had to use:
include(CTest)
include(Dart)
One or other of these creates the DartConfiguration.tcl, which is just a list of "key: value" pairs. I think this is used when ctest is invoked directly, which was not the case here.
I'm not entirely clear on why they are needed or what they are actually for, despite reading this.
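As far as I can tell, the usual top-level pattern with the CTest module is something like the following (the test subdirectory name is just an example); include(CTest) apparently calls enable_testing() for you and defines the BUILD_TESTING option:
# top-level CMakeLists.txt
include(CTest)               # also calls enable_testing() and adds the BUILD_TESTING option
if(BUILD_TESTING)
    add_subdirectory(test)   # example location of the test definitions
endif()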
Also posted on the cmake discourse channel - https://discourse.cmake.org/t/cmake-ctest-generate-test-xml-without-rebuilding-could-not-load-cache/2025
It turns out I had some pseudo tests used to build the test driver programs via cmake --build. These were explicitly and inexplicably using CMAKE_CURRENT_DIRECTORY instead of CMAKE_SOURCE_DIR. Oops.
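For reference, a minimal sketch of such a "build the test driver" pseudo test after the fix; the test and target names are hypothetical, and it assumes this in-source setup where CMAKE_SOURCE_DIR is also the directory containing CMakeCache.txt:
# hypothetical pseudo test that builds a test driver before the real tests run;
# CMAKE_SOURCE_DIR points at the directory holding CMakeCache.txt here,
# so cmake --build can load the cache
add_test(NAME build_test_driver
         COMMAND "${CMAKE_COMMAND}" --build "${CMAKE_SOURCE_DIR}" --target test_driver)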
I think I do have a CMake order/parallelism problem.
In my build process, I need to build a tool which is then later used by some other target. This tool is a separate project with a CMakeLists.txt file like this:
project (package-tool LANGUAGES CXX)
set (SOURCES package_tool.cpp)
...
Later in the build, this top level target is referenced by some other target:
...
add_custom_command (OUTPUT "${DST_FILE}"
                    COMMAND ${PACKAGE_COMMAND} "${DST_FILE}"
                    COMMAND package-tool.exe -e "${DST_FILE}"
                    DEPENDS ${PACKAGE_DEPENDENCIES} package-tool)
...
I use ninja for building, and the dependencies (ninja -t depends) look correct. The build commands (ninja -t commands) also make sense. But from time to time the build fails, with a message that does not make sense:
This version of package-tool.exe is not compatible with the version
of Windows you're running.
Because the build runs in parallel (32 processes), I suspect that the package-tool target has not finished building when the generated exe is used by the second target, which might lead to this confusing error message. Again, most of the time the build succeeds, but every 10th or 20th run it fails with that message.
So now my question is:
Is there a way to wait for a tool/target to finish building in a parallel CMake/Ninja build?
Or how do I correctly handle building build tools as part of the same build process?
Thank you in advance!
Actually depend on the executable file and run the executable file, not the target, and don't concern yourself with any .exe suffix. Also, better not to assume that the add_custom_command will run in CMAKE_RUNTIME_OUTPUT_DIRECTORY; if you depend on a specific directory, set it explicitly with WORKING_DIRECTORY.
add_custom_command (
    OUTPUT "${DST_FILE}"
    COMMAND ${PACKAGE_COMMAND} "${DST_FILE}"
    COMMAND $<TARGET_FILE:package-tool> -e "${DST_FILE}"
    DEPENDS ${PACKAGE_DEPENDENCIES} $<TARGET_FILE:package-tool>
)
I'm trying CTest in CMake in order to automatically run some of my tests using the make test target. The problem is that CMake does not "understand" that the tests I want to run have to be built first, even though they are part of the project.
So I'm looking for a way to explicitly specify this dependency.
It is arguably a bug in CMake (previously tracked here) that this doesn't work out of the box. A workaround is to do the following:
add_test(TestName ExeName)
add_custom_target(check COMMAND ${CMAKE_CTEST_COMMAND}
DEPENDS ExeName)
Then you can run make check and it will compile and run the test. If you have several tests, then you would have to use DEPENDS exe1 exe2 exe3 ... in the above line.
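A sketch of that pattern with two hypothetical test executables (exe1, exe2, built from made-up source files):
add_executable(exe1 test1.cpp)   # hypothetical test sources
add_executable(exe2 test2.cpp)

add_test(Test1 exe1)
add_test(Test2 exe2)

# "make check" builds both test executables, then runs ctest
add_custom_target(check COMMAND ${CMAKE_CTEST_COMMAND}
                  DEPENDS exe1 exe2)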
There is actually a way to use make test. You need to define the build of the test executable as one of the tests and then add dependencies between the tests. That is:
ADD_TEST(ctest_build_test_code
         "${CMAKE_COMMAND}" --build ${CMAKE_BINARY_DIR} --target test_code)
ADD_TEST(ctest_run_test_code test_code)
SET_TESTS_PROPERTIES(ctest_run_test_code
                     PROPERTIES DEPENDS ctest_build_test_code)
I use a variant of richq's answer. In the top-level CMakeLists.txt, I add a custom target, build_and_test, for building and running all tests:
find_package(GTest)
if (GTEST_FOUND)
enable_testing()
add_custom_target(build_and_test ${CMAKE_CTEST_COMMAND} -V)
add_subdirectory(test)
endif()
In the various sub-project CMakeLists.txt files under test/, I add each test executable as a dependency of build_and_test:
include_directories(${CMAKE_SOURCE_DIR}/src/proj1)
include_directories(${GTEST_INCLUDE_DIRS})
add_executable(proj1_test proj1_test.cpp)
target_link_libraries(proj1_test ${GTEST_BOTH_LIBRARIES} pthread)
add_test(proj1_test proj1_test)
add_dependencies(build_and_test proj1_test)
With this approach, I just need to make build_and_test instead of make test (or make all test), and it has the benefit of only building test code (and its dependencies). It's a shame I can't use the target name test. In my case, it's not so bad because I have a top-level script that does out-of-tree debug and release (and cross-compiled) builds by calling cmake and then make, and it translates test into build_and_test.
Obviously, the GTest stuff isn't required. I just happen to use/like Google Test, and wanted to share a complete example of using it with CMake/CTest. IMHO, this approach also has the benefit of allowing me to use ctest -V, which shows the Google Test output while the tests run:
1: Running main() from gtest_main.cc
1: [==========] Running 1 test from 1 test case.
1: [----------] Global test environment set-up.
1: [----------] 1 test from proj1
1: [ RUN ] proj1.dummy
1: [ OK ] proj1.dummy (0 ms)
1: [----------] 1 test from proj1 (1 ms total)
1:
1: [----------] Global test environment tear-down
1: [==========] 1 test from 1 test case ran. (1 ms total)
1: [ PASSED ] 1 test.
1/2 Test #1: proj1_test ....................... Passed 0.03 sec
If you are using CMake >= 3.7, then the recommended approach is to use fixtures:
add_executable(test test.cpp)

add_test(test_build
         "${CMAKE_COMMAND}"
         --build "${CMAKE_BINARY_DIR}"
         --config "$<CONFIG>"
         --target test
)
set_tests_properties(test_build PROPERTIES FIXTURES_SETUP test_fixture)

add_test(test test)
set_tests_properties(test PROPERTIES FIXTURES_REQUIRED test_fixture)
This does the following:
Adds a test executable target built from test.cpp
Adds a test_build "test" that runs CMake to build the target test
Marks the test_build test to be a setup task of fixture test_fixture
Adds a test test that just runs the test executable
Marks the test test to need fixture test_fixture.
So, every time test test is to be run, it first runs test test_build, which builds the necessary executable.
If you are trying to emulate make check, you may find this wiki entry useful:
http://www.cmake.org/Wiki/CMakeEmulateMakeCheck
I have just checked that it does what it says, with success (CMake 2.8.10).
Save yourself the headache:
make all test
Works out of the box for me and builds the dependencies before running the tests. Given how simple this is, it almost makes the native make test behaviour convenient: it gives you the option of running the last successfully built tests even if your current code doesn't compile.
For CMake 3.10 or later, another option is to use the TEST_INCLUDE_FILES directory property to set up a script that triggers a build before a test is run. In your outermost CMakeLists.txt add the following code:
set_property(DIRECTORY APPEND
             PROPERTY TEST_INCLUDE_FILES "${CMAKE_CURRENT_BINARY_DIR}/BuildTestTarget.cmake")

file(WRITE "${CMAKE_CURRENT_BINARY_DIR}/BuildTestTarget.cmake"
     "execute_process(COMMAND \"${CMAKE_COMMAND}\""
     " --build \"${CMAKE_BINARY_DIR}\""
     " --config \"\$ENV{CMAKE_CONFIG_TYPE}\")")
The actual test configuration is passed through to the build via the environment variable CMAKE_CONFIG_TYPE. Optionally, you can add a --target option to only build the targets required by the tests, as in the sketch below.
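A variant of the same snippet, restricted to a hypothetical aggregate target named unit_tests:
# same script as above, but only building the (hypothetical) unit_tests target
file(WRITE "${CMAKE_CURRENT_BINARY_DIR}/BuildTestTarget.cmake"
     "execute_process(COMMAND \"${CMAKE_COMMAND}\""
     " --build \"${CMAKE_BINARY_DIR}\""
     " --config \"\$ENV{CMAKE_CONFIG_TYPE}\""
     " --target unit_tests)")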
This is what I hammered out and have been using:
set(${PROJECT_NAME}_TESTS a b c)

enable_testing()
add_custom_target(all_tests)

foreach(test ${${PROJECT_NAME}_TESTS})
    add_executable(${test} EXCLUDE_FROM_ALL ${test}.cc)
    add_test(NAME ${test} COMMAND $<TARGET_FILE:${test}>)
    add_dependencies(all_tests ${test})
endforeach(test)

# Generate the command line that builds the all_tests target and store it, quoted,
# in CTestCustom.cmake so that ctest builds the tests before running them.
build_command(CTEST_CUSTOM_PRE_TEST TARGET all_tests)
string(CONFIGURE \"@CTEST_CUSTOM_PRE_TEST@\" CTEST_CUSTOM_PRE_TEST_QUOTED ESCAPE_QUOTES)
file(WRITE "${CMAKE_BINARY_DIR}/CTestCustom.cmake" "set(CTEST_CUSTOM_PRE_TEST ${CTEST_CUSTOM_PRE_TEST_QUOTED})" "\n")
YMMV
Derrick's answer, simplified and commented:
# It is impossible to make target "test" depend on "all":
# https://gitlab.kitware.com/cmake/cmake/-/issues/8774
# Set a magic variable in a magic file that tells ctest
# to invoke the generator once before running the tests:
file(WRITE "${CMAKE_BINARY_DIR}/CTestCustom.cmake"
"set(CTEST_CUSTOM_PRE_TEST ${CMAKE_MAKE_PROGRAM})\n"
)
It is not perfectly correct, as it does not solve the concurrency problem of running ninja all test, in case anyone does that; if anything it makes that worse, because now you have two ninja processes.
(Ftr, I also shared this solution here.)
All the answers above are good, but CMake actually uses CTest as its testing tool, so the standard way (I think) to do this is:
enable_testing ()
add_test (TestName TestCommand)
add_test (TestName2 AnotherTestCommand)
Then run cmake and make to build the targets. After that, you can either run make test, or just run
ctest
and you will get the results. This was tested with CMake 2.8.
Check details at: http://cmake.org/Wiki/CMake/Testing_With_CTest#Simple_Testing
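For a concrete (hypothetical) version of the placeholders above, where each test command is simply an executable target built from a single source file:
enable_testing()

# two hypothetical test programs; each target name doubles as the test command
add_executable(math_test math_test.cpp)
add_executable(string_test string_test.cpp)

add_test(NAME math_test COMMAND math_test)
add_test(NAME string_test COMMAND string_test)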
All the answers are good, but they break with the tradition of running the tests via make test. I've used this trick:
add_test(NAME <mytest>
         WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
         COMMAND sh -c "make <mytarget>; $<TARGET_FILE:<mytarget>>")
This means that the test consists of (re)building the executable target, if necessary, and then running it.
My project links to a third-party library that comes with a valgrind suppression file, as well as a CMake script. The script stores the location of the suppression file in a CMake cache variable.
I have written a CTest script that runs continuous tests on my project and submits the results to a dashboard. I would like to use the suppression file during the memory-checking stage. Unfortunately, the CTest script does not seem to know anything about the CMake cache. How can I access the CMake cache variable from my CTest script?
You can't directly access CMake cache variables from the ctest -S script.
However, you could possibly:
(1) include the third-party CMake script in the ctest -S script with include() (after the update step, so the source tree is up to date)
(2) read the CMakeCache.txt file after the configure step to pull out the cache variable of interest
(3) add code to your CMakeLists.txt file to write out a mini-script that contains just the information you're looking for
For (1), the code would be something like:
include(${CTEST_SOURCE_DIRECTORY}/path/to/3rdParty/script.cmake)
This would only be realistically possible if the script does only simple things like set variable values that you can then reference. If it does any CMake-configure-time things like find_library or add_executable, then you shouldn't do this.
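For illustration, a third-party script simple enough for this approach might contain nothing more than variable assignments (the names here are made up):
# hypothetical contents of path/to/3rdParty/script.cmake:
# it only sets variables, so it is safe to include() from a ctest -S script
set(THIRDPARTY_VALGRIND_SUPPRESSIONS "${CMAKE_CURRENT_LIST_DIR}/valgrind.supp")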
For (2):
file(STRINGS ${CTEST_BINARY_DIRECTORY}/CMakeCache.txt result
     REGEX "^CURSES_LIBRARY:FILEPATH=(.*)$")
message("result='${result}'")
string(REGEX REPLACE "^CURSES_LIBRARY:FILEPATH=(.*)$" "\\1"
       filename "${result}")
message("filename='${filename}'")
For (3):
In the CMakeLists.txt file:
file(WRITE "${CMAKE_BINARY_DIR}/mini-script.cmake" "
set(supp_file \"${supp_file_location}\")
")
In your ctest -S script, after the ctest_configure call:
include("${CTEST_BINARY_DIRECTORY}/mini-script.cmake")
message("supp_file='${supp_file}'")
# use supp_file as needed in the rest of your script
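Whichever variant you use, the extracted path can then be handed to the memory-checking step; a sketch, assuming the value ended up in supp_file:
# in the ctest -S script, before the memcheck step
set(CTEST_MEMORYCHECK_SUPPRESSIONS_FILE "${supp_file}")
ctest_memcheck()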
The tests should be configured in your CMakeLists.txt file via the command ADD_TEST(). The CTest script then simply calls ctest_test() to run all the configured tests. By doing it this way, you get access to the cache variables, plus you can just run make test at any point to run the tests. Save the CTest script for the nightly testing and/or Dashboard submissions.
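To make that concrete, here is a minimal sketch of such a ctest -S driver script; the directories, generator, and dashboard model are placeholders you would adapt:
# minimal-nightly.ctest - run with: ctest -S minimal-nightly.ctest
set(CTEST_SOURCE_DIRECTORY "/path/to/source")   # placeholder
set(CTEST_BINARY_DIRECTORY "/path/to/build")    # placeholder
set(CTEST_CMAKE_GENERATOR "Unix Makefiles")     # placeholder

ctest_start(Nightly)   # or Experimental / Continuous
ctest_configure()      # runs cmake; cache variables exist after this step
ctest_build()
ctest_test()           # runs everything registered with add_test()
ctest_submit()         # optional: send results to the dashboard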