Force error when lcov code coverage is not above threshold - cmake

I have a C++ project that uses CMake to generate makefiles. I have added a custom target to run the code coverage as follows:
add_custom_target(
coverage
COMMAND
lcov --rc lcov_branch_coverage=1 --capture --directory
${CMAKE_SOURCE_DIR} --output-file
${CMAKE_BINARY_DIR}/${PROJECT_NAME}_coverage.info --no-external
COMMAND
genhtml --branch-coverage
${CMAKE_BINARY_DIR}/${PROJECT_NAME}_coverage.info --output-directory
${CMAKE_BINARY_DIR}/coverage_report --title
"${PROJECT_NAME} code coverage report" --show-details --legend
COMMAND
echo
"Covverage report in ${CMAKE_BINARY_DIR}/coverage_report/index.html")
(Earlier in the project I set the appropriate compilation and linking flags and add the various tests using add_test.) When I run make, make test, make coverage I get the code coverage report properly. Now I am integrating this into a CI/CD pipeline and want to check that the coverage percentage of the resulting analysis is above a threshold; if it is not, I want the pipeline to fail. Is there a way to run some process that exits with an error code if the coverage is not above a given threshold?

Switching from lcov to gcovr solves this issue. gcovr has --fail-under-line <min> and --fail-under-branch <min> options that make the program exit with an error code (2 and 4 respectively) if the minimum is not reached (min is a percentage in the range 0-100).
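For example, a CI-oriented target might look roughly like this (a sketch: the target name coverage_check and the 80/60 thresholds are made up, not part of the original setup). If either threshold is missed, gcovr exits non-zero and the build of the target fails:
add_custom_target(
coverage_check
# make sure the output directory exists before gcovr writes the HTML report
COMMAND ${CMAKE_COMMAND} -E make_directory ${CMAKE_BINARY_DIR}/coverage_report
COMMAND
gcovr --root ${CMAKE_SOURCE_DIR}
--fail-under-line 80 --fail-under-branch 60 # hypothetical thresholds
--html --html-details
--output ${CMAKE_BINARY_DIR}/coverage_report/index.html
WORKING_DIRECTORY ${CMAKE_BINARY_DIR})
A CI job can then run the tests followed by cmake --build . --target coverage_check and rely on the exit code to fail the pipeline.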

Related

How to incorporate unittest in cmake project structure

I'm working on an embedded project. As part of this project, I have unit tests that use gcc and GTest. The question is: what is the best approach to incorporate these unit tests? My current implementation is that I have a separate build type called unittest; I switch on CMAKE_BUILD_TYPE to decide which sources to use and which targets to create. I see this is not a good design and it breaks multi-configuration generators. What could be an elegant solution for this?
Thanks in advance for answering.
Create separate executables for testing and use ctest:
use add_test to register each combination of executable + command line parameters as a test for ctest to run, and call enable_testing() in the top-level CMakeLists.txt.
This allows you to simply run ctest in the build dir, and you can pass a configuration to test using the -C command line option.
add_executable(MyTest test.cpp test_helper.cpp test_helper.h)
target_include_directories(MyTest PRIVATE .)
target_link_libraries(MyTest PRIVATE theLibToTest)
add_test(NAME NameOfTest COMMAND MyTest --gtest_repeat=1000)
enable_testing()
Running ctest from the build directory runs a test named NameOfTest. For multi-configuration generators you simply specify the configuration to test with the -C command line option:
ctest -C Release
Of course you can use add_test multiple times to add different test executables or the same test executable with different command line options.
Furthermore, I recommend figuring out a way of storing the results in a file, e.g. via the test parameters, since ctest's output probably won't do the trick. I'm not familiar enough with gtest to give advice on this.
Btw: ctest treats exit code 0 as success and any other exit code as failure, but I guess gtest produces executables that satisfy this property.
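(For what it's worth, Google Test itself can write results to a file via its --gtest_output flag; a small sketch, reusing the hypothetical MyTest target from above:)
add_test(NAME NameOfTest
COMMAND MyTest --gtest_output=xml:${CMAKE_BINARY_DIR}/NameOfTest.xml)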
If you don't necessarily want to build the tests at the same time as the rest, you could exclude them from all, possibly adding a custom target that depends on all of the unit tests and is also excluded from all to allow building all of them at once.
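A minimal sketch of that idea, reusing the hypothetical names from above (MyTest, theLibToTest; the unit_tests target name is made up):
# the test executable is not built as part of "all"
add_executable(MyTest EXCLUDE_FROM_ALL test.cpp test_helper.cpp test_helper.h)
target_link_libraries(MyTest PRIVATE theLibToTest)
add_test(NAME NameOfTest COMMAND MyTest)
# custom targets without the ALL keyword are also excluded from "all",
# so "make unit_tests" builds every registered test executable at once
add_custom_target(unit_tests DEPENDS MyTest)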
You could also use a cache variable to toggle testing on and off:
# enabled via -D TEST_MY_PROJECT:BOOL=1 parameter for cmake
set(TEST_MY_PROJECT 0 CACHE BOOL "enable tests for my project")
if (TEST_MY_PROJECT)
# testing setup goes here
endif()

cmake/ctest generate Test.xml (without rebuilding?) / could not load cache

I have a cmake based (C++) project which includes tests created via add_test().
The build process is basically:
cmake
make all testsuite
make test
This has worked well for some time.
I am now trying to get my build to generate a Test.xml to be submitted as part of a pipeline build. It is not working for this project.
Q1 Can I tell cmake that I want a Test.xml to be generated when it (make test) runs the tests?
Currently I believe that a Test.xml can only be created by running ctest.
Presumably ctest aggregates the test results whereas cmake just runs them blindly?
Can someone confirm or refute this?
So I am currently trying to run ctest using:
ctest -VV --debug --output-on-failure -T test
As this does create the Test.xml file I need.
I had to add include(CTest) & include(Dart) to fix "could not find DartConfiguration.tcl".
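A minimal sketch of what that top-level setup looks like (project name is a placeholder); include(CTest) defines the BUILD_TESTING option and, when it is ON, calls enable_testing() and writes DartConfiguration.tcl into the build directory:
cmake_minimum_required(VERSION 3.18)
project(MyProject CXX)
include(CTest)   # subsumes a bare enable_testing(); also generates DartConfiguration.tcl
add_subdirectory(src)
add_subdirectory(test)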
However, even if I have previously run "make test", which guarantees the build is up to date, it still tries to build and I get:
Error: could not load cache
For tests that run a compiled test (script based tests work fine).
The command it is executing is:
/opt/cmake-3.18.1/bin/cmake "--build" "<projectDir>/test/<TestName>" --target "<test_exe>"
The CMakeCache.txt & CMakeLists.txt actually live in "<projectDir>/cmake" which is why the cache cannot be loaded.
The command should be:
/opt/cmake-3.18.1/bin/cmake "--build" "<projectDir>/cmake" --target "<test_exe>"
Q2 Is there some way to tell ctest it does not need to run cmake?
My suspicion is no. Does ctest have to invoke cmake to run the tests? Does it just monitor and aggregate the output?
If I look in DartConfiguration.tcl I have:
SourceDirectory: <projectDir>/cmake
BuildDirectory: <projectDir>/cmake
ConfigureCommand: "/opt/cmake-3.18.1/bin/cmake" "<projectDir>/cmake"
MakeCommand: /opt/cmake-3.18.1/bin/cmake --build . --config "${CTEST_CONFIGURATION_TYPE}" -- -i
I am running ctest -T test rather than ctest -T build-and-test, so why is ctest trying to build at all?
I've tried setting CTEST_BUILD_COMMAND but it seems to have no effect on the generated DartConfiguration.tcl.
Some clarifications:
ctest works for me for other projects which use out of source builds
This one cannot yet use an out of source build without massive refactoring.
The source root is actually "<projectDir>"
The CMakeLists.txt lives in "<projectDir>/cmake"
Program source code lives in "<projectDir>/src"
Test scripts and code live in "<projectDir>/test"
My build worked using:
enable_testing()
I never previously had to use:
include(CTest)
include(Dart)
One or other of these creates DartConfiguration.tcl, which is just a list of "key: value" pairs. I think this is used when ctest is invoked directly, which was not the case here.
I'm not entirely clear on why, or what they are actually for, despite reading this.
Also posted on the cmake discourse channel - https://discourse.cmake.org/t/cmake-ctest-generate-test-xml-without-rebuilding-could-not-load-cache/2025
It turns out I had some pseudo tests used to build the test driver programs via cmake --build. These were explicitly and inexplicably using CMAKE_CURRENT_DIRECTORY instead of CMAKE_SOURCE_DIR. Oops.
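For anyone hitting the same thing, a hedged sketch of what such a build-the-driver pseudo test should look like, pointing cmake --build at the directory that holds CMakeCache.txt (the target name test_driver is a placeholder):
# build the driver in the configured build tree rather than in the test's source dir
add_test(NAME build_test_driver
COMMAND "${CMAKE_COMMAND}" --build "${CMAKE_BINARY_DIR}" --target test_driver)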

cmake add_test() with different runtime_output_directory [duplicate]

I'm trying out CTest in CMake in order to automatically run some of my tests using the make test target. The problem is that CMake does not "understand" that the test I want to run has to be built first, even though it is part of the project.
So I'm looking for a way to explicitly specify this dependency.
It is arguably a bug in CMake (previously tracked here) that this doesn't work out of the box. A workaround is to do the following:
add_test(TestName ExeName)
add_custom_target(check COMMAND ${CMAKE_CTEST_COMMAND}
DEPENDS ExeName)
Then you can run make check and it will compile and run the test. If you have several tests, then you would have to use DEPENDS exe1 exe2 exe3 ... in the above line.
There is actually a way to use make test. You need to define the build of the test executable as one of the tests and then add dependencies between the tests. That is:
ADD_TEST(ctest_build_test_code
"${CMAKE_COMMAND}" --build ${CMAKE_BINARY_DIR} --target test_code)
ADD_TEST(ctest_run_test_code test_code)
SET_TESTS_PROPERTIES(ctest_run_test_code
PROPERTIES DEPENDS ctest_build_test_code)
I use a variant of richq's answer. In the top-level CMakeLists.txt, I add a custom target, build_and_test, for building and running all tests:
find_package(GTest)
if (GTEST_FOUND)
enable_testing()
add_custom_target(build_and_test ${CMAKE_CTEST_COMMAND} -V)
add_subdirectory(test)
endif()
In the various sub-project CMakeLists.txt files under test/, I add each test executable as a dependency of build_and_test:
include_directories(${CMAKE_SOURCE_DIR}/src/proj1)
include_directories(${GTEST_INCLUDE_DIRS})
add_executable(proj1_test proj1_test.cpp)
target_link_libraries(proj1_test ${GTEST_BOTH_LIBRARIES} pthread)
add_test(proj1_test proj1_test)
add_dependencies(build_and_test proj1_test)
With this approach, I just need to make build_and_test instead of make test (or make all test), and it has the benefit of only building test code (and its dependencies). It's a shame I can't use the target name test. In my case, it's not so bad because I have a top-level script that does out-of-tree debug and release (and cross-compiled) builds by calling cmake and then make, and it translates test into build_and_test.
Obviously, the GTest stuff isn't required. I just happen to use/like Google Test, and wanted to share a complete example of using it with CMake/CTest. IMHO, this approach also has the benefit of allowing me to use ctest -V, which shows the Google Test output while the tests run:
1: Running main() from gtest_main.cc
1: [==========] Running 1 test from 1 test case.
1: [----------] Global test environment set-up.
1: [----------] 1 test from proj1
1: [ RUN ] proj1.dummy
1: [ OK ] proj1.dummy (0 ms)
1: [----------] 1 test from proj1 (1 ms total)
1:
1: [----------] Global test environment tear-down
1: [==========] 1 test from 1 test case ran. (1 ms total)
1: [ PASSED ] 1 test.
1/2 Test #1: proj1_test ....................... Passed 0.03 sec
If you are using CMake >= 3.7, then the recommended approach is to use fixtures:
add_executable(test test.cpp)
add_test(test_build
"${CMAKE_COMMAND}"
--build "${CMAKE_BINARY_DIR}"
--config "$<CONFIG>"
--target test
)
set_tests_properties(test_build PROPERTIES FIXTURES_SETUP test_fixture)
add_test(test test)
set_tests_properties(test PROPERTIES FIXTURES_REQUIRED test_fixture)
This does the following:
Adds a test executable target built from test.cpp
Adds a test_build "test" that runs CMake to build the target test
Marks the test_build test as a setup task of the fixture test_fixture
Adds a test test that just runs the test executable
Marks the test test as requiring the fixture test_fixture
So, every time the test test is to be run, it first runs the test_build test, which builds the necessary executable.
If you are trying to emulate make check, you may find this wiki entry useful:
http://www.cmake.org/Wiki/CMakeEmulateMakeCheck
I have just checked that it does what it says, with success (CMake 2.8.10).
Save yourself the headache:
make all test
Works out of the box for me and builds dependencies before running the tests. Given how simple this is, it almost makes the native make test functionality convenient, because it gives you the option of running the last successfully compiled tests even if your current code is broken.
For CMake 3.10 or later, another option is to use the TEST_INCLUDE_FILES directory property to set up a script that triggers a build before a test is run. In your outermost CMakeLists.txt add the following code:
set_property(DIRECTORY APPEND
PROPERTY TEST_INCLUDE_FILES "${CMAKE_CURRENT_BINARY_DIR}/BuildTestTarget.cmake")
file(WRITE "${CMAKE_CURRENT_BINARY_DIR}/BuildTestTarget.cmake"
"execute_process(COMMAND \"${CMAKE_COMMAND}\""
" --build \"${CMAKE_BINARY_DIR}\""
" --config \"\$ENV{CMAKE_CONFIG_TYPE}\")")
The actual test configuration is passed through to the build via the environment variable CMAKE_CONFIG_TYPE. Optionally you can add a --target option to only build targets required by the test.
This is what I hammered out and have been using:
set(${PROJECT_NAME}_TESTS a b c)
enable_testing()
add_custom_target(all_tests)
foreach(test ${${PROJECT_NAME}_TESTS})
add_executable(${test} EXCLUDE_FROM_ALL ${test}.cc)
add_test(NAME ${test} COMMAND $<TARGET_FILE:${test}>)
add_dependencies(all_tests ${test})
endforeach(test)
build_command(CTEST_CUSTOM_PRE_TEST TARGET all_tests)
string(CONFIGURE \"@CTEST_CUSTOM_PRE_TEST@\" CTEST_CUSTOM_PRE_TEST_QUOTED ESCAPE_QUOTES)
file(WRITE "${CMAKE_BINARY_DIR}/CTestCustom.cmake" "set(CTEST_CUSTOM_PRE_TEST ${CTEST_CUSTOM_PRE_TEST_QUOTED})" "\n")
YMMV
Derrick's answer, simplified and commented:
# It is impossible to make target "test" depend on "all":
# https://gitlab.kitware.com/cmake/cmake/-/issues/8774
# Set a magic variable in a magic file that tells ctest
# to invoke the generator once before running the tests:
file(WRITE "${CMAKE_BINARY_DIR}/CTestCustom.cmake"
"set(CTEST_CUSTOM_PRE_TEST ${CMAKE_MAKE_PROGRAM})\n"
)
It is not perfectly correct, as it does not solve the concurrency problem of running ninja all test, in case anyone does that; on the contrary, it makes things worse, because now you have two ninja processes.
(Ftr, I also shared this solution here.)
All the answers above are good, but CMake actually uses CTest as its testing tool, so the standard way (I think) to do this is:
enable_testing ()
add_test (TestName TestCommand)
add_test (TestName2 AnotherTestCommand)
Then run cmake and make to build the targets. After that, you can either run make test, or just run
ctest
and you will get the results. This is tested under CMake 2.8.
Check details at: http://cmake.org/Wiki/CMake/Testing_With_CTest#Simple_Testing
All the answers are good, but they break with the tradition of running the tests via the command make test. I've used this trick:
add_test(NAME <mytest>
WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
COMMAND sh -c "make <mytarget>; $<TARGET_FILE:<mytarget>>")
This means that the test consists of (optionally) building and then running the executable target.

Adding Condition to CMake add_definitions() command

I have two targets in my CMake file. After I run cmake .., I can either say make rsutest to build a test executable for the unit tests, or say make rsu to get a normal executable. If I am building my test target I want to add a define to the test code with the add_definitions(-DRSU_TEST) command so that some of the lines in the actual code can be ignored. How can I write my CMake file so that the #define RSU_TEST line is only active when I build the test target?
add_custom_target(rsutest COMMAND ${CMAKE_CTEST_COMMAND} DEPENDS rsu_test)
add_custom_target(rsu COMMAND ${CMAKE_CTEST_COMMAND} DEPENDS rsu_agent)
if(rsutest) # this if/else statement doesn't work and I need a condition
add_definitions(-DRSU_TEST) # which will only be true when I build rsutest
endif(rsutest)
option(WITH_TEST "BUILD THE TEST CODE" OFF)
if(WITH_TEST)
add_definitions(-DRSU_TEST)
endif()
Then if I want to build the test code I use cmake .. -DWITH_TEST=ON instead of just cmake .., and turn it off with cmake .. -DWITH_TEST=OFF when I want my normal build.
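An alternative not mentioned in the answer is to scope the define to the test target only with target_compile_definitions; a sketch, assuming the affected sources are compiled directly into the rsu_test executable target (rather than through a shared library):
# only translation units compiled as part of rsu_test see RSU_TEST
target_compile_definitions(rsu_test PRIVATE RSU_TEST)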

Report target failure in addition to translation unit compilation failure

Suppose I have hundreds of targets and some of them are not critical for the build to succeed (and for example I am using --keep-going on make or -k 9000 on ninja) and I need to figure out which cmake targets failed.
With add_custom_command() a post-build command can be added to a cmake target that prints its name like this:
success: myTarget.dll
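(The post-build hook in question might look roughly like this; the target name myTarget is a placeholder:)
add_custom_command(TARGET myTarget POST_BUILD
COMMAND ${CMAKE_COMMAND} -E echo "success: $<TARGET_FILE_NAME:myTarget>")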
But what about failures?
If linking fails then I can parse the verbose output of whatever I am using (ninja/make/msbuild) and see which target has failed.
However if compilation of a translation unit fails the only error I get is that a particular source file does not compile and figuring out which cmake target exactly has failed is harder.
The only thing I have come up with is running ninja -nv after the build has failed, which does a verbose dry run; I can then intercept the link commands and work out which cmake targets failed that way...
Any other ideas?
I ended up using dry runs of make/ninja and parsing their output.