When we run ctest with Catch2 test cases, we get "Errors while running CTest" on the last line, even though the test cases themselves ran properly. This started after adding ParseAndAddCatchTests.cmake to our CMakeLists.txt:
enable_testing()
include(ParseAndAddCatchTests.cmake)
ParseAndAddCatchTests(TauTest)
We run the test cases using the ctest command and get the following error after the tests execute:
Errors while running CTest
It turns out some test cases were failing; that is why ctest printed "Errors while running CTest".
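ctest prints that final line and exits with a non-zero status whenever any test fails. A minimal sketch of that behavior, using a failing command as a stand-in for a ctest run (the --output-on-failure flag shown in the comment is a real ctest option that prints the output of each failing test):

```shell
# In a real build tree you would run:
#   ctest --output-on-failure   # shows the output of every failing test
# ctest exits non-zero when any test fails; simulate that with `false`:
false                           # stands in for a ctest run with a failing test
status=$?
echo "exit status: $status"     # non-zero, hence "Errors while running CTest"
```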
In the build definition in TFS 2015, I've got a Command Line step that runs the following command:
xunit.console.exe \PathToTests\Tests.dll -xml \PathToResultsFolder\Results.xml
During the build, I can see the tests are being discovered and executing and everything's looking good.
But if I don't check "Continue on error" in the Command Line step, after the tests run and the result XML file has been saved, the step fails with the following error:
Task CmdLine failed. This caused the job to fail. Look at the logs for the task for more details.
But there's actually no error that I can see. The tests have run, and the XML file has been saved properly and can be published to TFS. I don't see an error like this when I run the command directly on the build machine.
Any ideas?
Suppose in CMakeLists.txt I have
add_executable(mytarget main.cpp)
enable_testing()
add_test(mytarget_test0 mytarget -option0)
Is there an easy way to run mytarget in GDB with all the command-line options from a particular CTest test (other than searching for the test in CMakeLists.txt and copy-pasting the add_test parameters to the command line manually)?
Real-life scenario: I run all tests using ctest, one fails, and I want to open it in a debugger quickly.
Other build systems have command-line options for this, for example meson test --gdb testname in Meson and bazel test --run_under=gdbserver in Bazel. I have not found anything similar for CTest.
It is possible to get the test command with its arguments:
ctest -R $regex_matching_test -V -N
As output you will get something like this:
Test project ../cmake-build-debug-gcc7
Constructing a list of tests
Done constructing a list of tests
1: Test command: ../cmake-build-debug-gcc7/my_tool "-v" "test0"
Test #1: my_tool_test0
Total Tests: 1
Then, using a regular expression, you can extract the command-line arguments for gdb.
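As a sketch of that extraction: the "Test command:" line can be stripped with sed and handed to gdb via its --args option. The sample line below is copied from the output above; in a real build tree you would capture the output of `ctest -R my_tool_test0 -V -N` instead.

```shell
# Sample "Test command:" line as printed by `ctest -V -N` above;
# in a build tree: out="$(ctest -R my_tool_test0 -V -N)"
out='1: Test command: ../cmake-build-debug-gcc7/my_tool "-v" "test0"'

# Strip the "<n>: Test command: " prefix to recover program + arguments
cmd="$(printf '%s\n' "$out" | sed -n 's/^[0-9]*: Test command: //p')"

# gdb --args treats everything after the program name as its arguments
echo "gdb --args $cmd"
```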
I use the following procedure:
make clean
make # Build target and all unit tests
make test # Run all unit tests
You will get a result list and some output, depending on the unit-test framework. Fix the issue and re-run the failed test under gdb until it succeeds. Then restart the whole procedure.
If you are using an IDE (e.g. QtCreator), it is easy to debug through the failed test case.
I am using DejaGNU to test a compiler toolchain.
I need to skip a bunch of execution tests (tests that try to run the compiled executable), but only when those tests run on a particular emulator (QEMU). It is still relevant to run those executables on hardware, so I don't want to simply remove the tests from the testsuite.
The DejaGNU documentation is pretty sparse on that topic. man runtest mentions a --ignore switch:
--ignore test1.exp test2.exp ...
Do not run the specified tests.
I just can't work out from the test results log which .exp files I need to exclude. Does anyone know how to figure that out?
Easy peasy.
First, the test log will tell you which .exp failed. For instance:
Running ${NEWLIB_PATH}/testsuite/newlib.locale/UTF-8.exp ...
Executing on host: (bla bla bla)
PASS: newlib.locale/UTF-8.c compilation
spawn (bla bla bla)
Failed to set C-UTF-8 locale.
newlib.locale/UTF-8.c: Expected: Set C-UTF-8 locale. Got: Failed to set C-UTF-8 locale.
FAIL: newlib.locale/UTF-8.c output
Notice that the first line of this log entry says Running ... UTF-8.exp. Now, to skip it, simply run DejaGnu as follows:
runtest --ignore UTF-8.exp
Is it expected that benchmarks don't run unless all tests in the package have passed?
I've looked at the testing package doc and the testing flags and I can't find it documented that benchmarks run only after all tests pass.
Is there a way to force benchmark functions to run even when some tests in the package have failed?
You can skip the failing tests using the -run flag, or choose to run no tests at all:
go test -bench . -run NONE
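This works because -run filters tests by matching its regular expression against test names: NONE matches no ordinary Test function, so no tests run while -bench still executes the benchmarks (the pattern '^$' is an equivalent common idiom). A rough simulation of that name filter, using grep on some hypothetical test names:

```shell
# -run NONE selects only tests whose names match the regex NONE;
# ordinary Go test functions (TestFoo, TestBar, ...) never do.
matches=$(printf 'TestFoo\nTestBar\n' | grep -c 'NONE' || true)
echo "tests matching -run NONE: $matches"
```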
In our Yii project we're using Jenkins CI and Codeception for different types of tests. The problem is that the Codeception report is empty, which causes the whole Jenkins build to fail.
All tests run without errors. The Jenkins "execute shell" step for Codeception is:
php codecept.phar run --xml --html
The console-output error line that causes the failure:
[xUnit] [ERROR] - The result file '/var/lib/jenkins/jobs/project/workspace/code/protected/tests/_output/report.xml' for the metric 'PHPUnit' is empty. The result file has been skipped.
I understand the simple logic: if the report is empty, the build fails. But why is the report empty? Is that a bug, or can I do something about it?
The problem was that one of our tearDown methods contained the following line:
Yii::app()->end();
which makes the Yii application die. For some reason, this prevented Codeception from generating the report.