autotools output test-suite.log contents on test failure

I have an autotools based project. When the "make check" target fails, I see something like this:
============================================================================
Testsuite summary for
============================================================================
# TOTAL: 1
# PASS: 0
# SKIP: 0
# XFAIL: 0
# FAIL: 1
# XPASS: 0
# ERROR: 0
============================================================================
See tests/python/test-suite.log
============================================================================
make[5]: *** [test-suite.log] Error 1
This is fine, except when it happens on a restricted builder (in this case a Launchpad buildd), where I can only see the build log.
The Makefile.am in the affected directory looks like this:
EXTRA_DIST = test_inetdomain.py test_zone.py test_matcher.py test_dispatch.py test_nat.py test_log.py test_session.py test_stacking.py
noinst_SCRIPTS = runtest.sh
TESTS = runalltests.sh
.PHONY: mkzorp
mkzorp:
make -C ../../zorp
runtest.sh: mkzorp
What should I write in Makefile.am, what parameters should I pass to autoreconf/autoconf, or what environment variables should I set to see the test output on stdout/stderr?

The "check" target is a bit rigid in automake, since other parts of the build depend on its behavior.
The easiest solution seems to be to customize the test driver, since it is the one responsible for redirecting the output to log files. I'm assuming you are using the default test driver, so open it up around line 111 or so, and you should see something like this:
# Report outcome to console.
echo "${col}${res}${std}: $test_name"
Just append these lines somewhere after that (for example at the end of the script):
if test "$estatus" -ne 0
then
  # prefix each line with the log file name; a plain cat is fine too
  sed -e "s/^/$log_file:\t/" "$log_file"
fi
Remember to add this customized test driver to your source control.
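If you would rather not edit the generated file in place, here is a sketch of one way to wire in a private copy; the build-aux/ path and the LOG_DRIVER override are assumptions about a typical automake layout, so adjust them to your tree:
cp build-aux/test-driver build-aux/my-test-driver    # then edit the copy as above
# and point the parallel harness at it from Makefile.am:
#   LOG_DRIVER = $(SHELL) $(top_srcdir)/build-aux/my-test-driver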
For the record, here's something that won't work: defining a check-local rule that runs cat $(TEST_LOGS) from the makefile itself, because automake stops as soon as a test fails, and the only hook you can use, check-local, is run only if check-TESTS succeeds.

I experienced the same issue with the Fedora Koji package builder, and the solution was simply to print out the file after a failed test run.
In the case of a Fedora RPM it was easy:
%check
make check || cat ./test-suite.log
Since then, the contents of the test log have been part of the build log whenever the tests failed. So easy and so useful.
I expect the procedure with Launchpad and other builders will be very similar.
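For a generic make-driven builder, a hedged variant of the same trick is to dump the log but still fail the build, so a test failure is not silently turned into success (the log path below is the one from the question and may differ in your tree):
make check || { cat tests/python/test-suite.log; exit 1; }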

Older automake versions used to send output to stdout. I guess this changed when parallel tests were introduced so I ran into the same problem as Árpád did.
You can enable the old behavior using the serial test harness by adding the following to Makefile.am:
AUTOMAKE_OPTIONS = serial-tests
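After adding that line you need to regenerate the build system before the option takes effect; a minimal sketch, with configure options omitted:
autoreconf -fi
./configure
make check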

Related

Building a kernel module on Centos 7 with a CMake file

Sorry for the length. I have tried to include as much information as possible.
A device I work with randomly fails to start at boot. This is a well-known issue with the device, and there are lots of posts on the web with no known solution except rebooting.
So the task is to look in dmesg for a certain string whose presence means the device has failed to start and the system needs rebooting. A simple call to system() with reboot seems to do the job.
A unit test that proves this would be nice. The idea is to look for a non-existent UUID in the dmesg log to prove that it fails to find one, then to write a different UUID to the log and search for that, proving it works in both cases.
The first thing was to hit up Google: I found you can write to the kernel log with echo '<4>Foo: Message' | sudo tee /dev/kmsg, which works from a terminal, but the sudo may cause issues in the unit test.
The next thing I looked at was accessing it via code. The unit tests are written in C++ and the library is googletest.
Most posts talk about writing a Makefile and kbuild. I am working in a build system where we have cmake called from a shell script.
After several hours of searching and trying things, I decided to ask here.
I have installed
kernel.x86_64 3.10.0-1062.el7 #anaconda
kernel.x86_64 3.10.0-1160.21.1.el7 #updates
kernel-devel.x86_64 3.10.0-1160.21.1.el7 #updates
kernel-devel.x86_64 3.10.0-1160.24.1.el7 #updates
kernel-headers.x86_64 3.10.0-1160.21.1.el7 #updates
kernel-tools.x86_64 3.10.0-1160.21.1.el7 #updates
kernel-tools-libs.x86_64 3.10.0-1160.21.1.el7
uname -r gives 3.10.0-1160.21.1.el7.x86_64 which seems to suggest I have the kernel headers and devel files installed.
Doing a find /. -name module.h lists:
...
/usr/src/kernels/3.10.0-1160.24.1.el7.x86_64/arch/x86/include/asm/module.h
/usr/src/kernels/3.10.0-1160.24.1.el7.x86_64/include/asm-generic/module.h
/usr/src/kernels/3.10.0-1160.24.1.el7.x86_64/include/linux/module.h
/usr/src/kernels/3.10.0-1160.24.1.el7.x86_64/include/trace/events/module.h
/usr/src/kernels/3.10.0-1160.24.1.el7.x86_64/include/uapi/linux/module.h
/usr/src/kernels/3.10.0-1160.21.1.el7.x86_64/arch/x86/include/asm/module.h
/usr/src/kernels/3.10.0-1160.21.1.el7.x86_64/include/asm-generic/module.h
/usr/src/kernels/3.10.0-1160.21.1.el7.x86_64/include/linux/module.h
/usr/src/kernels/3.10.0-1160.21.1.el7.x86_64/include/trace/events/module.h
/usr/src/kernels/3.10.0-1160.21.1.el7.x86_64/include/uapi/linux/module.h
...
It may be that I am trying to link files in /3.10.0-1160.24.1.el7.x86_64/ when I should be linking to /3.10.0-1160.21.1.el7.x86_64/. Listing installed yum packages via sudo yum list | grep linux-d returns:
libselinux-devel.x86_64 2.5-15.el7 #base
libhbalinux-devel.i686 1.0.17-2.el7 base
libhbalinux-devel.x86_64 1.0.17-2.el7 base
libselinux-devel.i686 2.5-15.el7 base
syslinux-devel.x86_64 4.05-15.el7 base
My CMakeLists.txt looks like:
project( X_test )
set( TEST_SOURCE
X_test.cpp
)
execute_process(COMMAND uname -r OUTPUT_VARIABLE uname_r OUTPUT_STRIP_TRAILING_WHITESPACE)
include_directories(/usr/src/kernels/${uname_r}/include)
link_directories(/lib/modules/${uname_r}/build)
add_library(source-lib STATIC source.c)
Anything else in there has been commented out to prevent confusion.
Without the include_directories or link_directories lines, I get an error on the line:
#include <linux/module.h>
With those lines in I get the error:
In file included from /usr/src/kernels/3.10.0-1160.21.1.el7.x86_64/include/linux/kernel.h:6:0,
from /usr/src/kernels/3.10.0-1160.21.1.el7.x86_64/include/linux/cache.h:4,
from /usr/src/kernels/3.10.0-1160.21.1.el7.x86_64/include/linux/time.h:4,
from /usr/src/kernels/3.10.0-1160.21.1.el7.x86_64/include/linux/stat.h:18,
from /usr/src/kernels/3.10.0-1160.21.1.el7.x86_64/include/linux/module.h:10,
from /home/user/git/asdo/Services/DCO-3303/test/source.c:1:
/usr/src/kernels/3.10.0-1160.21.1.el7.x86_64/include/linux/linkage.h:7:25: fatal error: asm/linkage.h: No such file or directory
#include <asm/linkage.h>
The code I am compiling is the standard printk(KERN_INFO "Hello world\n"); which you can see here.
How do I go about compiling code that uses a kernel call through CMake?

Skipping a CTest if the target was not built

In CMake, one can use add_subdirectory(foo EXCLUDE_FROM_ALL) to exclude the targets defined under foo from being built by default unless they are needed by some other target, like this:
add_subdirectory(foo EXCLUDE_FROM_ALL)
add_custom_target(wanted_foo_targets ALL DEPENDS
bar_tool # only build bar_tool from foo
)
Suppose foo defines two targets, bar_tool and foo_tool, and tests for each:
add_test(NAME test_foo_tool COMMAND foo_tool)
add_test(NAME test_bar_tool COMMAND bar_tool)
When you build the overall project, foo_tool is not built, because it is not a dependency of anything, but bar_tool is built because it is a dependency of wanted_foo_targets.
But CTest still registers the test_foo_tool test, which now fails because foo_tool doesn't exist.
Can I remove/hide/skip tests that refer to targets that weren't built? I know I could make the test command something like [ -f foo_tool ] && foo_tool but this will erroneously state that the test passed.
Bonus points if I can do this from the parent directory, without modifying how the tests are set up inside foo. In my case, foo is a project provided by a third party, and I don't want to have to maintain a patch of their code as they write new tests.
This can be done by using the SKIP_RETURN_CODE test property in combination with a small test driver script (which you have already considered using).
In the source code directory add a test driver script named test_driver.sh with the following contents:
#!/bin/sh
if [ -x "$1" ] ; then exec "$1" ; else exit 127 ; fi
The script checks for existence of a test executable passed as the first argument and exits with error code 127 (aka "command not found") if it does not exist.
In your CMakeLists.txt add the tests in the following way:
add_test(NAME test_foo_tool COMMAND "${CMAKE_SOURCE_DIR}/test_driver.sh" $<TARGET_FILE:foo_tool>)
set_tests_properties(test_foo_tool PROPERTIES SKIP_RETURN_CODE 127)
When you run the tests with the executable foo_tool missing, ctest will report the test_foo_tool as a skipped test but not as a failed test.
$ ctest
...
Start 1: test_foo_tool
1/2 Test #1: test_foo_tool ....................***Skipped 0.01 sec
Start 2: test_bar_tool
2/2 Test #2: test_bar_tool .................... Passed 0.01 sec
100% tests passed, 0 tests failed out of 2
Total Test time (real) = 0.03 sec
The following tests did not run:
1 - test_foo_tool (Skipped)
Tests can be disabled by setting the DISABLED property of the test in question.
You can also get a list of tests defined in a sub-directory by checking the TESTS property.
So you should be able to use get_property to get a list of all the tests for that specific directory and then disable all of them. If only a subset of the tests needs to be disabled, filter the list down to just those using list(FILTER); this is only convenient if a naming convention is followed.
Unfortunately, set_property can only modify the property of a test within the same directory, not a sub-directory. This means that the sub-directory's CMakeLists.txt file would need to be updated.
Alternatively, if modifying the CMakeLists.txt is too much of an issue, just don't run those tests. That could be done by using the ctest -E regex option to exclude them.
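For example, assuming the tests you want to skip share a common name prefix (a naming-convention assumption, not something CTest enforces), you could exclude them like this:
ctest -E '^test_foo_'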

How to run the gem5 unit tests?

gem5 has several tests in the source tree, and there is some documentation at http://www.gem5.org/Regression_Tests, but those docs are not very clear.
What tests are there, and how do I run them?
Unit vs regression tests
gem5 has two kinds of tests:
regression: run some workload (full system or syscall emulation) on the entire simulator
unit: test only a tiny part of the simulator, without running the entire simulator binary
We will cover both in this answer.
Regression tests
2019 regression tests
A new testing framework was added in 2019 and it is documented at: https://gem5.googlesource.com/public/gem5/+/master/TESTING.md
Before sending patches, you basically want to run:
cd tests
./main.py run -j `nproc` -t `nproc`
This will:
build gem5 for the actively supported ISAs (X86, ARM, RISCV) with nproc threads, due to -j
download the binaries required to run the tests from gem5.org, e.g. http://www.gem5.org/dist/current/arm/ (see also http://gem5.org/Download). It is not currently possible to download them outside of the source tree, which is bad if you have a bunch of git worktrees lying around.
run the quick tests on nproc threads, due to -t, which should finish in a few minutes
You can achieve the same as the previous command without cd by passing the tests/ directory as an argument:
./main.py run -j `nproc` -t `nproc` tests
but I wish neither were necessary: https://gem5.atlassian.net/browse/GEM5-397
This is exactly what the automated upstream precommit tests are running, as can be seen from tests/jenkins/presubmit.sh.
Stdout contains clear result output of the form:
Test: cpu_test_DerivO3CPU_FloatMM-ARM-opt Passed
Test: cpu_test_DerivO3CPU_FloatMM-ARM-opt-MatchStdout Passed
Test: realview-simple-atomic-ARM-opt Failed
Test: realview-simple-atomic-dual-ARM-opt Failed
and details about each test can be found under:
tests/.testing-results/
e.g.:
.testing-results/SuiteUID:tests-gem5-fs-linux-arm-test.py:realview-simple-atomic-ARM-opt/TestUID:tests-gem5-fs-linux-arm-test.py:realview-simple-atomic-ARM-opt:realview-simple-atomic-ARM-opt/
although we only see some minimal stdout/stderr output there, which doesn't even show the gem5 stdout. The stderr file does, however, contain the full command:
CalledProcessError: Command '/path/to/gem5/build/ARM/gem5.opt -d /tmp/gem5outJtSLQ9 -re '/path/to/gem5/tests/gem5/fs/linux/arm/run.py /path/to/gem5/master/tests/configs/realview-simple-atomic.py' returned non-zero exit status 1
so you can remove -d and -re and re-run that to see what is happening, which is potentially slow, but I don't see another way.
If a test gets stuck running forever, you can find its raw command with Linux' ps aux command since processes are forked for each run.
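For example (filtering on gem5.opt is an assumption based on the commands shown above):
ps aux | grep gem5.opt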
Request to make it easier to get the raw run commands from stdout directly: https://gem5.atlassian.net/browse/GEM5-627
Request to properly save stdout: https://gem5.atlassian.net/browse/GEM5-608
To further stress test a single ISA, you can run all tests for one ISA with:
cd tests
./main.py run -j `nproc` -t `nproc` --isa ARM --length long --length quick
Each test is classified as either long or quick, and passing --length twice as above runs both.
long tests are typically very similar to the default quick ones, but using more detailed and therefore slower models, e.g.
tests/quick/se/10.mcf/ref/arm/linux/simple-atomic/ is quick with a faster atomic CPU
tests/long/se/10.mcf/ref/arm/linux/minor-timing/ is long with a slower Minor CPU
Tested in gem5 69930afa9b63c25baab86ff5fbe632fc02ce5369.
2019 regression tests: run just one test
List all available tests:
./main.py list --length long --length quick
This shows both suites and tests, e.g.:
SuiteUID:tests/gem5/cpu_tests/test.py:cpu_test_AtomicSimpleCPU_Bubblesort-ARM-opt
TestUID:tests/gem5/cpu_tests/test.py:cpu_test_AtomicSimpleCPU_Bubblesort-ARM-opt:cpu_test_AtomicSimpleCPU_Bubblesort-ARM-opt
TestUID:tests/gem5/cpu_tests/test.py:cpu_test_AtomicSimpleCPU_Bubblesort-ARM-opt:cpu_test_AtomicSimpleCPU_Bubblesort-ARM-opt-MatchStdout
And now you can run just one test with --uid:
./main.py run -j `nproc` -t `nproc` --isa ARM --uid SuiteUID:tests/gem5/cpu_tests/test.py:cpu_test_AtomicSimpleCPU_FloatMM-ARM
A bit confusingly, --uid must point to a SuiteUID, not TestUID.
Then, when you run the tests and one of them fails, and you want to re-run just the failing one, the output gives you a line like:
Test: cpu_test_DerivO3CPU_FloatMM-ARM-opt Passed
and the only way to run just that test is to grep for that string in the output of ./main.py list, since cpu_test_DerivO3CPU_FloatMM-ARM-opt is not a full test ID, which is very annoying.
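A hedged sketch of that grep round trip, using the test name from the failure line above:
./main.py list --length quick --length long | grep cpu_test_DerivO3CPU_FloatMM-ARM-opt
# then pass the matching SuiteUID to --uid as shown earlier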
2019 regression tests: out of tree
By default, tests/main.py places the build at gem5/build inside the source tree. Testing an out of tree build is possible with --build-dir:
./main.py run -j `nproc` -t `nproc` --isa ARM --length quick --build-dir path/to/my/build
which places the build in path/to/my/build/ARM/gem5.opt instead for example.
If your build is already done, save a few scons seconds with --skip-build option as well:
./main.py run -j `nproc` -t `nproc` --isa ARM --length quick --build-dir path/to/my/build --skip-build
Note however that --skip-build also skips the downloading of test binaries. TODO patch that.
2019 regression tests: custom binary download directory
Since https://gem5-review.googlesource.com/c/public/gem5/+/24525 you can use the --bin-path option to specify where the test binaries are downloaded, otherwise they just go into the source tree.
This allows you to reuse the large binaries such as disk images across tests in multiple worktrees in a single machine, saving time and space.
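For example (the download directory here is just an assumption; pick any path shared between your worktrees):
./main.py run -j `nproc` -t `nproc` --length quick --bin-path /data/gem5-test-binaries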
Pre-2019 regression tests
This way of running tests is deprecated and will be removed.
The tests are run directly with scons.
But since the test commands are a bit long, there is even an in-tree utility that generates the test commands for you.
For example, to get the command to run X86 and ARM quick tests, run:
./util/regress -n --builds X86,ARM quick
The other options besides quick are long, or all to run both long and quick at the same time.
With -n it just prints the test commands; without it, it actually runs them.
This outputs something like:
scons \
--ignore-style \
--no-lto \
build/X86/gem5.debug \
build/ARM/gem5.debug \
build/X86/gem5.fast \
build/ARM/gem5.fast \
build/X86/tests/opt/quick/se \
build/X86/tests/opt/quick/fs \
build/ARM/tests/opt/quick/se \
build/ARM/tests/opt/quick/fs
TODO: why does it build gem5.debug and gem5.fast, but then run an /opt/ test?
So note how this would both:
build the gem5 executables, e.g. build/X86/gem5.debug
run the tests, e.g. build/X86/tests/opt/quick/fs
Or get the command to run all tests for all archs:
./util/regress -n all
Then, if you just want to run one of those types of tests, e.g. the quick X86 ones, you can copy-paste the scons command for just those tests:
scons --ignore-style build/X86/tests/opt/quick/se
Running the tests with an out of tree build works as usual by magically parsing the target path: How to build gem5 out of tree?
scons --ignore-style /any/path/that/you/want/build/X86/tests/opt/quick/se
or you can pass the --build-dir option to util/regress:
./util/regress --build-dir /any/path/that/you/want all
The tests that boot Linux, on the other hand, require a Linux image with a specific name in the M5_PATH, which is also annoying.
Running everything would however be very slow, and not something that you can run after each commit: you are more likely to want to run just the quick tests for your ISA of interest.
Pre-2019 regression tests: Run just one test
If you just append the path under tests in the source tree to the test commands, it runs all the tests under a given directory.
For example, we had:
scons --ignore-style build/X86/tests/opt/quick/se
and we notice that the following path exists under tests in the source tree:
quick/se/00.hello/ref/x86/linux/simple-atomic/
so we massage the path by removing ref to get the final command:
scons build/X86/tests/opt/quick/se/00.hello/x86/linux/simple-atomic
Pre-2019 regression tests: Find out the exact gem5 CLI of the command run
When you run the tests, they output the m5out path to stdout.
Inside the m5out path there is a simout file with the emulator stdout, which contains the full gem5 command line used.
For example:
scons --ignore-style build/X86/tests/opt/quick/se
outputs:
Running test in /any/path/that/you/want/build/ARM/tests/opt/quick/se/00.hello/arm/linux/simple-atomic.
and the file:
/any/path/that/you/want/build/ARM/tests/opt/quick/se/00.hello/arm/linux/simple-atomic/simout
contains:
command line: /path/to/mybuild/build/ARM/gem5.opt \
-d /path/to/mybuild/build/ARM/tests/opt/quick/fs/10.linux-boot/arm/linux/realview-simple-atomic \
--stats-file 'text://stats.txt?desc=False' \
-re /path/to/mysource/tests/testing/../run.py \
quick/fs/10.linux-boot/arm/linux/realview-simple-atomic
Pre-2019 regression tests: Re-run just one test
If you just run a test twice, e.g. with:
scons build/ARM/tests/opt/quick/fs/10.linux-boot/arm/linux/realview-simple-atomic
scons build/ARM/tests/opt/quick/fs/10.linux-boot/arm/linux/realview-simple-atomic
the second run will not really re-run the test, but rather just compare the stats from the previous run.
To actually re-run the test, you must first clear the stats generated from the previous run before re-running:
rm -rf build/ARM/tests/opt/quick/fs/10.linux-boot/arm/linux/realview-simple-atomic
Pre-2019 regression tests: Get test results
Even this is messy... scons does not return 0 for success and 1 for failure, so you have to parse the logs. One easy way is to run:
scons --ignore-style build/X86/tests/opt/quick/se |& grep -E '^\*\*\*\*\* '
which contains three types of results: PASSED, CHANGED or FAILED.
CHANGED is mostly for stat comparisons that differed by a large amount, but those are generally very hard to maintain and permanently broken, so you should focus on FAILED.
Note that most tests currently rely on SPEC2000 and fail unless you have access to this non-free benchmark...
Unit tests
The unit tests compile to executables separate from gem5, and each just tests a tiny bit of the code.
There are currently two types of tests:
UnitTest: old and deprecated, should be converted to GTest
GTest: new and good. Uses Google Test.
They are placed next to the class that they test, e.g.:
src/base/cprintf.cc
src/base/cprintf.hh
src/base/cprintftest.cc
Compile and run all the GTest unit tests:
scons build/ARM/unittests.opt
Sample output excerpt:
build/ARM/base/cprintftest.opt --gtest_output=xml:build/ARM/unittests.opt/base/cprintftest.xml
Running main() from gtest_main.cc
[==========] Running 4 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 4 tests from CPrintf
[ RUN ] CPrintf.Misc
[ OK ] CPrintf.Misc (0 ms)
[ RUN ] CPrintf.FloatingPoint
[ OK ] CPrintf.FloatingPoint (0 ms)
[ RUN ] CPrintf.Types
[ OK ] CPrintf.Types (0 ms)
[ RUN ] CPrintf.SpecialFormatting
[ OK ] CPrintf.SpecialFormatting (0 ms)
[----------] 4 tests from CPrintf (0 ms total)
[----------] Global test environment tear-down
[==========] 4 tests from 1 test case ran. (0 ms total)
[ PASSED ] 4 tests.
Compile and run just one test file:
scons build/ARM/base/cprintf.test.opt
./build/ARM/base/cprintf.test.opt
List available test functions from a test file, and run just one of them:
./build/ARM/base/cprintftest.opt --gtest_list_tests
./build/ARM/base/cprintftest.opt --gtest_filter='CPrintf.SpecialFormatting'
Tested on gem5 200281b08ca21f0d2678e23063f088960d3c0819, August 2018.
Unit tests with SimObjects
As of 2019, the unit tests are quite limited, because devs haven't yet found a proper way to test SimObjects in isolation; they make up the bulk of the simulator and are tightly bound to the rest of it. This unmerged patch attempted to address that: https://gem5-review.googlesource.com/c/public/gem5/+/15315
It might be possible to work around it with Google Mock, which is already present in-tree, but it is not clear if anyone has the patience to mock out enough of SimObject to actually make such tests.
I believe the only practical solution is to embed all tests into gem5.opt, and then have a --test <testname> option that runs tests instead of running a simulation. This way we get a single binary without duplicating binary sizes, but we can still access everything.
Related issue: https://gem5.atlassian.net/browse/GEM5-433
Continuous integration
20.1 Nightlies enabled
As mentioned at: https://www.gem5.org/project/2020/10/01/gem5-20-1.html a Jenkins that runs the long regressions was added at: https://jenkins.gem5.org/job/Nightly/
2019 CI
Around 2019-04, a precommit CI was added that runs after every pull request once the maintainer gives +1.
It uses a magic, semi-internal, Google-provided Jenkins setup called Kokoro, which provides little visibility into its configuration.
See this for example: https://gem5-review.googlesource.com/c/public/gem5/+/18108 That server does not currently run nightlies. The entry point is tests/jenkins/presubmit.sh.
Nightlies were just disabled to start with.
What is the environment of the 2019 CI?
In-tree Docker images are used: https://askubuntu.com/questions/350475/how-can-i-install-gem5/1275773#1275773
Pre-2019 CI update
There was a server running somewhere that ran the quick tests for all archs nightly and posted the results to the dev mailing list, adding to the endless noise of that enjoyable list :-)
Here is a sample run: https://www.mail-archive.com/gem5-dev@gem5.org/msg26855.html
As of 2019Q1, gem5 devs are trying to set up an automated magic Google Jenkins to run precommit tests; a link to a prototype can be found at: https://gem5-review.googlesource.com/c/public/gem5/+/17456/1#message-e9dceb1d3196b49f9094a01c54b06335cea4ff88 This new setup uses the new testing system in tests/main.py.
Pre-2019 CI: Why are so many tests CHANGED all the time?
As of August 2018, many tests have been CHANGED for a long time.
This is because stats can vary due to a very wide number of complex factors. Some of those changes may be more accurate, others no one knows, others are just bugs.
Changes happen so often that devs haven't found the time to properly understand and justify them.
If you really care about why they changed, the best advice I have is to bisect them.
But generally your best bet is to just re-run your old experiments on the newer gem5 version, and compare everything there.
gem5 is not a cycle accurate system simulator, so absolute values or small variations are not meaningful in general.
This also teaches us that results obtained with small margins are generally not meaningful for publication, since the noise is too great. What that error margin is, I don't know.

How to run CTest test in debugger

Suppose in CMakeLists.txt I have
add_executable(mytarget main.cpp)
enable_testing()
add_test(mytarget_test0 mytarget -option0)
Is there an easy way to run mytarget in GDB with all the command line options from a particular CTest test? (Other than searching for the test in CMakeLists.txt and copy-pasting the add_test parameters to the command line manually?)
Real life scenario: I run all tests using ctest, one fails, and I want to open it in the debugger quickly.
In other build systems there are command line parameters for using gdb, for example in Meson meson test --gdb testname, or in Bazel bazel --run_under=gdbserver. I did not find anything similar for CTest.
It is possible to get the test command with its arguments:
ctest -R $regex_matching_test -V -N
As output you will get something like:
Test project ../cmake-build-debug-gcc7
Constructing a list of tests
Done constructing a list of tests
1: Test command: ../cmake-build-debug-gcc7/my_tool "-v" "test0"
Test #1: my_tool_test0
Total Tests: 1
Then, using a regexp, it is possible to grab the command line arguments for gdb.
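A hedged sketch of that idea as a shell snippet (the test name and the sed pattern are assumptions based on the -V -N output format shown above):
cmd=$(ctest -R mytarget_test0 -V -N | sed -n 's/^[0-9]*: Test command: //p')
eval gdb --args "$cmd"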
I use the following procedure:
make clean
make       # build the target and all unit tests
make test  # run all unit tests
You will get a result list and some output, depending on the unit test framework. Fix the issue and re-run the failed test within gdb until it succeeds. Then restart the whole procedure.
In case you are using an IDE (e.g. QtCreator) it is easy to debug through the failed test case.

skip a subset of DejaGNU testsuite conditionally

I am using DejaGNU to test a compiler toolchain.
I need to skip a bunch of execution tests, which try to run the compiled executable, but only when those tests run on a particular emulator (QEMU). It is still relevant to run those executables on hardware, so I don't want to simply remove the tests from the testsuite.
The DejaGNU documentation is pretty sparse on that topic. man runtest mentions a --ignore switch:
--ignore test1.exp test2.exp ...
Do not run the specified tests.
I just can't work out which .exp I need to exclude by looking at the test results log. Does anyone know how to figure that out?
Easy peasy.
First, the test log will tell you which .exp file failed. For instance:
Running ${NEWLIB_PATH}/testsuite/newlib.locale/UTF-8.exp ...
Executing on host: (bla bla bla)
PASS: newlib.locale/UTF-8.c compilation
spawn (bla bla bla)
Failed to set C-UTF-8 locale.
newlib.locale/UTF-8.c: Expected: Set C-UTF-8 locale. Got: Failed to set C-UTF-8 locale.
FAIL: newlib.locale/UTF-8.c output
Notice the first line of this log entry says Running UTF-8.exp .... Now, to skip it, simply run DejaGnu as follows:
runtest --ignore UTF-8.exp
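If you only want to skip it when the tests run under QEMU, a hedged sketch is to make the --ignore conditional in whatever script drives runtest (the board name qemu is an assumption about your site/board setup):
if [ "$TARGET_BOARD" = "qemu" ]; then
    runtest --target_board="$TARGET_BOARD" --ignore UTF-8.exp
else
    runtest --target_board="$TARGET_BOARD"
fi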