How do you run doc tests in feature-gated implementations?

I noticed that if you have a feature gate like #[cfg(feature = "nightly")] around a trait implementation, the doctest is skipped by cargo test, even on a nightly rustc. I tried cargo test --all-features, but the results were the same. (Commenting out the gate results in the tests being run, of course.) I didn't see anything in the Rust Reference about this, either.
How do you ensure that tests on feature-gated implementations run?
For reference, here's my working Rust version.
rustc 1.17.0-nightly (c0b7112ba 2017-03-02)
binary: rustc
commit-hash: c0b7112ba246d96f253ba845d91f36c0b7398e42
commit-date: 2017-03-02

Features that are not part of the default set must always be explicitly opted into. You do that by passing a flag on the command line or, if the feature belongs to a dependency, by listing it in that dependency's entry in the [dependencies] section.
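For example, to opt in on the command line:
$ cargo test --features nightly
If the feature belonged to a dependency instead, you would enable it in Cargo.toml (the dependency name here is hypothetical):
[dependencies]
some-dep = { version = "0.1", features = ["nightly"] }
Here's a complete demonstration: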
src/lib.rs
/// ```
/// assert!(false);
/// ```
#[cfg(feature = "nightly")]
trait Foo {}
Cargo.toml
[package]
name = "wuzzy"
version = "0.1.0"
authors = ["An Devloper <an.devloper#example.com>"]
[features]
nightly = []
Running cargo test --features=nightly causes the assertion to fire, while a plain cargo test skips the test. This all works on stable Rust.
$ cargo test --features=nightly
Doc-tests wuzzy
running 1 test
test src/lib.rs - Foo (line 1) ... FAILED
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured
$ cargo test
Doc-tests wuzzy
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured

How to run test cases for a binary in Makefile

There is a small project which produces a binary application. The source code is C; I'm using Autotools to create the Makefile and build the binary, which works well.
I would like to run test cases with that binary. Here is what I did:
SUBDIRS = src
dist_doc_DATA = README
TESTS=
TESTS+=tests/config1.conf
TESTS+=tests/config2.conf
TESTS+=tests/config3.conf
TESTS+=tests/config4.conf
TESTS+=tests/config5.conf
TESTS+=tests/config6.conf
TESTS+=tests/config7.conf
TESTS+=tests/config8.conf
TESTS+=tests/config9.conf
TESTS+=tests/config10.conf
TESTS+=tests/config11.conf
I would like to pass each of these files as an argument to the tool. When I run make check, I get:
make[3]: Entering directory '/home/airween/src/mytool'
FAIL: tests/config1.conf
FAIL: tests/config2.conf
FAIL: tests/config3.conf
which is correct, because those files are simple configuration files.
How can I arrange for make check to run my tool with the files above and finally print a summary with the number of successful, failed, etc. tests, like this:
============================================================================
Testsuite summary for mytool 0.1
============================================================================
# TOTAL: 11
# PASS: 0
# SKIP: 0
# XFAIL: 0
# FAIL: 11
# XPASS: 0
# ERROR: 0
Edit: so I would like to emulate these runs:
for f in tests/*.conf; do src/mytool "${f}"; done
but - of course - I want to see the summary at the end.
Thanks.
The Autotools' built-in test runner expects you to specify the names of executable tests via the make variable TESTS. You cannot just put random filenames in there and expect make or Automake to know what to do with them.
The tests can be built programs, generated scripts, static scripts distributed with the project, or any combination of the above.
How can I solve that make check runs my tool with the scripts above, and finally I get a [test summary report]?
You have acknowledged that your configuration files are not scripts, so stop calling them that! This is in fact the crux of the problem. The easiest solution is probably to create actual executable scripts, one for each case, and name those in your TESTS variable. Each one would run the binary under test with the appropriate configuration file (that is, you're responsible for making them do that if those are the tests you want to perform).
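For example, a wrapper for the first case might look like this (a sketch; the script name and relative paths are assumptions based on the layout above):
tests/config1.test:
==================
#!/bin/sh
# Run the binary under test against one specific configuration file.
# The script's exit status becomes the result Automake reports for this test.
exec ../src/mytool config1.conf
You would then list config1.test (and its siblings) in TESTS and mark each script executable.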
See also the Automake Manual's chapter on tests.
Okay, here is the solution I arrived at, using Automake's TEST_EXTENSIONS and LOG_COMPILER mechanism to run every .conf test through a wrapper script:
tests/Makefile.am:
==================
TEST_EXTENSIONS = .conf
CONF_LOG_COMPILER = ./test-suit.sh
TESTS=
TESTS+=config1.conf
TESTS+=config2.conf
TESTS+=config3.conf
TESTS+=config4.conf
TESTS+=config5.conf
TESTS+=config6.conf
TESTS+=config7.conf
TESTS+=config8.conf
TESTS+=config9.conf
TESTS+=config10.conf
TESTS+=config11.conf
tests/test-suit.sh:
==================
#!/bin/sh
# Run the tool with the configuration file the test harness passes in.
CONF=$1
exec ../src/mytool "$CONF"
And the result:
$ make check
...
PASS: config1.conf
PASS: config2.conf
PASS: config3.conf
PASS: config4.conf
PASS: config5.conf
PASS: config6.conf
PASS: config7.conf
PASS: config8.conf
PASS: config9.conf
PASS: config10.conf
PASS: config11.conf
============================================================================
Testsuite summary for mytool 0.1
============================================================================
# TOTAL: 11
# PASS: 11
# SKIP: 0
# XFAIL: 0
# FAIL: 0
# XPASS: 0
# ERROR: 0
============================================================================
This is what I expected.

Running unit test target on Xcode 9 returns "Early unexpected exit" error

I'm learning how to add unit tests to an Objective-C project using Xcode 9. I created a command-line project from scratch called Foo, then added a new target to the project called FooTests and edited Foo's scheme to include FooTests. However, whenever I run the tests (i.e., menu Product > Test), Xcode 9 throws the following error:
Test target FooTests encountered an error (Early unexpected exit, operation never finished bootstrapping - no restart will be attempted)
However, when I try to run the tests by calling xcodebuild from the command line, it seems that all unit tests are executed correctly. Here's the output:
a483e79a7057:foo ram$ xcodebuild test -project foo.xcodeproj -scheme foo
2020-05-15 17:39:30.496 xcodebuild[53179:948485] IDETestOperationsObserverDebug: Writing diagnostic log for test session to:
/var/folders/_z/q35r6n050jz5fw662ckc_kqxbywcq0/T/com.apple.dt.XCTest/IDETestRunSession-E7DD2270-C6C2-43ED-84A9-6EBFB9A4E853/FooTests-8FE46058-FC4A-47A2-8E97-8D229C5678E1/Session-FooTests-2020-05-15_173930-Mq0Z8N.log
2020-05-15 17:39:30.496 xcodebuild[53179:948484] [MT] IDETestOperationsObserverDebug: (324DB265-AD89-49B6-9216-22A6F75B2EDF) Beginning test session FooTests-324DB265-AD89-49B6-9216-22A6F75B2EDF at 2020-05-15 17:39:30.497 with Xcode 9F2000 on target <DVTLocalComputer: 0x7f90b2302ef0 (My Mac | x86_64h)> (10.14.6 (18G4032))
=== BUILD TARGET foo OF PROJECT foo WITH CONFIGURATION Debug ===
Check dependencies
=== BUILD TARGET FooTests OF PROJECT foo WITH CONFIGURATION Debug ===
Check dependencies
Test Suite 'All tests' started at 2020-05-15 17:39:30.845
Test Suite 'FooTests.xctest' started at 2020-05-15 17:39:30.846
Test Suite 'FooTests' started at 2020-05-15 17:39:30.846
Test Case '-[FooTests testExample]' started.
Test Case '-[FooTests testExample]' passed (0.082 seconds).
Test Case '-[FooTests testPerformanceExample]' started.
/Users/ram/development/objective-c/foo/FooTests/FooTests.m:36: Test Case '-[FooTests testPerformanceExample]' measured [Time, seconds] average: 0.000, relative standard deviation: 84.183%, values: [0.000006, 0.000002, 0.000001, 0.000002, 0.000001, 0.000001, 0.000001, 0.000001, 0.000001, 0.000001], performanceMetricID:com.apple.XCTPerformanceMetric_WallClockTime, baselineName: "", baselineAverage: , maxPercentRegression: 10.000%, maxPercentRelativeStandardDeviation: 10.000%, maxRegression: 0.100, maxStandardDeviation: 0.100
Test Case '-[FooTests testPerformanceExample]' passed (0.660 seconds).
Test Suite 'FooTests' passed at 2020-05-15 17:39:31.589.
Executed 2 tests, with 0 failures (0 unexpected) in 0.742 (0.743) seconds
Test Suite 'FooTests.xctest' passed at 2020-05-15 17:39:31.589.
Executed 2 tests, with 0 failures (0 unexpected) in 0.742 (0.744) seconds
Test Suite 'All tests' passed at 2020-05-15 17:39:31.590.
Executed 2 tests, with 0 failures (0 unexpected) in 0.742 (0.745) seconds
** TEST SUCCEEDED **
Does anyone know how to add unit tests to an Xcode 9 project for a command-line application? What's the right way of doing this, and what am I doing wrong?

ifort -coarray=shared produces incorrect exit status

If I compile and run
! main.f90
print *, 1/0
end program
with ifort then I get a division by zero error with an exit status of 2 (echo $?), as expected. However, if I compile using ifort -coarray=shared then I still get the error, but now the exit status is 0. The problem is that CTest is unable to catch the error. This is my CMakeLists.txt:
cmake_minimum_required(VERSION 2.8)
project(testing Fortran)
enable_testing()
add_executable(main EXCLUDE_FROM_ALL main.f90)
add_test(main main)
add_custom_target(check COMMAND ctest DEPENDS main)
If I run make check then the output is
100% tests passed, 0 tests failed out of 1
even though the test actually failed. If I remove -coarray=shared or use gfortran then I get the correct output
0% tests passed, 1 tests failed out of 1
How can I make CTest report that the test failed? Am I doing something wrong or is this a compiler bug?
I'm testing with ifort 14.0.1 with the example code
program test
implicit none
print *, 1/0
end program
I can't quite replicate your initial return value of 3. In my testing, the result of echo $? is 71, which corresponds to the message thrown by the runtime error (note: gfortran won't even compile the example):
forrtl: severe (71): integer divide by zero
I am, however, replicating your return code of 0 from the coarray version. What is happening here is a bit more complicated: ifort implements coarrays through MPI calls, and for -coarray=shared it basically wraps your program so you do not have to call mpirun or mpiexec, or run Intel's mpd, to handle MPI communications. It is clear that the coarray images (MPI ranks) are all returning with error code 3:
application called MPI_Abort(comm=0x84000000, 3) - process 4
and
exit status of rank 1: return code 3
is emitted by each MPI rank, but the executable itself always returns with exit code 0. Whether this is the intended behavior or a bug is not clear to me, but it seems likely that the wrapper code that launches the MPI processes doesn't look at the return codes from each rank. As pointed out in the comments, how would we expect different return values from different MPI ranks to be handled? It doesn't seem you are doing anything wrong.
Contrast this with a normal MPI example:
program mpitest
use mpi
implicit none
integer :: rank, msize, merror, mstatus(MPI_STATUS_SIZE)
call MPI_INIT(merror)
call MPI_COMM_SIZE(MPI_COMM_WORLD, msize, merror)
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, merror)
print *, 1/0
call MPI_FINALIZE(merror)
end program
compiled with
mpiifort -o testmpi testmpi.f90
and run as (with mpd running):
mpirun -np 4 ./testmpi
This produced a severe runtime error 71 for integer divide by zero for each rank as before, but the error code is propagated back to the shell:
$ echo $?
71
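Given that the coarray wrapper swallows the ranks' exit codes, one possible workaround is to have CTest scan the test output instead of trusting the exit status. This is a sketch using CTest's FAIL_REGULAR_EXPRESSION property, assuming the runtime error banner always reaches the test log:
add_test(main main)
# Fail the test whenever ifort's runtime error banner appears in the
# output, since the wrapped coarray program always exits with code 0.
set_tests_properties(main PROPERTIES
    FAIL_REGULAR_EXPRESSION "forrtl: severe")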

Gitlab-CI runner hangs after makefile test fails

I am using GitLab CI for my build tests. I have a very simple test which compares the output of the test install/build with known-good output. I put the test in a Makefile.
The Makefile entry looks like this:
test: clean
	make install DESTDIR=$(TEST_DIR)
	$(TEST_DIR)/path/to/executable > $(TEST_DIR)/tmp.out
	diff test/test.result $(TEST_DIR)/tmp.out
When the diff passes, it returns an exit code of 0; if it finds a difference between the files, it returns an exit code of 1.
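For reference, the CI job that drives this is little more than a make invocation; a minimal .gitlab-ci.yml along these lines (hypothetical, since the actual CI configuration isn't shown) would be:
# .gitlab-ci.yml -- hypothetical sketch of the job described above
test:
  script:
    - make test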
What I've tried:
Running make test from any shell runs the tests and exits, regardless of the diff result
Running make test from the shell as the gitlab_ci_runner user runs the tests and exits, regardless of the diff result
When run from GitLab CI with a diff exit status of 0, the build succeeds
The problem:
When run in GitLab CI and the diff exit status is non-zero, the build hangs.
The output on the build screen is the output of the diff, and the last line is the expected error: make: *** [test] Error 1
After that, the spinner keeps cycling; the runner does not exit with a failed build.
Any ideas? I thought it might be something with Makefiles, but GitLab CI exits with a failure status when make exits with Error 1 for any other test. I only see this happen with the output of the diff.
Thanks!
Also posted this to the GitLab mailing list https://groups.google.com/d/msgid/gitlabhq/77e82813-b98e-4abe-9755-f39e07043384%40googlegroups.com?utm_medium=email&utm_source=footer

How to pass quoted parameters to add_test in cmake?

I'm trying to pass parameters to a gtest test suite from CMake:
add_test(NAME craft_test
COMMAND craft --gtest_output='xml:report.xml')
The issue is that these parameters are being passed surrounded by quotes. Why? It looks like a bug; is there a good way to avoid it?
$ ctest -V
UpdateCTestConfiguration from :/usr/local/src/craft/build-analyze/DartConfiguration.tcl
UpdateCTestConfiguration from :/usr/local/src/craft/build-analyze/DartConfiguration.tcl
Test project /usr/local/src/craft/build-analyze
Constructing a list of tests
Done constructing a list of tests
Checking test dependency graph...
Checking test dependency graph end
test 1
Start 1: craft_test
1: Test command: /usr/local/src/craft/build-analyze/craft "--gtest_output='xml:report.xml'"
1: Test timeout computed to be: 9.99988e+06
1: WARNING: unrecognized output format "'xml" ignored.
1: [==========] Running 1 test from 1 test case.
1: [----------] Global test environment set-up.
1: [----------] 1 test from best_answer_test
1: [ RUN ] best_answer_test.test_sample
1: [ OK ] best_answer_test.test_sample (0 ms)
1: [----------] 1 test from best_answer_test (0 ms total)
1:
1: [----------] Global test environment tear-down
1: [==========] 1 test from 1 test case ran. (0 ms total)
1: [ PASSED ] 1 test.
1/1 Test #1: craft_test ....................... Passed 0.00 sec
100% tests passed, 0 tests failed out of 1
Total Test time (real) = 0.00 sec
It's not the quotes that CMake adds that are the problem here; it's the single quotes in 'xml:report.xml' that are at fault.
You should do:
add_test(NAME craft_test
COMMAND craft --gtest_output=xml:report.xml)
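If an argument genuinely needs to contain spaces, use CMake's double quotes instead: CMake treats the double-quoted text as a single argument and strips the quotes before invoking the command. For example (the report filename here is hypothetical):
add_test(NAME craft_test
         COMMAND craft "--gtest_output=xml:report with spaces.xml")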