Why does my Bazel test report failure when all individual tests are passing? - googletest

When running my unit test (gtest) through Bazel, I'm seeing a failure reported. However, the logs indicate that my test is running successfully and passing.
Other tests in my project are passing and the only difference between the deviant test and the others is that the deviant test is multithreaded.
I've run the test binary found in bazel-bin by itself and it passes and returns successfully.
Bazel version:
Build label: 0.26.0
Build target: bazel-out/k8-opt/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Tue May 28 08:35:14 2019 (1559032514)
Build timestamp: 1559032514
Build timestamp as int: 1559032514
The relevant block in my BUILD file:
cc_test(
    name = "DBControllerIntegration",
    srcs = ["dbcontroller_integration_test.cc"],
    deps = [
        "//src:db_ctl_lib",
        "//test/mocks:sstable_mock_lib",
        "@boost//:filesystem",
        "@com_google_protobuf//:protobuf",
        "@glog//:glog",
        "@googletest//:gtest_main",
    ],
    copts = ["-std=c++17"],
)
Bazel test failure output:
>> bazel test //test:DBControllerIntegration --test_output=errors
INFO: Invocation ID: ccca8fa7-27a5-4c8c-badf-3f342934e4e5
INFO: Analysed target //test:DBControllerIntegration (0 packages loaded, 0 targets configured).
INFO: Found 1 test target...
FAIL: //test:DBControllerIntegration (see /home/tallen/.cache/bazel/_bazel_tallen/f087948e065d612174d90a43a5740198/execroot/diodb/bazel-out/k8-dbg/testlogs/test/DBControllerIntegration/test.log)
INFO: From Testing //test:DBControllerIntegration:
==================== Test output for //test:DBControllerIntegration:
Running main() from gmock_main.cc
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from DBControllerIntegrationTest
[ RUN ] DBControllerIntegrationTest.Basic
... <omitting my application's logs> ...
[ OK ] DBControllerIntegrationTest.Basic (4000 ms)
[----------] 1 test from DBControllerIntegrationTest (4000 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (4001 ms total)
[ PASSED ] 1 test.
================================================================================
Target //test:DBControllerIntegration up-to-date:
bazel-bin/test/DBControllerIntegration
INFO: Elapsed time: 4.706s, Critical Path: 4.57s
INFO: 1 process: 1 processwrapper-sandbox.
INFO: Build completed, 1 test FAILED, 2 total actions
//test:DBControllerIntegration FAILED in 4.6s
/home/tallen/.cache/bazel/_bazel_tallen/f087948e065d612174d90a43a5740198/execroot/diodb/bazel-out/k8-dbg/testlogs/test/DBControllerIntegration/test.log
INFO: Build completed, 1 test FAILED, 2 total actions
Running the test binary by itself:
>> ./bazel-bin/test/DBControllerIntegration
Running main() from gmock_main.cc
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from DBControllerIntegrationTest
[ RUN ] DBControllerIntegrationTest.Basic
... <omitting my application's logs> ...
[ OK ] DBControllerIntegrationTest.Basic (4001 ms)
[----------] 1 test from DBControllerIntegrationTest (4001 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (4001 ms total)
[ PASSED ] 1 test.
I would expect the Bazel test result to be reported as passing since the actual test passes, but instead the target is reported as failed.

This was resolved by gracefully terminating all threads spawned by my project before the test exits. The leftover threads were most likely preventing the process from shutting down cleanly after gtest printed its summary, which Bazel counts as a failure even though every individual test passed.
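As a rough sketch of what "gracefully terminating" can look like in practice (Worker, Start, and Stop are illustrative names, not from the original project), the idea is that every background thread is signalled to stop and joined before the test returns:

#include <atomic>
#include <thread>

// Hypothetical stand-in for a component of the code under test that owns a
// background thread.
class Worker {
 public:
  void Start() {
    running_ = true;
    thread_ = std::thread([this] {
      while (running_.load()) {
        // ... do one unit of background work ...
      }
    });
  }

  // Graceful termination: ask the loop to stop, then join the thread so it
  // cannot outlive the test process.
  void Stop() {
    running_ = false;
    if (thread_.joinable()) {
      thread_.join();
    }
  }

  ~Worker() { Stop(); }

 private:
  std::atomic<bool> running_{false};
  std::thread thread_;
};

Calling Stop() on every such worker in the test body (or in the fixture's TearDown) means no thread is still running when main() returns, so the test process exits with a clean status.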

Related

junit 5 console launcher throwing an error "Caused by: java.lang.ClassNotFoundException"

I'm using the compiled library of jars of a Java project and trying to run a JUnit 5 test via the console launcher. The same test works when I right-click and run it as a JUnit test, but via the console launcher it throws the error below.
This is what I'm using to compile the program, and it compiles without errors:
javac -encoding UTF8 -cp /abc/Junit/lib/*:/var/Unit/lib/* /abc/Unit/ /Junit/test/BuildTest.java
and this is how I run the tests
java -jar /abc/Junit/lib/junit-platform-console-standalone-1.6.0.jar --classpath /abc/Junit/lib/*:/abc/Unit/lib/*:/abc/Junit/test --include-classname ".*" --scan-classpath
This is the error I'm getting:
Thanks for using JUnit! Support its development at https://junit.org/sponsoring
.
+-- JUnit Jupiter [OK]
| '-- buildTests [X] com.build.Info
'-- JUnit Vintage [OK]
Failures (1):
JUnit Jupiter:buildTests
ClassSource [className = 'com.buildTests', filePosition = null]
=> java.lang.NoClassDefFoundError: com.build.VersionInfo
java.lang.Class.getDeclaredFields(Class.java:868)
org.junit.platform.commons.util.ReflectionUtils.getDeclaredFields(ReflectionUtils.java:1334)
org.junit.platform.commons.util.ReflectionUtils.findAllFieldsInHierarchy(ReflectionUtils.java:1092)
org.junit.platform.commons.util.ReflectionUtils.findFields(ReflectionUtils.java:1080)
org.junit.platform.commons.util.AnnotationUtils.findAnnotatedFields(AnnotationUtils.java:371)
[...]
Caused by: java.lang.ClassNotFoundException: com.build.VersionInfo
java.net.URLClassLoader.findClass(URLClassLoader.java:610)
java.lang.ClassLoader.loadClassHelper(ClassLoader.java:935)
java.lang.ClassLoader.loadClass(ClassLoader.java:880)
java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:1225)
java.lang.ClassLoader.loadClass(ClassLoader.java:863)
[...]
Test run finished after 73 ms
[ 3 containers found ]
[ 0 containers skipped ]
[ 3 containers started ]
[ 0 containers aborted ]
[ 2 containers successful ]
[ 1 containers failed ]
[ 1 tests found ]
[ 0 tests skipped ]
[ 0 tests started ]
[ 0 tests aborted ]
[ 0 tests successful ]
[ 0 tests failed ]
Finally, I was able to figure out why the error was thrown. The issue was using a wildcard instead of the actual jar when running the test. It has to be blah/lib/core.jar, not blah/lib/*. Hope this helps someone in the future.
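For illustration only (core.jar follows the pattern above; dependency.jar is a placeholder for each jar the wildcard used to match, not the asker's actual file), the working invocation lists the jars explicitly:
java -jar /abc/Junit/lib/junit-platform-console-standalone-1.6.0.jar --classpath /abc/Junit/lib/core.jar:/abc/Unit/lib/dependency.jar:/abc/Junit/test --include-classname ".*" --scan-classpath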

CTest: How to skip other tests if the "precheck" test fails?

I'm adding tests to my project, and all of them have some prerequisites, so I added a precheck test on which all the other tests depend. If the precheck fails, I'd like the other tests to stop immediately.
add_test(
    NAME precheck
    COMMAND false
)
add_test(
    NAME test-1
    COMMAND true
)
add_test(
    NAME test-2
    COMMAND true
)
set_tests_properties(
    test-1 test-2
    PROPERTIES
        DEPENDS precheck
)
But it seems that the DEPENDS property only affects the order of the tests:
$ make test
Running tests...
Test project /root/ibvq/frkcrpg/b
Start 1: precheck
1/3 Test #1: precheck .........................***Failed 0.00 sec
Start 2: test-1
2/3 Test #2: test-1 ........................... Passed 0.00 sec
Start 3: test-2
3/3 Test #3: test-2 ........................... Passed 0.00 sec
67% tests passed, 1 tests failed out of 3
Total Test time (real) = 0.02 sec
The following tests FAILED:
1 - precheck (Failed)
Errors while running CTest
Makefile:83: recipe for target 'test' failed
make: *** [test] Error 8
So how can I make a failing precheck stop the other tests?
If you are using CMake 3.7 or later, you can use the test fixture properties, FIXTURES_SETUP and FIXTURES_REQUIRED; see the sketch below.
For earlier versions of CMake, have your precheck test create a dummy file on success and make the other tests depend on that file via the REQUIRED_FILES property.
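A minimal sketch of the fixture approach, reusing the test names from the question (the fixture name db_precheck is my own):

set_tests_properties(precheck PROPERTIES FIXTURES_SETUP db_precheck)
set_tests_properties(test-1 test-2 PROPERTIES FIXTURES_REQUIRED db_precheck)

With this in place CTest still runs precheck first, but if it fails, test-1 and test-2 are reported as not run instead of being executed.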

How to force cmake to write test output after make test

I'm building a Fortran project with CMake and I can't find a way to print the FRUIT test results to the console; they look something like this:
Test module initialized
. : successful assert, F : failed assert
7.00000000000000 -3.60000000000000 7.00000000000000
FFF
Start of FRUIT summary:
Some tests failed!
-- Failed assertion messages:
[_not_set_]:Expected [7.00000000000000], Got [1.00000000000000]
[_not_set_]:Expected [-3.60000000000000], Got [2.00000000000000]
[_not_set_]:Expected [7.00000000000000], Got [6.00000000000000]
-- end of failed assertion messages.
Total asserts : 3
Successful : 0
Failed : 3
Successful rate: 0.00%
Successful asserts / total asserts : [ 0 / 3 ]
Successful cases / total cases : [ 0 / 0 ]
-- end of FRUIT summary
The output I'm getting with make test looks like:
make test
Running tests...
Test project /home/konrad/Desktop/fortran
Start 1: unit_tests
1/1 Test #1: unit_tests ....................... Passed 0.01 sec
100% tests passed, 0 tests failed out of 1
Total Test time (real) = 0.01 sec
And since a passing CMake test doesn't mean the FRUIT asserts passed, I want to print the FRUIT output file every time I run the tests (just for the sake of making it work). I've tried adding printing commands at the end of the test command (like the less in the snippet below), adding
-P ${CMAKE_TEST_DIR}/unit_tests.txt
at the end of add_test, and building custom post-build commands (which I can't get to run after make test; if you know how to do that, it would also solve my problem, since make test, or test, doesn't seem to be a real target).
The last part of my CMake file, with all the testing code:
add_executable(task ${TASK_SOURCES})
add_executable(tests ${TEST_SOURCES})
enable_testing()
set(run_command "${CMAKE_BINARY_DIR}/tests")
set(UNIT_TEST_NAME "unit_tests.txt")
file(MAKE_DIRECTORY ${CMAKE_TEST_DIR})
add_test( NAME unit_tests
COMMAND sh -c
"rm -f ${CMAKE_TEST_DIR}/${UNIT_TEST_NAME} \
&& ${run_command} \
>> ${CMAKE_TEST_DIR}/${UNIT_TEST_NAME} \
&& less ${CMAKE_TEST_DIR}/${UNIT_TEST_NAME}"
)
I have solved the lack of test output with a custom CMake target that invokes ctest in verbose mode, e.g.:
enable_testing()
add_custom_target(check COMMAND ${CMAKE_CTEST_COMMAND}
--force-new-ctest-process
--verbose
--output-on-failure
)
The output of cmake tests is captured to a file (Testing/Temporary/LastTest.log in my current project).
cmake tests rely on the return code of the test program, with one test program per test; that is also why unit_tests shows as Passed above even though the FRUIT asserts failed, presumably because the test command exits with status 0.
If you wish to run a program that is a "driver" for your tests, I recommend using add_custom_target. This command will add a target that runs a command of your choice.
For instance:
add_custom_target(unit_tests tests)
add_dependencies(unit_tests tests)
I am not sure whether the add_dependencies line is needed in this case though (as tests is a target managed by cmake).
Then, you can run
make unit_tests
and it will run your test driver.

gulp-npm-test running Jest tests (all passing), but claiming to fail

I've installed gulp-npm-test following their documentation, that is, in my gulp directory I've got a file test.js that looks like this:
var gulp = require('gulp')
require('gulp-npm-test')(gulp)
var gulp = require('gulp-npm-test')(gulp, {
withoutNpmRun: false
})
But when I run gulp test I get output like the following:
[17:12:41] Using gulpfile ~/my-project/gulpfile.js
[17:12:41] Starting 'test'...
> my-project#0.0.1 test /Users/wogsland/my-project
> jest
PASS frontend/tests/components/atoms/InputText.test.js
.
.
(many more Jest test passes, no fails)
.
.
FAIL gulp/tasks/test.js
● Test suite failed to run
Your test suite must contain at least one test.
at onResult (node_modules/jest/node_modules/jest-cli/build/TestRunner.js:192:18)
Test Suites: 1 failed, 135 passed, 136 total
Tests: 135 passed, 135 total
Snapshots: 209 passed, 209 total
Time: 92.237s
Ran all test suites.
npm ERR! Test failed. See above for more details.
What am I missing here? I've got a bunch of tests, and they all pass.
It looks like the matcher that figures out which files are tests matches everything whose filename includes test.js. The default is [ '**/__tests__/**/*.js?(x)', '**/?(*.)(spec|test).js?(x)' ]. So either adapt that pattern (it's testMatch in your Jest settings), or use testPathIgnorePatterns to exclude the /gulp folder, as sketched below.
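For example, the exclusion could look like this in package.json (a sketch showing only the jest block; the rest of your configuration may differ):

"jest": {
  "testPathIgnorePatterns": [
    "/node_modules/",
    "/gulp/"
  ]
}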

Calling Unittest++ from cmake created makefile

I recently started learning cmake and have run into a small issue. I got both my executable and the unit tests to compile from the generated makefile without issue. If I run ./test in the build directory, the tests created with UnitTest++ run and complete as expected, printing the results. Is there any way to get make test to simply run the test executable rather than running it inside the ctest framework, or should I go about this a different way?
Here is a minimal working example of my code:
src/main/main.c is a simple empty main function
src/test/testMain.cpp:
#include <UnitTest++/UnitTest++.h>

TEST(FailSpect)
{
    CHECK(false);
}

int main()
{
    UnitTest::RunAllTests();
}
CMakeLists.txt:
cmake_minimum_required( VERSION 2.6 )
project( myProject)
enable_testing()
set( myProjectMain
src/main/main.c
)
set( myProjectSrc
)
set( myProjectTestSrc
src/test/testMain.cpp
)
add_executable( myExecutable ${myProjectMain} ${myProjectSrc} )
add_executable( testSuite ${myProjectTestSrc} ${myProjectSrc} )
target_link_libraries( testSuite UnitTest++ )
add_test( testExe testSuite )
make test output:
Running tests...
Start processing tests
Test project /myProjectDir/build
1/ 1 Testing testExe Passed
100% tests passed, 0 tests failed out of 1
./testSuite output:
/myProjectDir/src/test/testMain.cpp:5: error: Failure in FailSpect: false
FAILURE: 1 out of 1 tests failed (1 failures).
Test time: 0.00 seconds.
I have sorted out how to do this. First remove the lines:
enable_testing()
and
add_test(testExe testSuite)
and replace them with the line:
add_custom_target(test ./testExe
DEPENDS ./testExe)
at the end of the CMakeLists.txt file. Now make (all) builds both the tests and the main program. If everything is built already, then make test will just check that the tests are built and run them, producing:
[100%] Built target testExe
/myProjectDir/src/test/testMain.cpp:5: error: Failure in FailSpect: false
FAILURE: 1 out of 1 tests failed (1 failures).
Test time: 0.00 seconds.
[100%] Built target test
If the tests are out of date (after a make clean for instance), then make test will produce:
[100%] Building CXX object CMakeFiles/testExe.dir/src/test/testMain.cpp.o
Linking CXX executable testExe
[100%] Built target testExe
/myProjectDir/src/test/testMain.cpp:5: error: Failure in FailSpect: false
FAILURE: 1 out of 1 tests failed (1 failures).
Test time: 0.00 seconds.
[100%] Built target test