Run additional tests by passing a feature flag to "cargo test"

I have some tests that I would like to ignore when using cargo test and only run when explicitly passed a feature flag. I know this can be done by using #[ignore] and cargo test -- --ignored, but I'd like to have multiple sets of ignored tests for other reasons.
I have tried this:
#[test]
#[cfg_attr(not(feature = "online_tests"), ignore)]
fn get_github_sample() {}
This is ignored when I run cargo test as desired, but I can't get it to run.
I have tried multiple ways of running Cargo but the tests continue to be ignored:
cargo test --features "online_tests"
cargo test --all-features
I then added the feature definition into my Cargo.toml as per this page, but they continue to be ignored.
I am using workspaces in Cargo. I tried adding the feature definition in both Cargo.toml files with no difference.

Without a workspace
Cargo.toml
[package]
name = "feature-tests"
version = "0.1.0"
authors = ["An Devloper <an.devloper@example.com>"]
[features]
network = []
filesystem = []
[dependencies]
src/lib.rs
#[test]
#[cfg_attr(not(feature = "network"), ignore)]
fn network() {
panic!("Touched the network");
}
#[test]
#[cfg_attr(not(feature = "filesystem"), ignore)]
fn filesystem() {
panic!("Touched the filesystem");
}
Output
$ cargo test
running 2 tests
test filesystem ... ignored
test network ... ignored
$ cargo test --features network
running 2 tests
test filesystem ... ignored
test network ... FAILED
$ cargo test --features filesystem
running 2 tests
test network ... ignored
test filesystem ... FAILED
(some output removed to better show effects)
With a workspace
Layout
.
├── Cargo.toml
├── feature-tests
│   ├── Cargo.toml
│   ├── src
│   │   └── lib.rs
├── src
│   └── lib.rs
feature-tests contains the files from the first section above.
Cargo.toml
[package]
name = "workspace"
version = "0.1.0"
authors = ["An Devloper <an.devloper@example.com>"]
[features]
filesystem = ["feature-tests/filesystem"]
network = ["feature-tests/network"]
[workspace]
[dependencies]
feature-tests = { path = "feature-tests" }
Output
$ cargo test --all
running 2 tests
test filesystem ... ignored
test network ... ignored
$ cargo test --all --features=network
running 2 tests
test filesystem ... ignored
test network ... FAILED
(some output removed to better show effects)
With a virtual workspace
Virtual workspaces do not support specifying features (Cargo issue #4942). You will need to run the tests from within the subproject, or specify the path to the appropriate Cargo.toml.
Layout
.
├── Cargo.toml
└── feature-tests
├── Cargo.toml
└── src
└── lib.rs
feature-tests contains the files from the first section above.
Cargo.toml
[workspace]
members = ["feature-tests"]
Output
$ cargo test --all --manifest-path feature-tests/Cargo.toml --features=network
running 2 tests
test filesystem ... ignored
test network ... FAILED
$ cargo test --all --manifest-path feature-tests/Cargo.toml
running 2 tests
test filesystem ... ignored
test network ... ignored
(some output removed to better show effects)
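Feature flags can also be combined in a single invocation, and in a non-virtual workspace you can select the member package with -p instead of --manifest-path. A sketch, assuming the same feature-tests package and network/filesystem features as above:

```shell
# Enable several features at once (space- or comma-separated list)
cargo test --features "network filesystem"

# In a (non-virtual) workspace, -p selects the member package to test
cargo test -p feature-tests --features network
```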

Related

cc_import for debug and release versions?

My toolset:
Windows 10 x64 (1909)
Bazel 3.1.0
Visual Studio 2019 (16.6)
Powershell
I need to use a prebuilt third-party C++ DLL. The third-party lib looks like this:
<directory> third-party-lib
├── <directory> bin
| ├── <file> third_party_lib.dll
| └── <file> third_party_libd.dll
├── <directory> lib
| ├── <file> third_party_lib.lib
| └── <file> third_party_libd.lib
└── <directory> includes
└── <file> third_party_lib.h
So there are two versions: a release version and a debug version. Filenames ending in "d" indicate the debug version.
To consume this library I am using a cc_import target:
cc_import(
name = "third-party-lib",
interface_library = "lib/third_party_lib.lib",
shared_library = "bin/third_party_lib.dll",
)
My build target depends on the third-party-lib. Building in release (opt) mode works without any problems:
bazel build //:MyBuildTarget
But if I try to do a debug build I run into linker problems:
bazel build --compilation_mode=dbg //:MyBuildTarget
Is there any possibility to specify debug and release DLLs in a cc_import rule? Or is there any other rule I can use for this purpose?
You can use select() to switch between library variants:
cc_import(
name = "third-party-lib",
interface_library = "lib/third_party_lib.lib",
shared_library = select({
":debug_build": "bin/third_party_libd.dll",
"//conditions:default": "bin/third_party_lib.dll",
}),
)
config_setting(
name = "debug_build",
values = {
"compilation_mode": "dbg",
},
)
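With that config_setting in place, the DLL variant follows the compilation mode automatically; the same build commands from the question now pick the matching file (a sketch, assuming the target names above):

```shell
# Release (opt is the default): select() resolves to bin/third_party_lib.dll
bazel build //:MyBuildTarget

# Debug: select() resolves to bin/third_party_libd.dll
bazel build --compilation_mode=dbg //:MyBuildTarget
```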

Correct way to configure interdependent projects (e.g. tensorflow) in bazel build system so proto imports work as is?

As the title suggests, I'm running into an issue where proto import statements do not seem to be relative to the correct path. For concreteness, consider the directory structure in a dir (let's call it ~/base):
>> tree -L 1
├── models
├── my-lib
│   ├── nlp
│   │   ├── BUILD
│   │   └── nlp_parser.cc
│   └── WORKSPACE
├── serving
└── tensorflow
For those not familiar, models (as in https://github.com/tensorflow/models/) has tensorflow (https://github.com/tensorflow/tensorflow) as a git submodule, as does serving. Because of this, coupled with the fact that the tensorflow submodules were on different commits and sometimes incompatible, I have removed the git submodules from the projects and symlinked them to the tensorflow repo in the top-most directory, so that I only have to manage one tensorflow repo instead of three. That is, I have done the following:
cd models/syntaxnet; rm -rf tensorflow; ln -s ../../tensorflow/ .; cd -
cd serving; rm -rf tensorflow tf_models; ln -s ../tensorflow/ .; ln -s ../models .
Now I want to build a target within my-lib that depends on serving, tensorflow, and models. I added these as local repositories in my WORKSPACE as follows (cat my-lib/WORKSPACE):
workspace(name = "myworkspace")
local_repository(
name = "org_tensorflow",
path = __workspace_dir__ + "/../tensorflow",
)
local_repository(
name = "syntaxnet",
path = __workspace_dir__ + "/../models/syntaxnet",
)
local_repository(
name = "tf_serving",
path = __workspace_dir__ + "/../serving",
)
load('@org_tensorflow//tensorflow:workspace.bzl', 'tf_workspace')
tf_workspace("~/base/tensorflow", "@org_tensorflow")
# ===== gRPC dependencies =====
bind(
name = "libssl",
actual = "@boringssl_git//:ssl",
)
bind(
name = "zlib",
actual = "@zlib_archive//:zlib",
)
Here is my BUILD file (cat my-lib/nlp/BUILD):
load("@tf_serving//tensorflow_serving:serving.bzl", "serving_proto_library")
cc_binary(
name = "nlp_parser",
srcs = [ "nlp_parser.cc" ],
linkopts = ["-lm"],
deps = [
"@org_tensorflow//tensorflow/core:core_cpu",
"@org_tensorflow//tensorflow/core:framework",
"@org_tensorflow//tensorflow/core:lib",
"@org_tensorflow//tensorflow/core:protos_all_cc",
"@org_tensorflow//tensorflow/core:tensorflow",
"@syntaxnet//syntaxnet:parser_ops_cc",
"@syntaxnet//syntaxnet:sentence_proto",
"@tf_serving//tensorflow_serving/servables/tensorflow:session_bundle_config_proto",
"@tf_serving//tensorflow_serving/servables/tensorflow:session_bundle_factory",
"@org_tensorflow//tensorflow/contrib/session_bundle",
"@org_tensorflow//tensorflow/contrib/session_bundle:signature",
],
)
Lastly, here is the output of the build (cd my-lib; bazel build nlp/nlp_parser --verbose_failures):
INFO: Found 1 target...
ERROR: /home/blah/blah/external/org_tensorflow/tensorflow/core/debug/BUILD:33:1: null failed: linux-sandbox failed: error executing command
(cd /home/blah/blah/execroot/my-lib && \
exec env - \
/home/blah/blah/execroot/my-lib/_bin/linux-sandbox @/home/blah/blah/execroot/my-lib/bazel-sandbox/c65fa6b6-9b7d-4710-b19c-4d42a3e6a667-31.params -- bazel-out/host/bin/external/protobuf/protoc '--cpp_out=bazel-out/local-fastbuild/genfiles/external/org_tensorflow' '--plugin=protoc-gen-grpc=bazel-out/host/bin/external/grpc/grpc_cpp_plugin' '--grpc_out=bazel-out/local-fastbuild/genfiles/external/org_tensorflow' -Iexternal/org_tensorflow -Ibazel-out/local-fastbuild/genfiles/external/org_tensorflow -Iexternal/protobuf/src -Ibazel-out/local-fastbuild/genfiles/external/protobuf/src external/org_tensorflow/tensorflow/core/debug/debug_service.proto).
bazel-out/local-fastbuild/genfiles/external/protobuf/src: warning: directory does not exist.
tensorflow/core/util/event.proto: File not found.
tensorflow/core/debug/debug_service.proto: Import "tensorflow/core/util/event.proto" was not found or had errors.
tensorflow/core/debug/debug_service.proto:38:25: "Event" is not defined.
Target //nlp:nlp_parser failed to build
INFO: Elapsed time: 0.776s, Critical Path: 0.42s
What is the correct way to add the modules as local_repository in WORKSPACE so that the proto imports work?
I was having a similar problem after trying to build a project of mine that depends on tensorflow on Ubuntu, after getting it building on OS X. What ended up working for me was disabling sandboxing with --spawn_strategy=standalone.
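For reference, a sketch of that workaround as a command line, using the build invocation from the question:

```shell
cd my-lib
bazel build --spawn_strategy=standalone --verbose_failures nlp/nlp_parser
```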

How to add a CMake custom target to test all sub-projects?

I have a repository with several projects inside. The resulting directory structure is like this:
Repository
├── CMakeLists.txt
├── Project A
│   └── CMakeLists.txt
└── Project B
    └── CMakeLists.txt
In projects A and B I have tests that I add using add_test. When inside a project I can do make test.
How can I add a target "test" to the top CMakeLists.txt so that running make test from the top directory executes the tests of all projects?
I have just tried to reproduce your setup with the following files:
/tmp/test $ tree
.
├── a
│   └── CMakeLists.txt
├── b
│   └── CMakeLists.txt
└── CMakeLists.txt
/tmp/test $ cat CMakeLists.txt
cmake_minimum_required(VERSION 3.0)
project(Foo)
enable_testing()
add_subdirectory(a)
add_subdirectory(b)
/tmp/test $ cat a/CMakeLists.txt
project(A)
add_test(NAME atest COMMAND echo "hello from a")
/tmp/test $ cat b/CMakeLists.txt
project(B)
add_test(NAME btest COMMAND echo "hello from b")
/tmp/test $ mkdir build && cd build
/tmp/test/build $ cmake .. && make test
# Remove some output
Running tests...
Test project /tmp/test/build
Start 1: atest
1/2 Test #1: atest ............................ Passed 0.00 sec
Start 2: btest
2/2 Test #2: btest ............................ Passed 0.00 sec
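The generated `test` target is a thin wrapper around ctest, so from the build directory you can also invoke it directly; `--output-on-failure` is useful when a test breaks:

```shell
cd build
ctest                      # equivalent to `make test`
ctest --output-on-failure  # additionally print the full output of failing tests
```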

GTest's output has no colors when built with cmake+ninja and executed automatically

I'm trying to configure CMake and ninja as a build system for my project. Except the app itself I have an extra executable for unit tests powered by gtest. I thought it would be nice to have them executed automatically whenever they are built. Here's how I made it:
├── build
└── source
├── CMakeLists.txt
├── main.cc
└── ut
├── CMakeLists.txt
├── gtest
│   ├── ...
└── ut.cc
source/CMakeLists.txt...
cmake_minimum_required (VERSION 2.6)
project (trial)
add_subdirectory(ut)
add_executable(trial main.cc)
...and source/ut/CMakeLists.txt:
add_subdirectory(gtest)
include_directories ("gtest/include")
add_executable(ut ut.cc)
target_link_libraries(ut LINK_PUBLIC gtest_main)
add_custom_target(run_uts
COMMAND ut
DEPENDS ut
WORKING_DIRECTORY ${CMAKE_PROJECT_DIR}
)
Now when I build it, i.e.:
cd build
cmake -GNinja ../source
ninja run_uts
It works fine except that the output is colorless. When I run the ut binary by hand, i.e. build/ut/ut, I get nice green and red colors. The colors are also there when I use Unix Makefiles as a generator for CMake.
Since I'm only learning CMake, is there something I missed or is it an issue with Ninja?
I assume your automated code runs a gtest executable and directs the output to a file. By default, gtest adds color sequences only when sending output to a terminal. In order to force it to add color sequences to output sent to a file or a pipe, run your test executable with the --gtest_color=yes option.
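Applied to the setup above, that means either running the binary with the flag by hand, or baking it into the custom target's COMMAND (e.g. `COMMAND ut --gtest_color=yes`). A sketch:

```shell
# Force colored output even when stdout is not a terminal (e.g. piped by ninja)
./ut --gtest_color=yes
```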

No tests found when using gtest with cmake/ctest

I have a project with the following structure:
linalg
├── build
├── CMakeLists.txt
├── docs
│   └── Doxyfile
├── include
│   └── linalg
│       └── vector3.hpp
├── src
│   ├── CMakeLists.txt
│   └── linalg
│       └── vector3.cpp
└── test
    ├── CMakeLists.txt
    └── linalg
        └── test_vector3.cpp
The file test_vector3.cpp is a gtest unit test file which provides two simple tests. The top level CMakeLists.txt simply sets up the includes and adds the src and test subdirectories:
cmake_minimum_required(VERSION 2.8)
project(linalg)
include_directories(include)
add_subdirectory(src)
add_subdirectory(test)
The src/CMakeLists.txt file compiles vector3.cpp into a static library:
cmake_minimum_required(VERSION 2.8)
add_library(linalg linalg/vector3.cpp)
The test/CMakeLists.txt file is based on the example provided in /usr/share/cmake-2.8/Modules/FindGTest.cmake:
cmake_minimum_required(VERSION 2.8)
enable_testing()
find_package(GTest REQUIRED)
include_directories(${GTEST_INCLUDE_DIRS})
add_executable(test_vector3 linalg/test_vector3.cpp)
target_link_libraries(test_vector3 linalg ${GTEST_BOTH_LIBRARIES} pthread)
add_test(test_vector3 test_vector3)
I then run the following:
cd build
cmake ..
make
I get the liblinalg.a library compiled correctly into build/src and the test_vector3 executable compiled correctly into build/test. I can run the test_vector3 executable and get the output from googletest saying that all tests have passed. However, if I run make test I get no output whatsoever, and if I run ctest .. I get a message saying:
Test project /home/ryan/GitHub/linalg/build
No tests were found!!!
Is there something I am missing? Or have I just misunderstood how ctest works with gtest?
The crux of the problem is that enable_testing should be called from your top-level CMakeLists.txt in this case. Adding include(CTest) to your top-level CMakeLists.txt should fix this for you.
This also allows you to remove the enable_testing call in test/CMakeLists.txt, since the CTest module calls enable_testing internally.
Just to update this: CMake 3.9 added support for GoogleTest integration with CTest.
So you can now have CTest discover the individual test macros in your test executable, rather than registering the whole executable as a single test.
Example here:
https://gist.github.com/johnb003/65982fdc7a1274fdb023b0c68664ebe4
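Once tests are registered with CTest (whether via add_test or via the GoogleTest module's discovery), you can check what CTest actually sees without running anything:

```shell
cd build
ctest -N                  # list registered tests without running them
ctest -R test_vector3 -V  # run only tests matching a regex, verbosely
```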