I have written a Python test suite for my project. I have added this variable to Makefile.am:
TESTS = ./launcher.sh
launcher.sh contains: tests/testsuite.py
When I do ./launcher.sh, my testsuite is correctly executed.
However, when I do make check, I get the following output:
PASS: launcher.sh
============================================================================
Testsuite summary for spider 1.0
============================================================================
# TOTAL: 1
# PASS: 1
# SKIP: 0
# XFAIL: 0
# FAIL: 0
# XPASS: 0
# ERROR: 0
============================================================================
How can I hide the default output and use the output of my test suite instead?
The Automake manual contains a whole chapter on testing, which would be helpful for understanding the context of Automake's test suite support. Moreover, it is important to understand that part of the bargain you enter into by using Automake to generate makefiles for you is to accept some limitations on the form and behavior of the resulting build system.
How can I hide the default output and use the output of my test suite instead?
To the best of my knowledge, you cannot hide the default output of make check, but you can cause the output of your test program to be emitted to make's standard output instead of redirected to a file. The easiest way to do this would be to enable the serial test harness by turning on Automake's serial-tests option. That would ordinarily be expressed via the argument to the AM_INIT_AUTOMAKE macro in your configure.ac:
AM_INIT_AUTOMAKE([serial-tests])
Note also that it should not be necessary to wrap your tests/testsuite.py in a shell script. Just make sure it is executable (which it sounds like you have already done), and name it directly, relative path included, in the value of the TESTS variable.
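Putting both pieces together, a minimal sketch (file names taken from your question) would be:
# configure.ac: switch to the serial test harness
AM_INIT_AUTOMAKE([serial-tests])
# Makefile.am: name the Python script directly, no launcher.sh wrapper needed
TESTS = tests/testsuite.py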
When I execute the googletest executable, is it possible to pass in an argument on command line that says execute only the Nth test case?
I'd even be ok with passing in the name of the fixture+case; I can just use Python with regex to pull out all the tests.
Obviously I can accomplish the same thing by having one test case per .cpp file but that sounds ... stupid.
googletest allows you to run a single test case, or even a subset of the tests. You can specify a filter string in the GTEST_FILTER environment variable or in the --gtest_filter command line option, and googletest will only run the tests whose full names (in the form TestSuiteName.TestName) match the filter. More information about the format of the filter string can be found in the Running a Subset of the Tests section of the documentation. googletest also supports the --gtest_list_tests command line option to print the list of all tests, which can be useful in your case.
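For example (the binary name my_tests is just a placeholder for your test executable):
./my_tests --gtest_list_tests                # print every test the binary contains
./my_tests --gtest_filter=MyFixture.MyTest   # run a single test
GTEST_FILTER='MyFixture.*' ./my_tests        # same selection via the environment variable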
Is there a straightforward way when using ctest to get the number of tests passed (and/or failed) within a script, e.g., BASH, without grep-ping through a generated output file?
a straightforward way ... without grep-ping
No, I believe there is not.
You can also grep and count the lines Test passed. and Test failed. in CTest's the_build_dir/Testing/Temporary/LastTest.log.
You could potentially generate the CTest XML reports that are meant for a dashboard and then parse them yourself (instead of sending them). That is nowhere near as straightforward: a ctest script has to be written that configures, builds, and tests the project, and then a separate XML tool needs to parse the results.
You can also run a CDash server, let that ctest script upload the results to it, and then query the server with a simple curl 'https://your.cdash.server/api/v1/index.php?project=TheProjectName' | jq '.buildgroups[] | select(.id == 2).builds[] | { "pass": .test.pass, "fail": .test.fail }'. The querying is simple, but it needs a running CDash server as well as testing via a ctest script, so it is not anywhere near straightforward.
Btw, it's easy to get the number of failed tests - it's just wc -l the_build_dir/Testing/Temporary/LastTestsFailed.log.
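A rough sketch of that grep approach (the paths are the default CTest locations inside your build directory; the exact wording in the log may vary between CTest versions, hence the case-insensitive match):
log=the_build_dir/Testing/Temporary/LastTest.log
passed=$(grep -c -i 'Test Passed' "$log")
failed=$(grep -c -i 'Test Failed' "$log")
echo "passed=$passed failed=$failed"
# LastTestsFailed.log has one line per failed test, so this also gives the failure count
wc -l < the_build_dir/Testing/Temporary/LastTestsFailed.log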
When defining a Bamboo plan variable, the page says this:
For task configuration fields, use the syntax
${bamboo.myvariablename}. For inline scripts, variables are exposed as
shell environment variables which can be accessed using the syntax
$BAMBOO_MY_VARIABLE_NAME (Linux/Mac OS X) or %BAMBOO_MY_VARIABLE_NAME%
(Windows).
However, that doesn't work in my Linux inline script. For example, I have the following defined as a plan variable:
name: my_plan_var
value: some_string
My inline script is simply...
PLAN_VAR=$BAMBOO_MY_PLAN_VAR
echo "Plan var: $PLAN_VAR"
and I just get a blank string.
I've tried this
PLAN_VAR=${bamboo.my_plan_var}
But I get
${bamboo.my_plan_var}: bad substitution
on the log viewer window.
Any pointers?
I tried the following and it works:
On the plan, I set my_plan_var to "it works" (w/o quotes)
In the inline script (don't forget the first line):
#!/bin/sh
PLAN_VAR=$bamboo_my_plan_var
echo "testing: $PLAN_VAR"
And I got the expected result:
testing: it works
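If you are ever unsure what name Bamboo gives a variable inside an inline script, one way to find out (assuming the plan variables really are exported into the environment, as the example above suggests) is to dump them:
#!/bin/sh
# List every variable Bamboo injected; the prefix observed above is lowercase bamboo_
env | grep '^bamboo_' | sort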
I also wanted to create a Bamboo variable, and the only way I've found to share it between scripts is with inject-variables, like the following:
Add the following to your bamboo-spec.yaml after the script that creates the variable:
Build:
  tasks:
    - script: create-bamboo-var.sh
    - inject-variables:
        file: bamboo-specs/vars.yaml
        scope: RESULT
        # namespace: plan
    - script: echo ${bamboo.inject.GIT_VERSION} # just for testing
Note: Namespace defaults to inject.
In create-bamboo-var.sh create the file bamboo-specs/vars.yaml:
#!/bin/bash
versionStr=$(git describe --tags --always --dirty --abbrev=4)
echo "GIT_VERSION: ${versionStr}" > ./bamboo-specs/vars.yaml
Or for multiple lines you can use:
SW_NUMBER_DIGITS=${1} # Passed as first parameter to build script
cat <<EOT > ./bamboo-specs/vars.yaml
GIT_VERSION: ${versionStr}
SW_NUMBER_APP: ${SW_NUMBER_DIGITS}
EOT
Scope can be local or result. Local means the variable is only available to the current job; result means it can be used in subsequent stages of this plan and in releases that are created from the result.
Namespace is just used to avoid naming collisions with other variables.
With the above you can use that variable in later scripts with ${bamboo.inject.GIT_VERSION}. The last script task is just to see that it is working in other scripts. You can also see the variables in the web app as build meta data.
I'm using the above script before the build (in my case compiling C-Code) takes place so I can also create a version.h file that can be used by the source code.
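As a minimal sketch of that idea (the output path and macro name here are my own, not something Bamboo dictates):
#!/bin/bash
# Write the same git-describe string into a header the C code can include
versionStr=$(git describe --tags --always --dirty --abbrev=4)
cat <<EOT > ./src/version.h
#ifndef VERSION_H
#define VERSION_H
#define GIT_VERSION "${versionStr}"
#endif
EOT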
This is still a bit cumbersome, but I'm happy with it and I hope it helps others configure Bamboo. The Bamboo documentation could be better. (It still took a lot of trial and error.)
I'm working on a project using CMake and just integrated some CppUnit tests. I would like to use CTest and thus I used add_test in my CMakeLists.txt files to have the tests executed when typing make test.
Yet I observe that, when typing make test, it says that all the tests passed even if I make a test with trivial errors. Erroneous tests report these errors when executed manually (e.g. ./my_test) but not when executed using make test.
Here is the content of my CMakeLists.txt in the test directory:
add_executable(TestDataSpace TestDataSpace.cpp)
target_link_libraries(TestDataSpace ${DEP_LIBRARIES} ${CPPUNIT_LIBRARIES})
add_executable(TestVariableManager TestVariableManager.cpp)
target_link_libraries(TestVariableManager ${DEP_LIBRARIES} ${CPPUNIT_LIBRARIES})
add_executable(TestLayoutManager TestLayoutManager.cpp)
target_link_libraries(TestLayoutManager ${DEP_LIBRARIES} ${CPPUNIT_LIBRARIES})
add_test(NAME "TestDataSpace" COMMAND ${MY_PROJECT_SOURCE_DIR}/test/TestDataSpace)
add_test(NAME "TestVariableManager" COMMAND ${MY_PROJECT_SOURCE_DIR}/test/TestVariableManager)
add_test(NAME "TestLayoutManager" COMMAND ${MY_PROJECT_SOURCE_DIR}/test/TestLayoutManager)
CTest does find the executables, since putting a wrong path for the command makes CMake complain that it doesn't find them.
make test outputs the following:
Running tests...
Test project
    Start 1: TestDataSpace
1/3 Test #1: TestDataSpace ....................   Passed    0.01 sec
    Start 2: TestVariableManager
2/3 Test #2: TestVariableManager ..............   Passed    0.02 sec
    Start 3: TestLayoutManager
3/3 Test #3: TestLayoutManager ................   Passed    0.01 sec

100% tests passed, 0 tests failed out of 3
What am I missing?
I'm not familiar with CppUnit, but I suspect your executables are always returning 0, even if the test fails. CTest takes a return of 0 to indicate success.
If you change your return value when the test fails to a non-zero number, you should see the expected output from CTest.
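You can see what a test currently returns by running it by hand and checking its exit status:
./TestDataSpace   # run the test executable directly
echo $?           # 0 means CTest will report it as Passed, anything else as Failed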
Alternatively, you can modify CTest's behaviour by using set_tests_properties to set the values of PASS_REGULAR_EXPRESSION and/or FAIL_REGULAR_EXPRESSION. If either of these is set, the return value is ignored. So for example, you could do:
set_tests_properties(
    TestDataSpace
    TestVariableManager
    TestLayoutManager
    PROPERTIES PASS_REGULAR_EXPRESSION "TEST PASSED;Pass")
As an aside, you can avoid passing the full path to the test executables in your case since they are actual CMake targets defined in the same CMakeLists.txt:
add_test(NAME TestDataSpace COMMAND TestDataSpace)
add_test(NAME TestVariableManager COMMAND TestVariableManager)
add_test(NAME TestLayoutManager COMMAND TestLayoutManager)
I have to verify a DPRAM.
Each test case is written in a different file, named test1.v, test2.v, etc.
I want to write a Unix script such that when I type run test1.v, only that test case runs.
Note: test1.v contains only tasks, which include read assert, write assert, etc.
The testbench is a separate file which includes the clock and component instantiation.
When run test1.v is executed, it should link the test1.v tasks to the testbench and then produce the output.
I have written the code in Verilog.
How do I do this?
So, as far as I can make out, your different tests, or 'testcases', are in files named test<n>.v. I'll assume that each of these testcases has a task with the same name in all files, say run_testcase. This means that your testbench (testbench.v, say) must look something like:
module testbench();
  ...
  `include "test.v" // <- problem is this line
  ...
  initial begin
    // Some setup
    run_testcase();
    //
    $finish;
  end
endmodule
So your problem is the include line: a different file needs to be included depending on the testcase. I can think of two ways of solving this. The first, as toolic suggested, is to use a symbolic link to 'rename' the testcase file. An example wrapper script (run_sim1) to launch your sim might look a bit like:
#! /usr/bin/env sh
testcase=$1
ln -sf ${testcase} test.v
my_simulator testbench.v
Another way is to use a macro, and define this in the wrapper script for your simulation. Your testbench would be modified to look like:
...
`include `TESTCASE
...
And the wrapper script (run_sim2):
#! /usr/bin/env sh
testcase=$1
my_simulator testbench.v +define+TESTCASE=\"${testcase}\"
The quotes are important here, as the Verilog `include directive expects them. Unfortunately, we can't leave the quotes in the testbench, because then it would look like a string to Verilog and the TESTCASE macro wouldn't be expanded.
One way to do it is to have the testbench file include a test file with a generic name:
`include "test.v"
Then, have your script create a symbolic link to the test you want to run. For example, in a shell script or Makefile, to run test1.v:
ln -sf test1.v test.v
run_sim
To run test2.v, your script would substitute test2 for test1, etc.
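A tiny wrapper along those lines (the script name is my own; run_sim stands for however you launch your simulator) might be:
#!/usr/bin/env sh
# Usage: ./run_test.sh test1.v
ln -sf "$1" test.v
run_sim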