Running python unit tests as part of bamboo build - bamboo

I need to run Python unit test cases as part of a Bamboo build step, and the build needs to fail if the unit tests fail.
For this, I have a Script step in the Bamboo build and I am trying to run the following in it:
python -m unittest discover /test
Here, /test folder has all the unit tests.
The output of the above script is:
Ran (0) tests
So the problem is that bamboo isn't able to discover these tests. Bamboo agent is linux.
Wondering if anyone has done such a thing before and has any suggestions.

The following worked. I used the -p (pattern) option to discover/run the unit tests in Bamboo (Unix build agent):
python -m unittest discover -s test -p "T*.py"
Note: 1. all my test cases start with "T" e.g. Test_check.py
2. "test" is the package where all my test cases are.
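For reference, here is a minimal sketch of a test module that the pattern would match; the file name, class, and assertion are illustrative, not from the original setup:

```python
import unittest

# Illustrative test case; in the setup above this would live in
# test/Test_check.py so that the -p "T*.py" pattern matches the file name.
class TestCheck(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

# Programmatic equivalent of what `python -m unittest discover` does after
# finding the module: load the tests and run them.
loader = unittest.defaultTestLoader
suite = loader.loadTestsFromTestCase(TestCheck)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("tests run:", result.testsRun)
```

When run via `python -m unittest`, a failing test produces a non-zero exit status, which is what makes the Bamboo script step fail the build.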

If you haven't figured it out yet, it's likely because on Windows filenames aren't case-sensitive, but on Linux they are...
So your test file named Test_xxxx.py doesn't match test*.py, which is the default pattern that discovery tries to use...
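The case-sensitivity point is easy to check: unittest discovery matches file names against a glob pattern (default test*.py). A quick sketch using fnmatchcase, which is always case-sensitive and therefore mirrors the Linux behavior:

```python
from fnmatch import fnmatchcase

# unittest discovery compares file names against a pattern (default "test*.py").
# fnmatchcase is always case-sensitive, like file name matching on Linux.
print(fnmatchcase("Test_check.py", "test*.py"))  # default pattern: no match
print(fnmatchcase("Test_check.py", "T*.py"))     # explicit -p pattern: match
```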

Related

How to run CI selenium side runner tests on Jenkins

I have a .side file generated by the Selenium IDE, which I need to run on CI using Jenkins.
I am running it as a build step with the following shell command:
selenium-side-runner /path/to/file.side
The problem is that even if the Selenium test fails, Jenkins always shows the build as a success.
In this thread it's suggested to upload the file as a generic artifact, but the commands to execute it are still missing:
How to upload a generic file into a Jenkins job?
I've found a possible solution in this post, but I would appreciate a cleaner way to solve this instead of parsing the results to check for errors:
How to mark a build unstable in Jenkins when running shell scripts
Is there a plugin able to run selenium .side files on Jenkins and this one showing the success/failures of the test?
You can generate a JUnit test report file and then use the Jenkins JUnit plugin after your test execution:
selenium-side-runner --output-directory=results --output-format=junit
# Outputs results in JUnit format in ./results/projectName.xml
Check the official documentation for more details.
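If the JUnit plugin is not an option, a fallback is to parse the report yourself and fail the build when it contains failures. This is a sketch only; the XML below is an illustrative JUnit-style report, not actual selenium-side-runner output:

```python
import xml.etree.ElementTree as ET

# Illustrative JUnit-style report; in practice you would read the
# results/projectName.xml file produced by selenium-side-runner.
report = """<testsuite tests="2" failures="1">
  <testcase name="login ok"/>
  <testcase name="search"><failure message="element not found"/></testcase>
</testsuite>"""

root = ET.fromstring(report)
failures = root.findall(".//failure")
print("failures:", len(failures))

# Returning a non-zero exit code from the shell step is what makes
# Jenkins mark the build as failed.
exit_code = 1 if failures else 0
```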

How to run the gem5 unit tests?

Gem5 has several tests in the source tree, and there is some documentation at: http://www.gem5.org/Regression_Tests but those docs are not very clear.
What tests are there and how to run them?
Unit vs regression tests
gem5 has two kinds of tests:
regression: run some workload (full system or syscall emulation) on the entire simulator
unit: test only a tiny part of the simulator, without running the entire simulator binary
We will cover both in this answer.
Regression tests
2019 regression tests
A new testing framework was added in 2019 and it is documented at: https://gem5.googlesource.com/public/gem5/+/master/TESTING.md
Before sending patches, you basically want to run:
cd tests
./main.py run -j `nproc` -t `nproc`
This will:
build gem5 for the actively supported ISAs (X86, ARM, RISCV), with nproc threads due to -j
download the binaries required to run the tests from gem5.org, e.g. http://www.gem5.org/dist/current/arm/ (see also: http://gem5.org/Download). It is not currently possible to download them outside of the source tree, which is bad if you have a bunch of git worktrees lying around.
run the quick tests on nproc threads due to -t, which should finish in a few minutes
You can achieve the same as the previous command without cd by passing the tests/ directory as an argument:
./main.py run -j `nproc` -t `nproc` tests
but I wish neither were necessary: https://gem5.atlassian.net/browse/GEM5-397
This is exactly what the automated upstream precommit tests are running as can be seen from tests/jenkins/presubmit.sh.
Stdout contains clear result output of form:
Test: cpu_test_DerivO3CPU_FloatMM-ARM-opt Passed
Test: cpu_test_DerivO3CPU_FloatMM-ARM-opt-MatchStdout Passed
Test: realview-simple-atomic-ARM-opt Failed
Test: realview-simple-atomic-dual-ARM-opt Failed
and details about each test can be found under:
tests/.testing-results/
e.g.:
.testing-results/SuiteUID:tests-gem5-fs-linux-arm-test.py:realview-simple-atomic-ARM-opt/TestUID:tests-gem5-fs-linux-arm-test.py:realview-simple-atomic-ARM-opt:realview-simple-atomic-ARM-opt/
although we only see some minimal stdout/stderr output there which don't even show the gem5 stdout. The stderr file does however contain the full command:
CalledProcessError: Command '/path/to/gem5/build/ARM/gem5.opt -d /tmp/gem5outJtSLQ9 -re '/path/to/gem5/tests/gem5/fs/linux/arm/run.py /path/to/gem5/master/tests/configs/realview-simple-atomic.py' returned non-zero exit status 1
so you can remove -d and -re and re-run that to see what is happening, which is potentially slow, but I don't see another way.
If a test gets stuck running forever, you can find its raw command with Linux' ps aux command since processes are forked for each run.
Request to make it easier to get the raw run commands from stdout directly: https://gem5.atlassian.net/browse/GEM5-627
Request to properly save stdout: https://gem5.atlassian.net/browse/GEM5-608
To further stress test a single ISA, you can run all tests for one ISA with:
cd tests
./main.py run -j `nproc` -t `nproc` --isa ARM --length long --length quick
Each test is classified as either long or quick, and using both --length runs both.
long tests are typically very similar to the default quick ones, but using more detailed and therefore slower models, e.g.
tests/quick/se/10.mcf/ref/arm/linux/simple-atomic/ is quick with a faster atomic CPU
tests/long/se/10.mcf/ref/arm/linux/minor-timing/ is long with a slower Minor CPU
Tested in gem5 69930afa9b63c25baab86ff5fbe632fc02ce5369.
2019 regression tests run just one test
List all available tests:
./main.py list --length long --length quick
This shows both suites and tests, e.g.:
SuiteUID:tests/gem5/cpu_tests/test.py:cpu_test_AtomicSimpleCPU_Bubblesort-ARM-opt
TestUID:tests/gem5/cpu_tests/test.py:cpu_test_AtomicSimpleCPU_Bubblesort-ARM-opt:cpu_test_AtomicSimpleCPU_Bubblesort-ARM-opt
TestUID:tests/gem5/cpu_tests/test.py:cpu_test_AtomicSimpleCPU_Bubblesort-ARM-opt:cpu_test_AtomicSimpleCPU_Bubblesort-ARM-opt-MatchStdout
And now you can run just one test with --uid:
./main.py run -j `nproc` -t `nproc` --isa ARM --uid SuiteUID:tests/gem5/cpu_tests/test.py:cpu_test_AtomicSimpleCPU_FloatMM-ARM
A bit confusingly, --uid must point to a SuiteUID, not TestUID.
Then, when you run the tests, and any of them fails, and you want to run just the failing one, the test failure gives you a line like:
Test: cpu_test_DerivO3CPU_FloatMM-ARM-opt Passed
and the only way to run just that test is to grep for that string in the output of ./main.py list, since cpu_test_DerivO3CPU_FloatMM-ARM-opt is not a full test ID, which is very annoying.
2019 regression tests out of tree
By default, tests/main.py places the build at gem5/build inside the source tree. Testing an out of tree build is possible with --build-dir:
./main.py run -j `nproc` -t `nproc` --isa ARM --length quick --build-dir path/to/my/build
which places the build in path/to/my/build/ARM/gem5.opt instead for example.
If your build is already done, save a few scons seconds with the --skip-build option as well:
./main.py run -j `nproc` -t `nproc` --isa ARM --length quick --build-dir path/to/my/build --skip-build
Note however that --skip-build also skips the downloading of test binaries. TODO patch that.
2019 regression tests custom binary download directory
Since https://gem5-review.googlesource.com/c/public/gem5/+/24525 you can use the --bin-path option to specify where the test binaries are downloaded, otherwise they just go into the source tree.
This allows you to reuse the large binaries such as disk images across tests in multiple worktrees in a single machine, saving time and space.
Pre-2019 regression tests
This way of running tests is deprecated and will be removed.
The tests are run directly with scons.
But since the test commands are a bit long, there is an in-tree utility to generate the test commands for you.
For example, to get the command to run X86 and ARM quick tests, run:
./util/regress -n --builds X86,ARM quick
The other options besides quick are long, or all to run both long and quick at the same time.
With -n it just prints the test commands; without it, it actually runs them.
This outputs something like:
scons \
--ignore-style \
--no-lto \
build/X86/gem5.debug \
build/ARM/gem5.debug \
build/X86/gem5.fast \
build/ARM/gem5.fast \
build/X86/tests/opt/quick/se \
build/X86/tests/opt/quick/fs \
build/ARM/tests/opt/quick/se \
build/ARM/tests/opt/quick/fs
TODO: why does it build gem5.debug and gem5.fast, but then runs an /opt/ test?
So note how this would both:
build the gem5 executables, e.g. build/X86/gem5.debug
run the tests, e.g. build/X86/tests/opt/quick/fs
Or get the command to run all tests for all archs:
./util/regress -n all
Then, if you just want to run one of those types of tests, e.g. the quick X86 ones, you can copy-paste the scons command for just those tests:
scons --ignore-style build/X86/tests/opt/quick/se
Running the tests with an out of tree build works as usual by magically parsing the target path: How to build gem5 out of tree?
scons --ignore-style /any/path/that/you/want/build/X86/tests/opt/quick/se
or you can pass the --build-dir option to util/regress:
./util/regress --build-dir /any/path/that/you/want all
The tests that boot Linux on the other hand require a Linux image with a specific name in the M5_PATH, which is also annoying.
This would however be very slow, not something that you can run after each commit: you are more likely to want to run just the quick tests for your ISA of interest.
Pre-2019 regression tests: Run just one test
If you just append the path under tests in the source tree to the test commands, it runs all the tests under a given directory.
For example, we had:
scons --ignore-style build/X86/tests/opt/quick/se
and we notice that the following path exists under tests in the source tree:
quick/se/00.hello/ref/x86/linux/simple-atomic/
so we massage the path by removing ref to get the final command:
scons build/X86/tests/opt/quick/se/00.hello/x86/linux/simple-atomic
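That path massage can be sketched as a small helper (the function name is hypothetical; it just drops the ref component and prefixes the scons test target path):

```python
def scons_test_target(test_path, isa="X86", variant="opt"):
    """Turn a tests/ source-tree path into a scons test target (illustrative)."""
    parts = [p for p in test_path.strip("/").split("/") if p != "ref"]
    return "build/{}/tests/{}/{}".format(isa, variant, "/".join(parts))

print(scons_test_target("quick/se/00.hello/ref/x86/linux/simple-atomic"))
# -> build/X86/tests/opt/quick/se/00.hello/x86/linux/simple-atomic
```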
Pre-2019 regression tests: Find out the exact gem5 CLI of the command run
When you run the tests, they output to stdout the m5out path.
Inside the m5out path, there is a simout with the emulator stdout, which contains the full gem5 command line used.
For example:
scons --ignore-style build/X86/tests/opt/quick/se
outputs:
Running test in /any/path/that/you/want/build/ARM/tests/opt/quick/se/00.hello/arm/linux/simple-atomic.
and the file:
/any/path/that/you/want/build/ARM/tests/opt/quick/se/00.hello/arm/linux/simple-atomic/simout
contains:
command line: /path/to/mybuild/build/ARM/gem5.opt \
-d /path/to/mybuild/build/ARM/tests/opt/quick/fs/10.linux-boot/arm/linux/realview-simple-atomic \
--stats-file 'text://stats.txt?desc=False' \
-re /path/to/mysource/tests/testing/../run.py \
quick/fs/10.linux-boot/arm/linux/realview-simple-atomic
Pre-2019 regression tests: Re-run just one test
If you just run a test twice, e.g. with:
scons build/ARM/tests/opt/quick/fs/10.linux-boot/arm/linux/realview-simple-atomic
scons build/ARM/tests/opt/quick/fs/10.linux-boot/arm/linux/realview-simple-atomic
the second run will not really re-run the test, but rather just compare the stats from the previous run.
To actually re-run the test, you must first clear the stats generated from the previous run before re-running:
rm -rf build/ARM/tests/opt/quick/fs/10.linux-boot/arm/linux/realview-simple-atomic
Pre-2019 regression tests: Get test results
Even this is messy... scons does not return 0 for success and 1 for failure, so you have to parse the logs. One easy way is:
scons --ignore-style build/X86/tests/opt/quick/se |& grep -E '^\*\*\*\*\* '
which contains three types of results: PASSED, CHANGED or FAILED.
CHANGED is mostly for stat comparisons that showed a large difference; those are generally very hard to maintain and permanently broken, so you should focus on FAILED.
Note that most tests currently rely on SPEC2000 and fail unless you have access to this non-free benchmark...
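The same log check can be done in a few lines of Python instead of grep. This is a sketch only; the log excerpt below is illustrative of lines starting with '***** ', not verbatim scons output:

```python
import re

# Illustrative scons test log excerpt; real lines come from the scons run.
log = """***** build/X86/tests/opt/quick/se/00.hello PASSED!
***** build/X86/tests/opt/quick/se/10.mcf CHANGED!
***** build/X86/tests/opt/quick/fs/10.linux-boot FAILED!"""

# Capture the test path and its status word from each result line.
results = re.findall(r"^\*{5} (\S+) (\w+)", log, flags=re.MULTILINE)
failed = [path for path, status in results if status.upper() == "FAILED"]
print("failed tests:", failed)
```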
Unit tests
The unit tests compile to separate executables from gem5 and test just a tiny part of the code.
There are currently two types of tests:
UnitTest: old and deprecated, should be converted to GTest
GTest: new and good. Uses Google Test.
They are placed next to the class that they test, e.g.:
src/base/cprintf.cc
src/base/cprintf.hh
src/base/cprintftest.cc
Compile and run all the GTest unit tests:
scons build/ARM/unittests.opt
Sample output excerpt:
build/ARM/base/cprintftest.opt --gtest_output=xml:build/ARM/unittests.opt/base/cprintftest.xml
Running main() from gtest_main.cc
[==========] Running 4 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 4 tests from CPrintf
[ RUN ] CPrintf.Misc
[ OK ] CPrintf.Misc (0 ms)
[ RUN ] CPrintf.FloatingPoint
[ OK ] CPrintf.FloatingPoint (0 ms)
[ RUN ] CPrintf.Types
[ OK ] CPrintf.Types (0 ms)
[ RUN ] CPrintf.SpecialFormatting
[ OK ] CPrintf.SpecialFormatting (0 ms)
[----------] 4 tests from CPrintf (0 ms total)
[----------] Global test environment tear-down
[==========] 4 tests from 1 test case ran. (0 ms total)
[ PASSED ] 4 tests.
Compile and run just one test file:
scons build/ARM/base/cprintf.test.opt
./build/ARM/base/cprintf.test.opt
List available test functions from a test file, and run just one of them:
./build/ARM/base/cprintftest.opt --gtest_list_tests
./build/ARM/base/cprintftest.opt SpecialFormatting
Tested on gem5 200281b08ca21f0d2678e23063f088960d3c0819, August 2018.
Unit tests with SimObjects
As of 2019, the unit tests are quite limited, because the devs haven't yet found a proper way to test SimObjects in isolation; they make up the bulk of the simulator and are tightly bound to the rest of it. This unmerged patch attempted to address that: https://gem5-review.googlesource.com/c/public/gem5/+/15315
It might be possible to work around it with Google Mock, which is already present in-tree, but it is not clear if anyone has the patience to mock out enough of SimObject to actually make such tests.
I believe the only practical solution is to embed all tests into gem5.opt, and then have a --test <testname> option that runs tests instead of running a simulation. This way we get a single binary without duplicating binary sizes, but can still access everything.
Related issue: https://gem5.atlassian.net/browse/GEM5-433
Continuous integration
20.1 Nightlies enabled
As mentioned at: https://www.gem5.org/project/2020/10/01/gem5-20-1.html a Jenkins that runs the long regressions was added at: https://jenkins.gem5.org/job/Nightly/
2019 CI
Around 2019-04, a precommit CI was set up that runs after every pull request, once the maintainer gives +1.
It uses a magic semi-internal Google-provided Jenkins setup called Kokoro, which provides low visibility into its configuration.
See this for example: https://gem5-review.googlesource.com/c/public/gem5/+/18108 That server does not currently run nightlies. The entry point is tests/jenkins/presubmit.sh.
Nightlies were just disabled to start with.
What is the environment of the 2019 CI?
In-tree Docker images are used: https://askubuntu.com/questions/350475/how-can-i-install-gem5/1275773#1275773
Pre-2019 CI update
There was a server running somewhere that ran the quick tests for all archs nightly and posted them on the dev mailing list, adding to the endless noise of that enjoyable list :-)
Here is a sample run: https://www.mail-archive.com/gem5-dev@gem5.org/msg26855.html
As of 2019Q1, gem5 devs are trying to setup an automated magic Google Jenkins to run precommit tests, a link to a prototype can be found at: https://gem5-review.googlesource.com/c/public/gem5/+/17456/1#message-e9dceb1d3196b49f9094a01c54b06335cea4ff88 This new setup uses the new testing system in tests/main.py.
Pre-2019 CI: Why are so many tests CHANGED all the time?
As of August 2018, many tests have been CHANGED for a long time.
This is because stats can vary due to a very wide number of complex factors. Some of those may be more accurate, others no one knows, others are just bugs. Changes happen so often that the devs haven't found the time to properly understand and justify them.
If you really care about why they changed, the best advice I have is to bisect them. But generally your best bet is to just re-run your old experiments on the newer gem5 version, and compare everything there.
gem5 is not a cycle-accurate system simulator, so absolute values or small variations are not meaningful in general. This also teaches us that results obtained with small margins are generally not meaningful for publication, since the noise is too great. What that error margin is, I don't know.

How to run CTest test in debugger

Suppose in CMakeLists.txt I have
add_executable(mytarget main.cpp)
enable_testing()
add_test(mytarget_test0 mytarget -option0)
Is there any easy way how can I run mytarget in GDB with all command line options from some particular CTest test? (Other than searching for test in CMakeLists and then copy-pasting add_test parameters to command line manually?)
Real life scenario: I run all tests using ctest, one fails, I want to open it in debugger quickly.
In other build systems there are command-line parameters to use gdb, for example in Meson meson test --gdb testname, or in Bazel bazel --run_under=gdbserver. I did not find anything similar for CTest.
It is possible to get test command with arguments:
ctest -R $regex_matching_test -V -N
As output you will get something like:
Test project ../cmake-build-debug-gcc7
Constructing a list of tests
Done constructing a list of tests
1: Test command: ../cmake-build-debug-gcc7/my_tool "-v" "test0"
Test #1: my_tool_test0
Total Tests: 1
Then, using a regexp, it is possible to grab the command-line args for gdb.
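That grab can be sketched as follows; the output string below is an illustrative excerpt of what `ctest -R ... -V -N` prints, as shown above:

```python
import re
import shlex

# Illustrative `ctest -V -N` output; the real text comes from running ctest.
output = """Constructing a list of tests
1: Test command: ../cmake-build-debug-gcc7/my_tool "-v" "test0"
  Test #1: my_tool_test0"""

# Grab everything after "Test command:" and strip the quoting ctest adds.
match = re.search(r"Test command:\s*(.+)", output)
args = shlex.split(match.group(1))
print("gdb --args " + " ".join(args))
```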
I use the following procedure:
make clean
make # Build target and all unit tests
make test # Run all unit tests
You will get a result list and some output, depending on the unit test framework. Fix the issue and re-run the failed test within gdb until it succeeds. Then restart the whole procedure.
In case you are using an IDE (e.g. QtCreator) it is easy to debug through the failed test case.

Display selenese-runner results in Jenkins

As I am implementing an automated way to GUI-test our web application with Selenium, I ran into some issues.
I am using selenese-runner to execute our Selenium test suites, created with Selenium IDE, as a post-build action in Jenkins.
This works perfectly fine, as the build fails when something is wrong and succeeds if all tests pass. And the results are stored on a per-build basis as HTML files generated by selenese-runner.
My problem is however, that I seem to be unable to find a way, how to display these results in the respective jenkins build.
Does anyone have an idea how to solve this issue. Or maybe I am on the wrong path at all?
Your help is highly appreciated!
I believe the JUnit plugin should do what you want, but it doesn't work for me.
My config uses this shell script to run the tests (you can see the names of all my test suites):
/usr/bin/Xvfb &
export DISPLAY=localhost:0.0
cd ${WORKSPACE}
java -jar ./test/selenium/bin/selenese-runner.jar --baseurl http://${testenvironment} --screenshot-on-fail ./seleniumResults/ --html-result ./seleniumResults/ ./test/selenium/Search_TestSuite.html ./test/selenium/Admin_RegisteredUser_Suite.html ./test/selenium/Admin_InternalUser_Suite.html ./test/selenium/PortfolioAgency_Suite.html ./test/selenium/FOAdmin_Suite.html ./test/selenium/PublicWebsite_Suite.html ./test/selenium/SystemAdmin_Content_Suite.html ./test/selenium/SystemAdmin_MetaData_Suite.html
killall Xvfb
And I can see the result of the most recent test at (you can see the name of my Jenkins job folder):
http://<JENKINS.MY.COMPANY>/job/seleniumRegressionTest/ws/seleniumResults/index.html
Earlier tests are all saved on the Jenkins server, so I can view them if I need to.

Explicitly specifying the main package to run tests for in golang with goconvey

How do I explicitly tell my go test command to run only tests for the main package and not for the others in my source directory?
At the moment it's working with $go test -v. But... I am using goconvey as well and it seems to be running recursively. According to this page https://github.com/smartystreets/goconvey/wiki/Profiles I have a file where I can pass arguments into the go test command. I know you can go test -v ./... for recursive or go test -c packagename/... but how do I just do it for the main?
Profiles are one way to accomplish this, but you can also specify a 'depth' for the runner:
$ goconvey -depth=0
A value of 0 limits the runner to the working directory.
Run goconvey -help for details.