I have 3 test suites: test1.robot (10 TCs inside), test2.robot (3 TCs inside), test3.robot (2 TCs inside).
I run all the test suites from a shell script: robot --variable:ABC --name Testing --outputdir /perf-logs/Testing test1.robot test2.robot test3.robot
I found that we have 2 ways to rerun:
--rerunfailed (for tests) and --rerunfailedsuites (for testsuites)
I have some questions:
1/ What is the difference between them (--rerunfailed vs --rerunfailedsuites)?
2/ Assuming I have 2 TCs failed in test suite test1.robot and 1 TC failed in test suite test2.robot, which rerun option should I use?
3/ Assuming the first run of the 3 test suites gives me one output.xml, and after re-running the failed TCs (for the 2 test suites) I have another output2.xml, could I merge them?
4/ In case I only re-run the 1 failed TC (in test1.robot) and get the result in output3.xml, could I merge output3.xml with the first output.xml?
Many thanks
Difference:
https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html
-R, --rerunfailed <file>
Selects failed tests from an earlier output file to be re-executed.
-S, --rerunfailedsuites <file>
Selects failed test suites from an earlier output file to be re-executed.
Which to use:
If you want to rerun the entire suites that contained failures, use --rerunfailedsuites. If you want to rerun only the failed test cases, not the passing tests in those suites, use --rerunfailed (assuming the tests are independent).
To combine the output files, use rebot. For rerun results you normally add the --merge option so that the rerun results replace the original failures:
rebot --outputdir . --output final_output.xml --merge output.xml output2.xml
4) Same as above; rebot --merge also works for merging output3.xml with output.xml.
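A sketch of the whole flow, using the paths from the question (the options shown are the standard Robot Framework ones; the --variable option from the original command is left out here):

# First run: produces /perf-logs/Testing/output.xml
robot --name Testing --outputdir /perf-logs/Testing test1.robot test2.robot test3.robot

# Rerun only the failed tests, writing the results to output2.xml
robot --rerunfailed /perf-logs/Testing/output.xml --output output2.xml --outputdir /perf-logs/Testing test1.robot test2.robot test3.robot

# Merge the rerun results back over the original ones
rebot --merge --outputdir /perf-logs/Testing --output final_output.xml /perf-logs/Testing/output.xml /perf-logs/Testing/output2.xml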
Related
There is a small project which produces a binary application. The source code is C, and I'm using autotools to create the Makefile and build the binary - that part works well.
I would like to run test cases with that binary. Here is what I did:
SUBDIRS = src
dist_doc_DATA = README
TESTS=
TESTS+=tests/config1.conf
TESTS+=tests/config2.conf
TESTS+=tests/config3.conf
TESTS+=tests/config4.conf
TESTS+=tests/config5.conf
TESTS+=tests/config6.conf
TESTS+=tests/config7.conf
TESTS+=tests/config8.conf
TESTS+=tests/config9.conf
TESTS+=tests/config10.conf
TESTS+=tests/config11.conf
I would like to run these cases as arguments to the tool. When I run make check, I get:
make[3]: Entering directory '/home/airween/src/mytool'
FAIL: tests/config1.conf
FAIL: tests/config2.conf
FAIL: tests/config3.conf
which is correct, because those files are simple configuration files.
How can I arrange for make check to run my tool with the scripts above, so that at the end I get a summary with the number of successful, failed, ... tests, like in this case:
============================================================================
Testsuite summary for mytool 0.1
============================================================================
# TOTAL: 11
# PASS: 0
# SKIP: 0
# XFAIL: 0
# FAIL: 11
# XPASS: 0
# ERROR: 0
Edit: so I would like to emulate these runs:
for f in `ls -1 tests/*.conf`; do src/mytool ${f}; done
but - of course - I want to see the summary at the end.
Thanks.
The Autotools' built-in test runner expects you to specify the names of executable tests via the make variable TESTS. You cannot just put random filenames in there and expect make or Automake to know what to do with them.
The tests can be built programs, generated scripts, static scripts distributed with the project, or any combination of the above.
How can I solve that make check runs my tool with the scripts above, and finally I get a [test summary report]?
You have acknowledged that your configuration files are not scripts, so stop calling them that! This is in fact the crux of the problem. The easiest solution is probably to create actual executable scripts, one for each case, and name those in your TESTS variable. Each one would run the binary under test with the appropriate configuration file (that is, you're responsible for making them do that if those are the tests you want to perform).
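For illustration only, one such wrapper per case could look like this (the script name and the in-tree paths are hypothetical, not taken from the question):

tests/check-config1.sh:
#!/bin/sh
# Run the binary under test with one particular configuration file
exec ./src/mytool tests/config1.conf

The top-level Makefile.am would then list the wrappers instead of the .conf files, e.g. TESTS = tests/check-config1.sh tests/check-config2.sh and so on. This assumes an in-tree build where src/mytool and tests/ are reachable from the directory make check runs in.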
See also the Automake Manual's chapter on tests.
Okay, the solution from here:
tests/Makefile.am:
==================
TEST_EXTENSIONS = .conf
CONF_LOG_COMPILER = ./test-suit.sh
TESTS=
TESTS+=config1.conf
TESTS+=config2.conf
TESTS+=config3.conf
TESTS+=config4.conf
TESTS+=config5.conf
TESTS+=config6.conf
TESTS+=config7.conf
TESTS+=config8.conf
TESTS+=config9.conf
TESTS+=config10.conf
TESTS+=config11.conf
tests/test-suit.sh:
==================
#!/bin/sh
# Wrapper invoked by the Automake test harness with one .conf file as its argument
CONF=$1
exec ../src/mytool "$CONF"
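One detail that is easy to miss (not spelled out above): since CONF_LOG_COMPILER points at ./test-suit.sh, the script must be executable, e.g.:
chmod +x tests/test-suit.sh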
And the result:
make check
...
PASS: config1.conf
PASS: config2.conf
PASS: config3.conf
PASS: config4.conf
PASS: config5.conf
PASS: config6.conf
PASS: config7.conf
PASS: config8.conf
PASS: config9.conf
PASS: config10.conf
PASS: config11.conf
============================================================================
Testsuite summary for mytool 0.1
============================================================================
# TOTAL: 11
# PASS: 11
# SKIP: 0
# XFAIL: 0
# FAIL: 0
# XPASS: 0
# ERROR: 0
============================================================================
This is what I expected.
I'm learning how to add unit tests to an Objective-C project using Xcode 9. I've created a command-line project from scratch called Foo and afterwards added a new target to the project called FooTests. I then edited Foo's scheme to add FooTests. However, whenever I run the tests (i.e., menu "Product" -> "Test"), Xcode 9 throws the following error:
Test target FooTests encountered an error (Early unexpected exit, operation never finished bootstrapping - no restart will be attempted)
However, when I try to run the tests by calling xcodebuild from the command line, it seems that all unit tests are executed correctly. Here's the output:
a483e79a7057:foo ram$ xcodebuild test -project foo.xcodeproj -scheme foo
2020-05-15 17:39:30.496 xcodebuild[53179:948485] IDETestOperationsObserverDebug: Writing diagnostic log for test session to:
/var/folders/_z/q35r6n050jz5fw662ckc_kqxbywcq0/T/com.apple.dt.XCTest/IDETestRunSession-E7DD2270-C6C2-43ED-84A9-6EBFB9A4E853/FooTests-8FE46058-FC4A-47A2-8E97-8D229C5678E1/Session-FooTests-2020-05-15_173930-Mq0Z8N.log
2020-05-15 17:39:30.496 xcodebuild[53179:948484] [MT] IDETestOperationsObserverDebug: (324DB265-AD89-49B6-9216-22A6F75B2EDF) Beginning test session FooTests-324DB265-AD89-49B6-9216-22A6F75B2EDF at 2020-05-15 17:39:30.497 with Xcode 9F2000 on target <DVTLocalComputer: 0x7f90b2302ef0 (My Mac | x86_64h)> (10.14.6 (18G4032))
=== BUILD TARGET foo OF PROJECT foo WITH CONFIGURATION Debug ===
Check dependencies
=== BUILD TARGET FooTests OF PROJECT foo WITH CONFIGURATION Debug ===
Check dependencies
Test Suite 'All tests' started at 2020-05-15 17:39:30.845
Test Suite 'FooTests.xctest' started at 2020-05-15 17:39:30.846
Test Suite 'FooTests' started at 2020-05-15 17:39:30.846
Test Case '-[FooTests testExample]' started.
Test Case '-[FooTests testExample]' passed (0.082 seconds).
Test Case '-[FooTests testPerformanceExample]' started.
/Users/ram/development/objective-c/foo/FooTests/FooTests.m:36: Test Case '-[FooTests testPerformanceExample]' measured [Time, seconds] average: 0.000, relative standard deviation: 84.183%, values: [0.000006, 0.000002, 0.000001, 0.000002, 0.000001, 0.000001, 0.000001, 0.000001, 0.000001, 0.000001], performanceMetricID:com.apple.XCTPerformanceMetric_WallClockTime, baselineName: "", baselineAverage: , maxPercentRegression: 10.000%, maxPercentRelativeStandardDeviation: 10.000%, maxRegression: 0.100, maxStandardDeviation: 0.100
Test Case '-[FooTests testPerformanceExample]' passed (0.660 seconds).
Test Suite 'FooTests' passed at 2020-05-15 17:39:31.589.
Executed 2 tests, with 0 failures (0 unexpected) in 0.742 (0.743) seconds
Test Suite 'FooTests.xctest' passed at 2020-05-15 17:39:31.589.
Executed 2 tests, with 0 failures (0 unexpected) in 0.742 (0.744) seconds
Test Suite 'All tests' passed at 2020-05-15 17:39:31.590.
Executed 2 tests, with 0 failures (0 unexpected) in 0.742 (0.745) seconds
** TEST SUCCEEDED **
Does anyone know how to add unit tests to an Xcode 9 project for a command-line application? If you happen to know, what's the right way of doing this and what am I doing wrong?
I am using GitLab-CI for my build tests. I have a very simple test which compares the output of the test install/build with the known output. I put the test in a Makefile.
The Makefile entry looks like this:
test: clean
	make install DESTDIR=$(TEST_DIR)
	$(TEST_DIR)/path/to/executable > $(TEST_DIR)/tmp.out
	diff test/test.result $(TEST_DIR)/tmp.out
When the diff passes, an exit code of 0 is returned; an exit code of 1 is returned if the diff shows a difference between the files.
What I've tried:
Running make test from any shell runs the tests and exits, regardless of the diff result
Running make test from the shell as gitlab_ci_runner runs the tests and exits, regardless of the diff result
When run from GitLab-CI and the diff exit status is 0, the build returns success
The problem:
When run in GitLab-CI and the diff exit status is non-0, the build hangs.
The output on the build screen is the output of the diff, and the last line is the expected error: make: *** [test] Error 1
After that, the cycle symbol keeps spinning, and the runner does not exit with a build failure.
Any ideas? I thought it might be something with Makefiles, but GitLab-CI does exit with a fail status if make exits with Error 1 for any other test. I only see this happening with the output of the diff.
Thanks!
Also posted this to the GitLab mailing list: https://groups.google.com/d/msgid/gitlabhq/77e82813-b98e-4abe-9755-f39e07043384%40googlegroups.com?utm_medium=email&utm_source=footer
I'm trying to pass parameters to a gtest test suite from CMake:
add_test(NAME craft_test
COMMAND craft --gtest_output='xml:report.xml')
The issue is that these parameters are being passed surrounded by quotes. Why? It looks like a bug; is there a good way to avoid it?
$ ctest -V
UpdateCTestConfiguration from :/usr/local/src/craft/build-analyze/DartConfiguration.tcl
UpdateCTestConfiguration from :/usr/local/src/craft/build-analyze/DartConfiguration.tcl
Test project /usr/local/src/craft/build-analyze
Constructing a list of tests
Done constructing a list of tests
Checking test dependency graph...
Checking test dependency graph end
test 1
Start 1: craft_test
1: Test command: /usr/local/src/craft/build-analyze/craft "--gtest_output='xml:report.xml'"
1: Test timeout computed to be: 9.99988e+06
1: WARNING: unrecognized output format "'xml" ignored.
1: [==========] Running 1 test from 1 test case.
1: [----------] Global test environment set-up.
1: [----------] 1 test from best_answer_test
1: [ RUN ] best_answer_test.test_sample
1: [ OK ] best_answer_test.test_sample (0 ms)
1: [----------] 1 test from best_answer_test (0 ms total)
1:
1: [----------] Global test environment tear-down
1: [==========] 1 test from 1 test case ran. (0 ms total)
1: [ PASSED ] 1 test.
1/1 Test #1: craft_test ....................... Passed 0.00 sec
100% tests passed, 0 tests failed out of 1
Total Test time (real) = 0.00 sec
It's not the quotes that CMake adds that are the problem here; it's the single quotes in 'xml:report.xml' that are at fault.
You should do:
add_test(NAME craft_test
COMMAND craft --gtest_output=xml:report.xml)
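If the report should end up in a known location, the same unquoted form also works with a full path; for instance (using CMAKE_BINARY_DIR here is just an illustration, not part of the original answer):

add_test(NAME craft_test
         COMMAND craft --gtest_output=xml:${CMAKE_BINARY_DIR}/report.xml)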
Suppose I have 2 test suites in the local directory, foo and bar, and I want to run them in the order foo then bar.
I tried to run pybot -s foo -s bar ., but then it just runs bar then foo (i.e. in alphabetical order).
Is there a way to get pybot to run Robot Framework suites in the order that I define?
Robot Framework can use argument files to specify the order of execution (docs):
This is from older docs (not online anymore):
Another important usage for argument files is specifying input files or directories in certain order. This can be very useful if the alphabetical default execution order is not suitable:
Basically, you create something similar to a start-up script.
--name My Example Tests
tests/some_tests.html
tests/second.html
tests/more/tests.html
tests/more/another.html
tests/even_more_tests.html
There is a neat feature: from an argument file you can call another argument file, which can override previously set parameters. Processing is recursive, so you can nest as many argument files as you need.
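For example, assuming the listing above is saved as my_order.args (the file name is just a placeholder), it is passed to pybot with --argumentfile:

pybot --argumentfile my_order.args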
Another option would be to use a start-up script. Then you have to deal with other aspects, like which operating system you are running the tests on. You could also use Python to write a start-up script that works on multiple platforms. There is more in this section of the docs.
If there are multiple test case files in an RF directory, the execution order can be specified by giving numbers as prefixes to the file names, like this:
01__my_suite.html -> My Suite
02__another_suite.html -> Another Suite
Such prefixes are not included in the generated test suite name if they are separated from the base name of the suite with two underscores:
More details are here.
http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#execution-order
You can use tagging.
Tag the tests as foo and bar so you can run each test separately:
pybot -i foo tests
or
pybot -i bar tests
and decide the order
pybot -i bar tests || pybot -i foo tests
or in a script.
The drawback is that you have to run the setup for each test.
Would something like this be of any use?
pybot tests/test1.txt tests/test2.txt
So, to reverse:
pybot tests/test2.txt tests/test1.txt
I had success using a listener:
Listener.py:
class Listener(object):
    ROBOT_LISTENER_API_VERSION = 3

    def __init__(self):
        self.priorities = ['foo', 'bar']

    def start_suite(self, data, suite):
        # data.suites is a list of <TestSuite> instances
        data.suites = self.rearrange(data.suites)

    def rearrange(self, suites=[]):
        # Do some sorting of suites based on self.priorities, e.g. using bubble sort
        n = len(suites)
        if n > 1:
            for i in range(0, n):
                for j in range(0, n - i - 1):
                    # Initialize the compared suites with the lowest priority
                    priorityA = 0
                    priorityB = 0
                    # If suites[j] is prioritized, get its priority
                    if str(suites[j]) in self.priorities:
                        priorityA = len(self.priorities) - self.priorities.index(str(suites[j]))
                    # If suites[j+1] is prioritized, get its priority
                    if str(suites[j+1]) in self.priorities:
                        priorityB = len(self.priorities) - self.priorities.index(str(suites[j+1]))
                    # Compare and swap if suites[j] has a lower priority than suites[j+1]
                    if priorityA < priorityB:
                        suites[j], suites[j+1] = suites[j+1], suites[j]
        return suites
Assuming foo.robot and bar.robot are contained in a top-level suite called 'tests', you can run it like this:
pybot --listener Listener.py tests/
This will rearrange the child suites on the fly. It should also be possible to do the same modification up front using a pre-run modifier instead.
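A minimal pre-run modifier sketch of the same idea (the file name, class name and sorting logic below are assumptions, not tested code from this answer):

OrderSuites.py:
from robot.api import SuiteVisitor

class OrderSuites(SuiteVisitor):
    """Reorder direct child suites so that prioritized names run first."""

    def __init__(self, *priorities):
        # Priorities arrive as strings from the command line, e.g. foo and bar
        self.priorities = [p.lower() for p in priorities]

    def start_suite(self, suite):
        def key(child):
            # Prioritized suites sort by their position in the list; everything
            # else keeps its relative order at the end (sorted() is stable)
            name = child.name.lower()
            return self.priorities.index(name) if name in self.priorities else len(self.priorities)
        suite.suites = sorted(suite.suites, key=key)

It would be invoked with something like pybot --prerunmodifier OrderSuites.py:foo:bar tests/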