Sort tests from `cargo test` by name

I'm running some tests with cargo test and I want the tests to be listed in alphabetical order. If I do cargo test I get something like the following:
test node::tests::start_node::get_value_start ... ok
test node::tests::start_node::get_value ... ok
test node::tests::start_node::set_value ... ok
test node::tests::main_node::test_new ... ok
test main_tests::run_checks ... ok
test node::tests::start_node::test_new ... ok
test sigmoid::tests::sig_deriv_f32 ... ok
test sigmoid::tests::sig_deriv_inf::test_f32 ... ok
test sigmoid::tests::sig_deriv_inf::test_f64 ... ok
test training_data::tests::iter_chunks ... ok
test training_data::tests::get_chunks ... ok
test sigmoid::tests::sig_inf::test_f64 ... ok
test sigmoid::tests::sig_f32 ... ok
test sigmoid::tests::sig_inf::test_f32 ... ok
test sigmoid::tests::sig_deriv_f64 ... ok
test sigmoid::tests::sig_f64 ... ok
and the tests are output in a different, seemingly random order each time.
Is there a way to output the tests so that they are sorted alphabetically, something like this:
test main_tests::run_checks ... ok
test node::tests::main_node::test_new ... ok
test node::tests::start_node::get_value ... ok
test node::tests::start_node::get_value_start ... ok
test node::tests::start_node::set_value ... ok
test node::tests::start_node::test_new ... ok
test sigmoid::tests::sig_deriv_f32 ... ok
test sigmoid::tests::sig_deriv_f64 ... ok
test sigmoid::tests::sig_deriv_inf::test_f32 ... ok
test sigmoid::tests::sig_deriv_inf::test_f64 ... ok
test sigmoid::tests::sig_f32 ... ok
test sigmoid::tests::sig_f64 ... ok
test sigmoid::tests::sig_inf::test_f32 ... ok
test sigmoid::tests::sig_inf::test_f64 ... ok
test training_data::tests::get_chunks ... ok
test training_data::tests::iter_chunks ... ok

They are already run in alphabetical order by default; the problem is that they also run in parallel and print their result as soon as they finish, to save you some time. If you want the output in sorted order, you have to run the tests sequentially by limiting the test framework to one thread:
cargo test -- --test-threads=1
Note that this will most likely increase the time the tests take.
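If you prefer not to pass the flag on every run, the default test harness also honours the RUST_TEST_THREADS environment variable, which should have the same effect:
RUST_TEST_THREADS=1 cargo test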

Related

Re-run failed tests and merge results (RobotFramework)

I have 3 test suites: test1.robot (10 TCs inside), test2.robot (3 TCs inside), test3.robot (2 TCs inside).
I run all test suites with a shell script: robot --variable:ABC --name Testing --outputdir /perf-logs/Testing test1.robot test2.robot test3.robot
I found that we have 2 ways to rerun:
--rerunfailed (for tests) and --rerunfailedsuites (for testsuites)
I have some questions:
1/ What is the difference between them (--rerunfailed vs --rerunfailedsuites)?
2/ Assuming I have 2 failed TCs in test suite test1.robot and 1 failed TC in test suite test2.robot, which re-run option should I use?
3/ Assuming the first run of the 3 test suites gives me one output.xml, and re-running the failed TCs (for the 2 test suites) gives me another output2.xml, could I merge them?
4/ In case I only re-run the 1 failed TC (in test1.robot) and get the result in output3.xml, could I merge output3.xml with the first output.xml?
Many thanks
Difference:
https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html
-R, --rerunfailed <file>
Selects failed tests from an earlier output file to be re-executed.
-S, --rerunfailedsuites <file>
Selects failed test suites from an earlier output file to be re-executed.
Which to use:
If you want to rerun an entire suite, use --rerunfailedsuites; if you want to rerun only the failed test cases, not the passed tests in the suite, use --rerunfailed (if the tests are independent).
To combine the output files:
rebot --merge --outputdir . --output final_output.xml output.xml output2.xml
4) Same as above: merge the first output.xml with output3.xml in the same way.
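For question 3, the whole cycle could look roughly like this (a sketch only: the output directory and suite names come from the question, rerun.xml and final_output.xml are placeholder names, and the --variable option from the original command is left out):
# 1) Initial run of all three suites (produces output.xml)
robot --name Testing --outputdir /perf-logs/Testing test1.robot test2.robot test3.robot
# 2) Re-execute only the test cases that failed in the first run
robot --name Testing --outputdir /perf-logs/Testing --output rerun.xml --rerunfailed /perf-logs/Testing/output.xml test1.robot test2.robot test3.robot
# 3) Merge the re-run results back into the original results
rebot --outputdir /perf-logs/Testing --output final_output.xml --merge /perf-logs/Testing/output.xml /perf-logs/Testing/rerun.xml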

How to disable test cases in JMeter non-GUI?

If I run a test suite, it will run all the test cases inside it (e.g. 30 test cases). But how can I disable some of the test cases so that, for example, I run only 20 of the 30 test cases in that suite? Is there any command to do it?
You need to add an If Controller as a parent for each test case.
Add the condition ${__P(do_the_search,0)} == 1 to the If Controller.
In order to run the script with the search part turned on, simply pass this command to the console:
jmeter -n -t <test-name> -Jdo_the_search=1
Alternatively, you can use the following __groovy() function, which checks whether the test plan's base directory contains a given test case name:
${__groovy(org.apache.jmeter.services.FileServer.getFileServer().getBaseDir().contains('TestCase04'),)}
To include 2 clauses:
${__groovy(org.apache.jmeter.services.FileServer.getFileServer().getBaseDir().contains('TestCase04') || org.apache.jmeter.services.FileServer.getFileServer().getBaseDir().contains('TestCase05'),)}
You can use the above functions directly in the Thread Group, for example:
${__groovy(if (org.apache.jmeter.services.FileServer.getFileServer().getBaseDir().contains('TestCase04') || org.apache.jmeter.services.FileServer.getFileServer().getBaseDir().contains('TestCase05')) {return '0'} else {return '100'},)}
More information: Apache Groovy - Why and How You Should Use It
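For completeness, a sketch of how this could look if each test case is gated by its own property instead of the test-plan path (the property names run_TestCase04 and run_TestCase05 and the plan name test_suite.jmx are made up for illustration). Each test case's If Controller would then use a condition such as ${__P(run_TestCase04,0)} == 1, and the run is started with:
jmeter -n -t test_suite.jmx -Jrun_TestCase04=1 -Jrun_TestCase05=1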

Golang test gives inconsistent results when run in verbose mode

I have a single test function in my _test.go file with a bunch of subtests.
It looks like this:
func MyTest(t *testing.T) {
    t.Run("Subtest1", func(t *testing.T) {
        ...
    })
    t.Run("Subtest2", func(t *testing.T) {
        ...
    })
}
I run the test with go test and get
PASS
ok package_path 9.137s
However, I would like to see listed all my subtests in the result. Looking at the Run function in $GOROOT/src/testing/testing.go it looks like I need the test to be chatty.
So I tried to run the test via go test -v but I still do not get the desired output. Instead my test is now failing:
=== RUN MyTest
api.test: error: expected argument for flag '-t', try --help
exit status 1
FAIL package_path 0.004s
--help does not show anything about -t
This turned out to be a problem with the code I was testing, which expects its own arguments and contained this line:
kingpin.MustParse(cli.Parse(os.Args[1:]))
I now disallow parsing of arguments in the test.
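A minimal sketch of one way to do that (the function name realMain and the package layout are invented for illustration, not taken from the question): let the code under test receive its arguments explicitly instead of reading os.Args, so the extra flags added by go test -v never reach kingpin.
package main

import (
    "log"
    "os"

    "gopkg.in/alecthomas/kingpin.v2"
)

// realMain parses only the arguments it is given, never os.Args directly.
func realMain(args []string) error {
    app := kingpin.New("api", "example application")
    // ... flag definitions ...
    _, err := app.Parse(args)
    return err
}

func main() {
    if err := realMain(os.Args[1:]); err != nil {
        log.Fatal(err)
    }
}
The test then calls realMain(nil) (or whatever arguments it actually needs), and go test's own flags are never parsed by kingpin.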

gtest more than three tests in one case

I am new to C++ and gtest. I have a test case with 29 tests, and I would like to execute them all at once. But it seems gtest only processes up to 3 tests at a time; it shows:
[==========] Running 29 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 29 tests from StringTests
[ RUN ] StringTests.DelimitedStringComponent
[ OK ] StringTests.DelimitedStringComponent (0 ms)
[ RUN ] StringTests.boolToString
[ OK ] StringTests.boolToString (0 ms)
[ RUN ] StringTests.checkFixSASNull
[ OK ] StringTests.checkFixSASNull (0 ms)
[ RUN ] StringTests.doubleToString
Then it stopped. What is wrong with it?
Does it crash, or does it stop giving output?
In the first case, crashing, it may be a crash in the tested code. For example, any assert will kill the process instantly; it will NOT be caught by gtest.
If it simply stops giving output, it may be an infinite loop or a heavy computation. Odds are it is still calculating. In my case, adding several SCOPED_TRACE calls slowed the tests down by one or two orders of magnitude. Some of the tested operations may also be slow and can take several seconds or even minutes to finish.
We would need more data to pin down your exact problem. Hope it helps!
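One way to narrow it down yourself (not from the original answer; the binary name string_tests is a placeholder) is to run only the test that appeared to hang, using gtest's filter flag:
./string_tests --gtest_filter=StringTests.doubleToString
If that single test also hangs or crashes, the problem is in that test (or the code it exercises) rather than in gtest itself.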

How to pass quoted parameters to add_test in cmake?

I'm trying to pass parameters to a gtest test suite from cmake:
add_test(NAME craft_test
COMMAND craft --gtest_output='xml:report.xml')
The issue is that these parameters are being passed surrounded by quotes. Why? It looks like a bug; is there a good way to avoid it?
$ ctest -V
UpdateCTestConfiguration from :/usr/local/src/craft/build-analyze/DartConfiguration.tcl
UpdateCTestConfiguration from :/usr/local/src/craft/build-analyze/DartConfiguration.tcl
Test project /usr/local/src/craft/build-analyze
Constructing a list of tests
Done constructing a list of tests
Checking test dependency graph...
Checking test dependency graph end
test 1
Start 1: craft_test
1: Test command: /usr/local/src/craft/build-analyze/craft "--gtest_output='xml:report.xml'"
1: Test timeout computed to be: 9.99988e+06
1: WARNING: unrecognized output format "'xml" ignored.
1: [==========] Running 1 test from 1 test case.
1: [----------] Global test environment set-up.
1: [----------] 1 test from best_answer_test
1: [ RUN ] best_answer_test.test_sample
1: [ OK ] best_answer_test.test_sample (0 ms)
1: [----------] 1 test from best_answer_test (0 ms total)
1:
1: [----------] Global test environment tear-down
1: [==========] 1 test from 1 test case ran. (0 ms total)
1: [ PASSED ] 1 test.
1/1 Test #1: craft_test ....................... Passed 0.00 sec
100% tests passed, 0 tests failed out of 1
Total Test time (real) = 0.00 sec
It's not the quotes that CMake adds that is the problem here; it's the single quotes in 'xml:report.xml' that are at fault.
You should do:
add_test(NAME craft_test
COMMAND craft --gtest_output=xml:report.xml)
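If you also want the report to land in a predictable place, a small variation (the path here is an assumption, not part of the original answer) is to anchor it in the build directory; CMake adds any quoting the generator needs on its own, so the argument stays unquoted in the add_test call:
add_test(NAME craft_test
         COMMAND craft --gtest_output=xml:${CMAKE_CURRENT_BINARY_DIR}/report.xml)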