If I run a test suite, it will run all the test cases inside it (e.g. 30 test cases). How can I disable some of the test cases, so that I run, say, only 20 of the 30 test cases in that suite? Is there any command to do it?
You need to add an If Controller as a parent for each test case.
Add the condition ${__P(do_the_search,0)} == 1 to the If Controller:
In order to run the script with the search part of the script turned on, we simply pass this property on the command line:
jmeter -n -t <test-name> -Jdo_the_search=1
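Alternatively, instead of passing -J on every run, the property could be set in JMeter's user.properties file (a sketch; do_the_search is the property name from the If Controller condition above):
# user.properties: enable the "search" test cases by default
do_the_search=1
A value passed with -Jdo_the_search=... on the command line still takes precedence over the file.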
You can use the following __groovy() function in order to check whether the test plan's path contains a given test case name:
${__groovy(org.apache.jmeter.services.FileServer.getFileServer().getBaseDir().contains('TestCase04'),)}
To include 2 clauses:
${__groovy(org.apache.jmeter.services.FileServer.getFileServer().getBaseDir().contains('TestCase04') || org.apache.jmeter.services.FileServer.getFileServer().getBaseDir().contains('TestCase05'),)}
You can use the above functions directly in the Thread Group, e.g. to set the number of threads:
${__groovy(if (org.apache.jmeter.services.FileServer.getFileServer().getBaseDir().contains('TestCase04') || org.apache.jmeter.services.FileServer.getFileServer().getBaseDir().contains('TestCase05')) {return '0'} else {return '100'},)}
More information: Apache Groovy - Why and How You Should Use It
I have this Benchmark function:
func BenchmarkMyTest(b *testing.B) {
}
And I would like to run only this function, not all the other tests, but neither of the following commands has worked for me:
go test -bench='BenchmarkMyTest'
or
go test -run='BenchmarkMyTest'
What's the correct way of running one single benchmark function in Go?
It says to use a regex, but I can't find any documentation on it.
Described at Command Go: Description of testing flags:
-bench regexp
Run benchmarks matching the regular expression.
By default, no benchmarks run. To run all benchmarks,
use '-bench .' or '-bench=.'.
-run regexp
Run only those tests and examples matching the regular
expression.
So the syntax is that the flag and the pattern are separated with a space or with an equals sign (with no apostrophe marks), and what you specify is a regexp:
go test -bench BenchmarkMyTest
go test -run TestMyTest
Or:
go test -bench=BenchmarkMyTest
go test -run=TestMyTest
Specifying exactly 1 function
As the specified expression is a regexp, it will also match functions whose names merely contain the specified name (e.g. another function whose name starts with it, such as "BenchmarkMyTestB"). If you only want to match "BenchmarkMyTest", append the regexp word boundary '\b' (quoting the pattern so the shell doesn't strip the backslash):
go test -bench 'BenchmarkMyTest\b'
go test -run 'TestMyTest\b'
Note that it's enough to append the boundary only at the end: if a function's name doesn't start with "Benchmark", it is not considered a benchmark function, and similarly, if it doesn't start with "Test", it is not considered a test function, so it would not be picked up anyway.
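For instance, if the package contained both BenchmarkMyTest and BenchmarkMyTestB, the word boundary would make the difference:
go test -bench 'BenchmarkMyTest'     # runs BenchmarkMyTest and BenchmarkMyTestB
go test -bench 'BenchmarkMyTest\b'   # runs only BenchmarkMyTest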
I found those answers incomplete, so here is more on the topic...
The command shown below runs all benchmarks whose names start with BenchmarkMyTest (BenchmarkMyTest1, BenchmarkMyTest2, etc.) and also skips all tests via -run=^$ (a regexp that matches no test name).
You can also specify the benchmark duration with -benchtime 5s, or force allocation reporting (the equivalent of b.ReportAllocs()) with -benchmem, in order to get values like:
BenchmarkLogsWithBytesBufferPool-48 46416456 26.91 ns/op 0 B/op 0 allocs/op
The final command would be:
go test -bench '^BenchmarkMyTest' -run '^$' -v -benchtime 5s -benchmem .
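For reference, a minimal self-contained file that the above command would act on could look like this (a sketch; the package name and the benchmark body are illustrative):
package mypkg

import "testing"

// Matched by -bench '^BenchmarkMyTest'; -run '^$' skips all ordinary tests.
func BenchmarkMyTest(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = make([]byte, 64) // stand-in for the code being measured
	}
}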
When I run ANY test I get the same message. Here is an example test:
package require tcltest
namespace import -force ::tcltest::*
test foo-1.1 {save 1 in variable name foo} {} {
    set foo 1
} {1}
I get the following output:
WARNING: unknown option -run: should be one of -asidefromdir, -constraints, -debug, -errfile, -file, -limitconstraints, -load, -loadfile, -match, -notfile, -outfile, -preservecore, -relateddir, -singleproc, -skip, -testdir, -tmpdir, or -verbose
I've tried multiple tests and nothing seems to work. Does anyone know how to get this working?
Update #1:
The above error was my fault; it was due to the way it was being run in my script. However, if I run the following at a command line, I get no output:
[root@server1 ~]$ tcl
tcl>package require tcltest
2.3.3
tcl>namespace import -force ::tcltest::*
tcl>test foo-1.1 {save 1 in variable name foo} {expr 1+1} {2}
tcl>echo [test foo-1.1 {save 1 in variable name foo} {expr 1+1} {2}]
tcl>
How do I get it to output pass or fail?
You don't get any output from the test command itself (as long as the test passes, as in the example: if it fails, the command prints a "contents of test case" / "actual result" / "expected result" summary; see also the remark on configuration below). The test statistics are saved internally: you can use the cleanupTests command to print the Total/Passed/Skipped/Failed numbers (that command also resets the counters and does some cleanup).
(When you run runAllTests, it runs test files in child processes, intercepting the output from each file's cleanupTests and adding them up to a grand total.)
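As a sketch, a stand-alone test file that prints its own summary might look like this (the exact summary format varies between tcltest versions):
package require tcltest
namespace import -force ::tcltest::*

test foo-1.1 {save 1 in variable name foo} {expr 1+1} {2}

# prints something like: myfile.test:  Total  1  Passed  1  Skipped  0  Failed  0
cleanupTests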
The internal statistics collected during testing are available in (AFAICT undocumented) namespace variables like ::tcltest::numTests. If you want to work with the statistics yourself, you can access them before calling cleanupTests, e.g.
parray ::tcltest::numTests
array set myTestData [array get ::tcltest::numTests]
set passed $::tcltest::numTests(Passed)
Look at the source for tcltest in your library to see what variables are available.
The amount of output from the test command is configurable, and you can get output even when the test passes if you add p / pass to the -verbose option. This option can also let you have less output on failure, etc.
You can also create a command called ::tcltest::ReportToMaster which, if it exists, will be called by cleanupTests with the pertinent data as arguments. Doing so seems to suppress both output of statistics and at least most resetting and cleanup. (I didn't go very far in investigating that method.) Be aware that messing about with this is more likely to create trouble than solve problems, but if you are writing your own testing software based on tcltest you might still want to look at it.
Oh, and please use the newer syntax for the test command. It's more verbose, but you'll thank yourself later on if you get started with it.
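Putting the two previous points together, a minimal sketch that enables pass output and uses the newer option-value syntax:
package require tcltest
namespace import -force ::tcltest::*
::tcltest::configure -verbose {body pass skip error}

test foo-1.2 {add 1 and 1} -body {
    expr {1 + 1}
} -result 2

cleanupTests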
Obligatory-but-fairly-useless (in this case) documentation link: tcltest
I am using Gitlab-CI for my build tests. I have a very simple test which compares the output of the test install/build with the known output. I put the test in a makefile.
The Makefile entry looks like this:
test: clean
	make install DESTDIR=$(TEST_DIR)
	$(TEST_DIR)/path/to/executable > $(TEST_DIR)/tmp.out
	diff test/test.result $(TEST_DIR)/tmp.out
When the diff passes, an exit code of 0 is returned; an exit code of 1 is returned if the diff shows a difference between the files.
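You can confirm this behaviour directly in a shell (file names here are illustrative):
diff expected.out actual.out; echo $?   # prints 0 when the files match, 1 when they differ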
What I've tried:
Running make test from any shell runs the tests and exits, regardless of diff result
Running make test from the shell as gitlab_ci_runner runs the tests and exits, regardless of diff result
When run from Gitlab-CI and the diff exit status is 0, the build returns success
The problem:
When run in Gitlab-CI and the diff exit status is non-0, the build hangs.
The output on the build screen is the output of the diff, and the last line is the expected error: make: *** [test] Error 1
After that, the spinner keeps cycling, and the runner never exits with a build failure.
Any ideas? I thought it might be something with Makefiles, but Gitlab-CI does exit with a failed status when make exits with Error 1 for any other test. I only see this happening on the output of the diff.
Also posted this to the GitLab mailing list: https://groups.google.com/d/msgid/gitlabhq/77e82813-b98e-4abe-9755-f39e07043384%40googlegroups.com
Suppose I have 2 test suites in the local directory, foo and bar, and I want to run the test suite in the order of foo then bar.
I tried to run pybot -s foo -s bar ., but then it just runs bar and then foo (i.e. in alphabetical order).
Is there a way to get pybot to execute Robot Framework suites in the order that I define?
Robot Framework supports argument files, which can be used to specify the order of execution (docs):
This is from older docs (not online anymore):
Another important usage for argument files is specifying input files or directories in certain order. This can be very useful if the alphabetical default execution order is not suitable:
Basically, you create something similar to a start-up script.
--name My Example Tests
tests/some_tests.html
tests/second.html
tests/more/tests.html
tests/more/another.html
tests/even_more_tests.html
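You then point pybot at the argument file (the file name args.txt is illustrative):
pybot --argumentfile args.txt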
A neat feature is that an argument file can call another argument file, which can override previously set parameters. Processing is recursive, so you can nest as many argument files as you need.
Another option would be to use a start-up script. Then you have to deal with other aspects, like which operating system you are running the tests on. You could also write the start-up script in Python so it works on multiple platforms. There is more in this section of the docs.
If there are multiple test case files in an RF directory, the execution order can be specified by giving numbers as prefixes to the file names, like this:
01__my_suite.html -> My Suite
02__another_suite.html -> Another Suite
Such prefixes are not included in the generated test suite name if they are separated from the base name of the suite with two underscores:
More details are here.
http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#execution-order
You can use tagging.
Tag the tests as foo and bar so you can run each test separately:
pybot -i foo tests
or
pybot -i bar tests
and decide the order
pybot -i bar tests && pybot -i foo tests
or in a script.
The drawback is that you have to run the setup for each test.
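For reference, a minimal sketch of how the foo and bar tags would be assigned in the test data:
*** Test Cases ***
Foo Case
    [Tags]    foo
    Log    selected with -i foo

Bar Case
    [Tags]    bar
    Log    selected with -i bar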
Would something like this be of any use?
pybot tests/test1.txt tests/test2.txt
So, to reverse:
pybot tests/test2.txt tests/test1.txt
I had success using a listener:
Listener.py:
class Listener(object):
    ROBOT_LISTENER_API_VERSION = 3

    def __init__(self):
        self.priorities = ['foo', 'bar']

    def start_suite(self, data, suite):
        # data.suites is a list of <TestSuite> instances
        data.suites = self.rearrange(data.suites)

    def rearrange(self, suites=[]):
        # Sort suites based on self.priorities, e.g. using bubble sort
        n = len(suites)
        if n > 1:
            for i in range(0, n):
                for j in range(0, n - i - 1):
                    # Initialize the compared suites with the lowest priority
                    priorityA = 0
                    priorityB = 0
                    # If suites[j] is prioritized, get its priority
                    if str(suites[j]) in self.priorities:
                        priorityA = len(self.priorities) - self.priorities.index(str(suites[j]))
                    # If suites[j+1] is prioritized, get its priority
                    if str(suites[j+1]) in self.priorities:
                        priorityB = len(self.priorities) - self.priorities.index(str(suites[j+1]))
                    # Compare and swap if suites[j] has lower priority than suites[j+1]
                    if priorityA < priorityB:
                        suites[j], suites[j+1] = suites[j+1], suites[j]
        return suites
Assuming foo.robot and bar.robot are contained in a toplevel suite called 'tests', you can run it like this:
pybot --listener Listener.py tests/
This will rearrange the child suites on the fly. You may also be able to do the reordering up front using a prerunmodifier instead.
With Test::Unit you can run a single test in a file with the -n option, for example:
require 'test_helper'

class UserTest < ActiveSupport::TestCase
  test "the truth" do
    assert true
  end

  test "the truth 2" do
    assert true
  end
end
You can execute only the test "the truth" with:
ruby -Itest test/unit/user_test.rb -n test_the_truth
The output:
1 tests, 1 assertions, 0 failures, 0 errors, 0 skip
How can I do that with RSpec?
The following command does not seem to work:
rspec spec/models/user_spec.rb -e "User the truth"
You didn't include the source of your spec, so it's hard to say where the problem is, but in general you can use the -e option to run a single example. Given this spec:
# spec/models/user_spec.rb
require 'spec_helper'

describe User do
  it "is true" do
    true.should be_true
  end

  describe "validation" do
    it "is also true" do
      true.should be_true
    end
  end
end
This command line:
rspec spec/models/user_spec.rb -e "User is true"
Will produce this output:
Run filtered including {:full_description=>/(?-mix:User\ is\ true)/}
.
Finished in 0.2088 seconds
1 example, 0 failures
And if you wanted to invoke the other example, the one nested inside the validation group, you'd use this:
rspec spec/models/user_spec.rb -e "User validation is also true"
Or to run all the examples in the validation group:
rspec spec/models/user_spec.rb -e "User validation"
You can also select the line on which the test case you want to execute is defined:
rspec spec/models/user_spec.rb:8
By passing any line number inside the scope of a test case, only that test case will be executed. You can also use this to execute a whole context inside your spec.
At least in RSpec 2.11.1 you can use all of the following options:
** Filtering/tags **
In addition to the following options for selecting specific files, groups,
or examples, you can select a single example by appending the line number to
the filename:
rspec path/to/a_spec.rb:37
-P, --pattern PATTERN Load files matching pattern (default: "spec/**/*_spec.rb").
-e, --example STRING Run examples whose full nested names include STRING (may be
used more than once)
-l, --line_number LINE Specify line number of an example or group (may be
used more than once).
-t, --tag TAG[:VALUE] Run examples with the specified tag, or exclude examples
by adding ~ before the tag.
- e.g. ~slow
- TAG is always converted to a symbol
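For example, tag filtering could look like this (a sketch; the :slow tag name is illustrative):
# in the spec file
it "does something expensive", :slow do
  true.should be_true
end
rspec spec -t slow     # run only the examples tagged :slow
rspec spec -t ~slow    # run everything except the :slow examples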