Test only one it or describe block with RSpec - ruby-on-rails-3

With Test::Unit you can run a single test in a file with the -n option, for example:
require 'test_helper'
class UserTest < ActiveSupport::TestCase
  test "the truth" do
    assert true
  end
  test "the truth 2" do
    assert true
  end
end
You can execute only the "the truth" test:
ruby -Itest test/unit/user_test.rb -n test_the_truth
The output:
1 tests, 1 assertions, 0 failures, 0 errors, 0 skip
How can I do that with RSpec? This command does not seem to work:
rspec spec/models/user_spec.rb -e "User the truth"

You didn't include the source of your spec, so it's hard to say where the problem is, but in general you can use the -e option to run a single example. Given this spec:
# spec/models/user_spec.rb
require 'spec_helper'
describe User do
  it "is true" do
    true.should be_true
  end
  describe "validation" do
    it "is also true" do
      true.should be_true
    end
  end
end
This command line:
rspec spec/models/user_spec.rb -e "User is true"
Will produce this output:
Run filtered including {:full_description=>/(?-mix:User\ is\ true)/}
.
Finished in 0.2088 seconds
1 example, 0 failures
And if you wanted to invoke the other example, the one nested inside the validation group, you'd use this:
rspec spec/models/user_spec.rb -e "User validation is also true"
Or to run all the examples in the validation group:
rspec spec/models/user_spec.rb -e "User validation"

You can also select the test case you want to execute by its line number:
rspec spec/models/user_spec.rb:8
Passing any line number inside the scope of a test case executes only that test case. You can also use this to execute a whole context inside your spec.

At least in RSpec 2.11.1 you can use all of the following options:
** Filtering/tags **
In addition to the following options for selecting specific files, groups, or examples, you can select a single example by appending the line number to the filename:
rspec path/to/a_spec.rb:37
-P, --pattern PATTERN    Load files matching pattern (default: "spec/**/*_spec.rb").
-e, --example STRING     Run examples whose full nested names include STRING (may be used more than once).
-l, --line_number LINE   Specify line number of an example or group (may be used more than once).
-t, --tag TAG[:VALUE]    Run examples with the specified tag, or exclude examples by adding ~ before the tag (e.g. ~slow). TAG is always converted to a symbol.
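To use the -t option, tag an example (or a whole group) with metadata in the spec; a minimal sketch, where the :slow tag is just an illustration:
# spec/models/user_spec.rb
describe User do
  it "does something expensive", :slow => true do
    true.should be_true
  end
end
Then rspec spec -t slow runs only the tagged examples, and rspec spec -t ~slow runs everything except them.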

Gitlab CI only variables that come from extends

I have the following setup (simplified version), which doesn't run the expected merge::my job when I use a tag that includes the string "TEST". I can't figure out why this is happening - I know that only doesn't support variable expansion, but here the variable is just a string that is set up in a different extend - is that the problem? Would using YAML anchors be better? Are there other suggestions?
The reason I check for only:variables in .merge_builds is that I have many languages; in this case I used en, but there are many others, and I don't want to repeat the only:variables block for each (the real matching is more complex - I stripped it down to the bare minimum for this example).
.merge_builds:
  script:
    - echo 'test'
  only:
    variables:
      - $CI_COMMIT_TAG =~ $VARIABLEMATCH

.en_variables:
  variables:
    VARIABLEMATCH: /^$|(?i)EN/

merge::en:
  extends:
    - .en_variables
    - .merge_builds
Based on GitLab issue 35438, I'd say that it is not currently possible to use a variable (as opposed to a literal) as the regular expression pattern.
Within issue 35438, furkanayhan explains in a comment titled "Introduction" from 2021-09-06 (sorry, I wasn't able to get a permalink to it) that GitLab makes a simple string comparison between a value and a pattern given as a variable:
variables:
  teststring: 'abcde'
  pattern: '/^ab.*/'

test1:
  script: exit 0
  rules:
    - if: '$teststring =~ $pattern'

test2:
  script: exit 0
  rules:
    - if: '$teststring =~ /^ab.*/'
The test1 job is not created because the backend makes a string comparison between "abcde" and "/^ab.*/".
The test2 job is created because the backend makes a regexp comparison between "abcde" and /^ab.*/.
I believe that you are encountering the same behavior that caused "test1 job" not to be created.
However, issue 35438 shows that GitLab is planning on offering a fix in version 15.0, scheduled for 2022-05-22.
One other thing you might want to check on is the regular expression itself. GitLab's regexp doc (here) states that GitLab uses the re2 regular expression syntax for these kinds of comparison. To achieve case insensitivity, I believe one appends the "i" flag as in:
/pattern/i
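Until then, a possible workaround is to inline a literal pattern in each language job instead of taking it from a variable - a sketch based on the jobs above, assuming your GitLab version accepts the i flag on a literal pattern:
merge::en:
  extends: .merge_builds
  only:
    variables:
      - $CI_COMMIT_TAG =~ /^$|EN/i
Since the child job's only:variables overrides the one inherited from .merge_builds, the literal regexp is what gets evaluated.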

How to run test cases for a binary in Makefile

There is a small project which produces a binary application. The source code is C, and I'm using the Autotools to create the Makefile and build the binary - that part works fine.
I would like to run test cases with that binary. Here is what I did:
SUBDIRS = src
dist_doc_DATA = README
TESTS=
TESTS+=tests/config1.conf
TESTS+=tests/config2.conf
TESTS+=tests/config3.conf
TESTS+=tests/config4.conf
TESTS+=tests/config5.conf
TESTS+=tests/config6.conf
TESTS+=tests/config7.conf
TESTS+=tests/config8.conf
TESTS+=tests/config9.conf
TESTS+=tests/config10.conf
TESTS+=tests/config11.conf
I would like to run the tool with each of these files as its argument. When I run make check, I get:
make[3]: Entering directory '/home/airween/src/mytool'
FAIL: tests/config1.conf
FAIL: tests/config2.conf
FAIL: tests/config3.conf
which is correct, because those files are plain configuration files.
How can I make it so that make check runs my tool with the scripts above, and finally gives me a summary of the number of passed, failed, ... tests, like this:
============================================================================
Testsuite summary for mytool 0.1
============================================================================
# TOTAL: 11
# PASS: 0
# SKIP: 0
# XFAIL: 0
# FAIL: 11
# XPASS: 0
# ERROR: 0
Edit: so I would like to emulate these runs:
for f in tests/*.conf; do src/mytool ${f}; done
but - of course - I want to see the summary at the end.
Thanks.
The Autotools' built-in test runner expects you to specify the names of executable tests via the make variable TESTS. You cannot just put random filenames in there and expect make or Automake to know what to do with them.
The tests can be built programs, generated scripts, static scripts distributed with the project, or any combination of the above.
How can I make it so that make check runs my tool with the scripts above, and finally [get a test summary report]?
You have acknowledged that your configuration files are not scripts, so stop calling them that! This is in fact the crux of the problem. The easiest solution is probably to create actual executable scripts, one for each case, and name those in your TESTS variable. Each one would run the binary under test with the appropriate configuration file (that is, you're responsible for making them do that if those are the tests you want to perform).
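For example, a per-case wrapper might look like this (a sketch - the file name is made up, each wrapper must be executable, and the relative paths depend on which Makefile.am lists it in TESTS):
#!/bin/sh
# tests/config1.test - run the binary under test with one configuration file
exec src/mytool tests/config1.conf
with the wrappers then named in Makefile.am:
TESTS = tests/config1.test tests/config2.test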
See also the Automake Manual's chapter on tests.
Okay, here is the solution:
tests/Makefile.am:
==================
TEST_EXTENSIONS = .conf
CONF_LOG_COMPILER = ./test-suit.sh
TESTS=
TESTS+=config1.conf
TESTS+=config2.conf
TESTS+=config3.conf
TESTS+=config4.conf
TESTS+=config5.conf
TESTS+=config6.conf
TESTS+=config7.conf
TESTS+=config8.conf
TESTS+=config9.conf
TESTS+=config10.conf
TESTS+=config11.conf
tests/test-suit.sh:
==================
#!/bin/sh
CONF=$1
exec ../src/mytool $CONF
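Note that the wrapper script has to be executable (chmod +x tests/test-suit.sh), since Automake invokes it directly as the log compiler.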
And the result:
make check
...
PASS: config1.conf
PASS: config2.conf
PASS: config3.conf
PASS: config4.conf
PASS: config5.conf
PASS: config6.conf
PASS: config7.conf
PASS: config8.conf
PASS: config9.conf
PASS: config10.conf
PASS: config11.conf
============================================================================
Testsuite summary for mytool 0.1
============================================================================
# TOTAL: 11
# PASS: 11
# SKIP: 0
# XFAIL: 0
# FAIL: 0
# XPASS: 0
# ERROR: 0
============================================================================
This is what I expected.

How do I write a test for a Homebrew formula?

I made a Homebrew formula which is currently accessible only through my local tap. I want to send a pull request to homebrew-core, and for that I am required to write a test for my formula. How can I write one, based on the example below?
test do
  output = shell_output("#{bin}/balance 2>&1", 64)
  assert_match "this is balance #{version}", output
end
My match script:
#!/usr/bin/env ruby
def match
  files = Dir.glob("*")
  if ARGV.length == 0
    puts "usage: match <keyword>"
    return
  end
  files.each { |x|
    if File.directory?(x)
      puts "#{x}_ found directory"
      puts "***"
      next
    end
    found = false
    File.open(x).each_line.with_index do |line, index|
      if line.include? ARGV[0]
        puts "#{x}_ #{index+1} #{line}"
        found = true
      end
    end
    puts "***" if found
  }
end
match
Brew formula
class Match < Formula
  desc "Browse all files inside any directory for a keyword"
  homepage "https://github.com/aatalyk/homebrew-match"
  url ""
  sha256 ""

  def install
    bin.install "match"
  end
end
Tests for shell commands in Homebrew formulae usually follow this scenario:
create a context usable by the command: a Git repository, a directory hierarchy, a sample file, etc.
run the command
assert the result is correct
In your case, since match is a grep -R-like tool, you could create a bunch of files with some content, then run match <something> and ensure it finds the correct files.
You can use any Ruby code in your tests as well as Homebrew utilities such as shell_output("...command...") to get the output of a command.
Here is an example of test you could write:
class Match < Formula
  # ...
  test do
    # Create two dummy files
    (testpath/"file1").write "foo\nbar\nqux"
    (testpath/"file2").write "bar\nabc"
    # Ensure `match bar` finds both files
    assert_match "file1_ 2 bar\n***\nfile2_ 1 bar",
                 shell_output("#{bin}/match bar")
    # Ensure `match abc` finds the second file
    assert_match "file2_ 2 abc", shell_output("#{bin}/match abc")
    # Ensure `match idontmatchanything` doesn’t match any of the files
    assert_not_match(/file[12]/,
                     shell_output("#{bin}/match idontmatchanything"))
  end
end
assert_match "something", shell_output("command") ensures that (1) command runs successfully and (2) its output contains "something".
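To try the test block locally before opening the pull request, something like this should work (assuming the formula in your tap is named match):
brew install --build-from-source match
brew test --verbose match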

tcl tcltest unknown option -run

When I run ANY test I get the same message. Here is an example test:
package require tcltest
namespace import -force ::tcltest::*
test foo-1.1 {save 1 in variable name foo} {} {
    set foo 1
} {1}
I get the following output:
WARNING: unknown option -run: should be one of -asidefromdir, -constraints, -debug, -errfile, -file, -limitconstraints, -load, -loadfile, -match, -notfile, -outfile, -preservecore, -relateddir, -singleproc, -skip, -testdir, -tmpdir, or -verbose
I've tried multiple tests and nothing seems to work. Does anyone know how to get this working?
Update #1:
The above error was my fault - it was due to the way it was being run in my script. However, if I run the following at a command line, I get no output:
[root@server1 ~]$ tcl
tcl>package require tcltest
2.3.3
tcl>namespace import -force ::tcltest::*
tcl>test foo-1.1 {save 1 in variable name foo} {expr 1+1} {2}
tcl>echo [test foo-1.1 {save 1 in variable name foo} {expr 1+1} {2}]
tcl>
How do I get it to output pass or fail?
You don't get any output from the test command itself (as long as the test passes, as in the example: if it fails, the command prints a "contents of test case" / "actual result" / "expected result" summary; see also the remark on configuration below). The test statistics are saved internally: you can use the cleanupTests command to print the Total/Passed/Skipped/Failed numbers (that command also resets the counters and does some cleanup).
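So a minimal way to get a pass/fail summary at the end of a run is (a sketch; the exact layout of the summary line can vary between tcltest versions):
package require tcltest
namespace import -force ::tcltest::*
test foo-1.1 {save 1 in variable name foo} {expr 1+1} {2}
cleanupTests
# prints a line like: Total 1 Passed 1 Skipped 0 Failed 0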
(When you run runAllTests, it runs test files in child processes, intercepting the output from each file's cleanupTests and adding them up to a grand total.)
The internal statistics collected during testing are available in (AFAICT undocumented) namespace variables like ::tcltest::numTests. If you want to work with the statistics yourself, you can access them before calling cleanupTests, e.g.
parray ::tcltest::numTests
array set myTestData [array get ::tcltest::numTests]
set passed $::tcltest::numTests(Passed)
Look at the source for tcltest in your library to see what variables are available.
The amount of output from the test command is configurable, and you can get output even when the test passes if you add p / pass to the -verbose option. This option can also let you have less output on failure, etc.
You can also create a command called ::tcltest::ReportToMaster which, if it exists, will be called by cleanupTests with the pertinent data as arguments. Doing so seems to suppress both output of statistics and at least most resetting and cleanup. (I didn't go very far in investigating that method.) Be aware that messing about with this is more likely to create trouble than solve problems, but if you are writing your own testing software based on tcltest you might still want to look at it.
Oh, and please use the newer syntax for the test command. It's more verbose, but you'll thank yourself later on if you get started with it.
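For reference, the example above written in the newer option-based syntax would look like this (a sketch):
test foo-1.1 {save 1 in variable name foo} -body {
    set foo 1
} -result {1}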
Obligatory-but-fairly-useless (in this case) documentation link: tcltest

Is there a way to get Robot Framework to run test suites in a certain order?

Suppose I have 2 test suites in the local directory, foo and bar, and I want to run them in the order foo then bar.
I tried to run pybot -s foo -s bar ., but it just runs bar and then foo (i.e. in alphabetical order).
Is there a way to get pybot to execute Robot Framework suites in the order that I define?
Robot Framework supports argument files, which can be used to specify the order of execution (docs).
This is from older docs (not online anymore):
Another important usage for argument files is specifying input files or directories in certain order. This can be very useful if the alphabetical default execution order is not suitable:
Basically, you create something similar to a start-up script:
--name My Example Tests
tests/some_tests.html
tests/second.html
tests/more/tests.html
tests/more/another.html
tests/even_more_tests.html
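You then pass the argument file to pybot (the file name here is just an illustration):
pybot --argumentfile ordered_tests.args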
A neat feature is that from an argument file you can call another argument file, which can override previously set parameters. Inclusion works recursively, so you can nest as many argument files as you need.
Another option would be to use a start-up script. Then you have to deal with other aspects, such as which operating system you are running the tests on. You could also write the start-up script in Python to cover multiple platforms. There is more in this section of the docs.
If there are multiple test case files in an RF directory, the execution order can be specified by giving numbers as prefixes to the file names, like this:
01__my_suite.html -> My Suite
02__another_suite.html -> Another Suite
Such prefixes are not included in the generated test suite name if they are separated from the base name of the suite with two underscores. More details are here:
http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#execution-order
You can use tagging.
Tag the tests as foo and bar so you can run each test separately:
pybot -i foo tests
or
pybot -i bar tests
and decide the order:
pybot -i bar tests && pybot -i foo tests
or do the same in a script.
The drawback is that you have to run the setup for each test.
Would something like this be of any use?
pybot tests/test1.txt tests/test2.txt
So, to reverse:
pybot tests/test2.txt tests/test1.txt
I had success using a listener:
Listener.py:
class Listener(object):
    ROBOT_LISTENER_API_VERSION = 3

    def __init__(self):
        self.priorities = ['foo', 'bar']

    def start_suite(self, data, suite):
        # data.suites is a list of <TestSuite> instances
        data.suites = self.rearrange(data.suites)

    def rearrange(self, suites=[]):
        # Sort the suites based on self.priorities, e.g. using bubble sort
        n = len(suites)
        if n > 1:
            for i in range(0, n):
                for j in range(0, n-i-1):
                    # Initialize the compared suites with the lowest priority
                    priorityA = 0
                    priorityB = 0
                    # If suite[j] is prioritized, get its priority
                    if str(suites[j]) in self.priorities:
                        priorityA = len(self.priorities) - self.priorities.index(str(suites[j]))
                    # If suite[j+1] is prioritized, get its priority
                    if str(suites[j+1]) in self.priorities:
                        priorityB = len(self.priorities) - self.priorities.index(str(suites[j+1]))
                    # Compare and swap if suite[j] has lower priority than suite[j+1]
                    if priorityA < priorityB:
                        suites[j], suites[j+1] = suites[j+1], suites[j]
        return suites
Assuming foo.robot and bar.robot are contained in a toplevel suite called 'tests', you can run it like this:
pybot --listener Listener.py tests/
This will rearrange the child suites on the fly. You could probably also do the reordering up front using a prerunmodifier instead, as sketched below.
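A sketch of the prerunmodifier variant (the class name, file name, and the simplified sorting are all illustrative):
# OrderModifier.py
from robot.api import SuiteVisitor

class OrderModifier(SuiteVisitor):
    def __init__(self):
        # Suites named here run first, in this order
        self.priorities = ['foo', 'bar']

    def start_suite(self, suite):
        # sorted() is stable, so unprioritized suites keep their original order
        suite.suites = sorted(
            suite.suites,
            key=lambda s: self.priorities.index(s.name.lower())
                          if s.name.lower() in self.priorities
                          else len(self.priorities))

Run it with:
pybot --prerunmodifier OrderModifier.py tests/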