Hello, I'm trying to write a simple Groovy script in SoapUI.
I'm trying to get a test case property, increment it, and then save it back.
When I run the script, it increments the value twice, and I don't know why. I've tried different syntaxes, but nothing has worked so far.
Here is a screenshot that shows my problem.
Here I ran the test twice. The variable started at 3, so on the second run the "before" value should be 4 and the "after" value 5, not 5 and 6.
I believe you do not want the increment logic in a Script Assertion.
Instead, increment the counter in the Setup Script of the test case.
If you need the counter value in the script assertion, just read it there.
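For instance, the increment could live in the test case's Setup Script along these lines (a sketch; it assumes the same 'COUNT' property your script uses):
// Setup Script (sketch): increment COUNT exactly once per run
def cnt = testCase.getPropertyValue('COUNT') as Integer
testCase.setPropertyValue('COUNT', (cnt + 1).toString())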
Hope this helps.
By the way, I do not see any issue with the script you have shown.
Check whether this variable is being manipulated anywhere else.
// Read COUNT, increment it, and write it back as a string
def cnt = context.testCase.getPropertyValue('COUNT') as Integer
if (cnt < 10) {
    log.info "before : $cnt"
    cnt += 1
    log.info "after : $cnt"
    context.testCase.setPropertyValue('COUNT', cnt.toString())
}
Can't comment yet. I'm seeing the same issue in 5.3.0. Here's my script: it grabs a string value from the properties, converts it to an Integer, increments it, and saves it back as a string.
loopsInt = messageExchange.modelItem.testStep.testCase.getPropertyValue("loops").toInteger();
log.info loopsInt;
loopsInt++;
log.info loopsInt;
messageExchange.modelItem.testStep.testCase.setPropertyValue("loops", String.valueOf(loopsInt));
I log the value right before I increment and again immediately after, and as you can see, the value is being incremented twice. Here I ran the script three times:
Thu Mar 16 12:04:54 NZDT 2017:INFO:52
Thu Mar 16 12:04:54 NZDT 2017:INFO:53
Thu Mar 16 12:04:56 NZDT 2017:INFO:54
Thu Mar 16 12:04:56 NZDT 2017:INFO:55
Thu Mar 16 12:04:59 NZDT 2017:INFO:56
Thu Mar 16 12:04:59 NZDT 2017:INFO:57
I get the same result whether I use loopsInt++ or loopsInt = loopsInt + 1. The "loops" property is not being used anywhere else. Weird.
When you execute the assertion script with the green arrow in the Script Assertion window, it is executed twice.
I have used the following script:
def loopsInt = messageExchange.modelItem.testStep.testCase.getPropertyValue("myNum").toInteger();
log.info loopsInt
loopsInt++
messageExchange.modelItem.testStep.testCase.setPropertyValue("myNum", String.valueOf(loopsInt))
See the following picture. One window logs the even numbers and the other the odd numbers.
Please note that execution in the Script Assertion window should be used only for debugging the script. When you execute the test case (or test step), the script is executed only once, as expected.
Anyway, I think there are better places to set test case properties (the setUp script, a Groovy Script test step, and others). I recommend using assertion scripts for checking the message exchange.
Karel
Found a very strange reason for this.
If the test step holding this assertion has run successfully (i.e., for a SOAP test step, it turns green), and you then open the assertion and run it separately, the counter is incremented twice: once in your editor, and once within the test step itself.
If the test step had failed (it is red), running the assertion separately works absolutely fine.
I added a simple test to ctest with the following lines in a .cmake file:
add_test( NAME ktxsc-test-many-in-one-out
    COMMAND ktxsc -o foo a.ktx2 b.ktx2 c.ktx2
)
set_tests_properties( ktxsc-test-many-in-one-out
    PROPERTIES
        WILL_FAIL TRUE
        FAIL_REGULAR_EXPRESSION "^Can't use -o when there are multiple infiles."
)
The test passes and the TestLog shows
----------------------------------------------------------
Test Pass Reason:
Error regular expression found in output. Regex=[^Can't use -o when there are multiple infiles.]
"ktxsc-test-many-in-one-out" end time: Jun 30 16:34 JST
"ktxsc-test-many-in-one-out" time elapsed: 00:00:00
----------------------------------------------------------
If I change FAIL_REGULAR_EXPRESSION to
FAIL_REGULAR_EXPRESSION "some rubbish"
the test still passes even though the app is printing the same message as before. This time the test log shows
----------------------------------------------------------
Test Passed.
"ktxsc-test-many-in-one-out" end time: Jun 30 16:53 JST
"ktxsc-test-many-in-one-out" time elapsed: 00:00:00
----------------------------------------------------------
which is what I normally see when no *_REGULAR_EXPRESSION is set.
Why is this happening? How can I get ctest to fail the test when the FAIL_REGULAR_EXPRESSION doesn't match?
Thanks to @Tsyvarev for this answer.
ctest is doing
match FAIL_REGULAR_EXPRESSION || exit code != 0
to determine whether a test has failed. If the match succeeds, the exit code is not checked. If the match fails, the exit code is checked and the failed match is ignored.
The documentation is unclear. I've read it many times and still didn't figure out this behavior until steered in the right direction by @Tsyvarev. I also find this a poor behavioral choice. All my tools emit both a non-zero exit code and a message when there is an error condition, and I need to test both. I previously asked this question about that. It requires duplicating tests.
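For reference, the duplicated-test workaround looks roughly like this (a sketch; the -exitcode/-message test names are mine, and note that PASS_REGULAR_EXPRESSION makes ctest ignore the exit code, which is exactly why two tests are needed):
# One test checks the non-zero exit code...
add_test( NAME ktxsc-test-many-in-one-out-exitcode
    COMMAND ktxsc -o foo a.ktx2 b.ktx2 c.ktx2
)
set_tests_properties( ktxsc-test-many-in-one-out-exitcode
    PROPERTIES WILL_FAIL TRUE
)
# ...and a duplicate checks the error message.
add_test( NAME ktxsc-test-many-in-one-out-message
    COMMAND ktxsc -o foo a.ktx2 b.ktx2 c.ktx2
)
set_tests_properties( ktxsc-test-many-in-one-out-message
    PROPERTIES PASS_REGULAR_EXPRESSION "^Can't use -o when there are multiple infiles."
)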
I'm using Selenium IDE in Chrome to test a web site. When the test runs successfully, the site produces the text " Success Saving Scenario!" Selenium IDE finds this text, but I can't find the right value to match that text.
Here's my setting:
Command: Assert Text
Target: css=li > span
Value: *Success Saving Scenario*
Each time I run this test, the IDE records a failure with the message:
assertText on css=li > span with value *Success Saving Scenario* Failed:
12:23:02
Actual value "Thu, 03 Feb 2022 17:23:02 GMT - Success Saving Scenario!" did not match "*Success Saving Scenario*"
I checked the page, and sure enough the text displays Thu, 03 Feb 2022 17:23:02 GMT - Success Saving Scenario!
Why does that not match *Success Saving Scenario*? I thought the asterisks would be a wildcard that would match any characters.
I've tried these values as well with no success:
glob:*Success Saving Scenario*
regexp:*Success Saving Scenario*
* (just an asterisk by itself)
Any ideas?
I would use 'Assert Element Present' for this case. Pick another locator from the Target dropdown that uses a 'contains' expression, trimming the timestamp out of the 'contains' text as needed. Leave the value field empty.
Sample
Command: assert element present | Target: xpath=//span[contains(.,'Success Saving Scenario')] | Value: empty
My understanding is that the after command can be used to delay the execution of a script or command by a given number of milliseconds, but when I execute the command below, the output is printed immediately.
Command:
after 4000 [list puts "hello"]
Output:
hello
after#0
Question: Why was the output not delayed by 4 seconds here?
What you wrote works; I just tried it at a tclsh prompt like this:
% after 4000 [list puts "hello"]; after 5000 set x 1; vwait x
Did you write something else instead, such as this:
after 4000 [puts "hello"]
In this case, instead of delaying the execution of puts, you'd be calling it immediately and using its result (the empty string) as the argument to after (which is valid, but useless).
Think of [list …] as a special kind of quoting.
The other possibility when running interactively is that you took several seconds between running the after command and starting the event loop (the callbacks are only ever run by the event loop). You won't see that in wish, of course, since it is always running an event loop, but it's possible in tclsh, which doesn't start an event loop by default. Still, I'd put that as a far distant second in probability to omitting the list word.
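In a non-interactive tclsh script, the usual pattern is to arm the timer and then explicitly enter the event loop, for example (a minimal sketch):
# Sketch: schedule the callback, then enter the event loop so it can fire
after 4000 [list puts "hello"]  ;# arm a 4-second timer
after 5000 [list set done 1]    ;# a second timer just to end the wait
vwait done                      ;# run the event loop until done is set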
I like the Python feature of doc-tests for testing functions independently. Does Emacs Lisp have something similar, or could I emulate it in some way?
For example, this function gets timestamps from an Org-mode clock segment:
(defun org-get-timestamps (line)
  "Parses a clock segment line and returns the first and last timestamps in a list."
  (let* ((org-clock-regexp (concat "CLOCK: " org-ts-regexp3 "--" org-ts-regexp3))
         (t1 (if (string-match org-clock-regexp line)
                 (match-string 1 line)
               (user-error "The argument must have a valid CLOCK range")))
         (t2 (match-string 9 line)))
    (cons t1 (cons t2 '()))))
I would like a doc-test such as:
(org-get-timestamps "CLOCK: [2019-09-26 Thu 00:29]--[2019-09-26 Thu 01:11] => 0:42")
("2019-09-26 Thu 00:29" "2019-09-26 Thu 01:11")
A test of the user-error would also be nice.
I also would like to ensure that any refactoring passes the doc-test, so it's also a regression test.
Does that exist?
An important feature of Python doctest is how its input looks like a Python interactive REPL session, as described in the doctest documentation:
The doctest module searches for pieces of text that look like
interactive Python sessions, and then executes those sessions to
verify that they work exactly as shown.
I'm not aware of any elisp facilities exactly like this, but I think you can achieve what you want using the Emacs Lisp Regression Testing (ERT) framework, which supports both interactive and batch test execution.
To test the org-get-timestamps function you can define a test like this:
(require 'ert)

(ert-deftest org-timestamp-test ()
  (should (equal
           (org-get-timestamps "CLOCK: [2019-09-26 Thu 00:29]--[2019-09-26 Thu 01:11] => 0:42")
           '("2019-09-26 Thu 00:29" "2019-09-26 Thu 01:11"))))
To run a test interactively, type M-x ert and press enter; then either press enter again to select all tests with the default t selector, or type the name of the test to run and press enter. The test results are shown in the *ert* buffer:
Selector: org-timestamp-test
Passed: 1
Failed: 0
Skipped: 0
Total: 1/1
Started at: 2019-09-27 08:44:57-0400
Finished.
Finished at: 2019-09-27 08:44:57-0400
.
The dot character at the very end above represents the test that was run. If multiple tests were executed, there would be multiple dots.
To run the test in batch mode, save it to a file org-timestamp-test.el and, assuming the org-get-timestamps function resides in the file org-timestamps.el, run it like this from your shell command line:
emacs -batch -l ert -l org-timestamps.el -l org-timestamp-test.el -f ert-run-tests-batch-and-exit
The test results are then presented on the shell output:
Running 1 tests (2019-09-27 06:03:09-0700)
   passed  1/1  org-timestamp-test
Ran 1 tests, 1 results as expected (2019-09-27 06:03:09-0700)
When I run ANY test I get the same message. Here is an example test:
package require tcltest
namespace import -force ::tcltest::*
test foo-1.1 {save 1 in variable name foo} {} {
    set foo 1
} {1}
I get the following output:
WARNING: unknown option -run: should be one of -asidefromdir, -constraints, -debug, -errfile, -file, -limitconstraints, -load, -loadfile, -match, -notfile, -outfile, -preservecore, -relateddir, -singleproc, -skip, -testdir, -tmpdir, or -verbose
I've tried multiple tests and nothing seems to work. Does anyone know how to get this working?
Update #1:
The above error was my fault; it was caused by the way it was being run from my script. However, if I run the following at a command line, I get no output:
[root@server1 ~]$ tcl
tcl>package require tcltest
2.3.3
tcl>namespace import -force ::tcltest::*
tcl>test foo-1.1 {save 1 in variable name foo} {expr 1+1} {2}
tcl>echo [test foo-1.1 {save 1 in variable name foo} {expr 1+1} {2}]
tcl>
How do I get it to output pass or fail?
You don't get any output from the test command itself (as long as the test passes, as in the example: if it fails, the command prints a "contents of test case" / "actual result" / "expected result" summary; see also the remark on configuration below). The test statistics are saved internally: you can use the cleanupTests command to print the Total/Passed/Skipped/Failed numbers (that command also resets the counters and does some cleanup).
(When you run runAllTests, it runs test files in child processes, intercepting the output from each file's cleanupTests and adding them up to a grand total.)
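For example, appending cleanupTests at the end of a test file prints the summary (a sketch based on the test from the question):
package require tcltest
namespace import -force ::tcltest::*

test foo-1.1 {save 1 in variable name foo} {} {
    set foo 1
} {1}

cleanupTests  ;# prints the Total/Passed/Skipped/Failed numbers and resets them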
The internal statistics collected during testing are available in, as far as I can tell, undocumented namespace variables like ::tcltest::numTests. If you want to work with the statistics yourself, you can access them before calling cleanupTests, e.g.:
parray ::tcltest::numTests
array set myTestData [array get ::tcltest::numTests]
set passed $::tcltest::numTests(Passed)
Look at the source for tcltest in your library to see what variables are available.
The amount of output from the test command is configurable, and you can get output even when the test passes if you add p / pass to the -verbose option. This option can also let you have less output on failure, etc.
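For example (a sketch; see the tcltest documentation for the full list of verbosity levels):
::tcltest::configure -verbose {start pass error}  ;# also report test starts and passes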
You can also create a command called ::tcltest::ReportToMaster which, if it exists, will be called by cleanupTests with the pertinent data as arguments. Doing so seems to suppress both output of statistics and at least most resetting and cleanup. (I didn't go very far in investigating that method.) Be aware that messing about with this is more likely to create trouble than solve problems, but if you are writing your own testing software based on tcltest you might still want to look at it.
Oh, and please use the newer syntax for the test command. It's more verbose, but you'll thank yourself later on if you get started with it.
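For the example above, the newer attribute-style syntax would look like this:
test foo-1.1 {save 1 in variable name foo} -body {
    set foo 1
} -result {1}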
Obligatory-but-fairly-useless (in this case) documentation link: tcltest