Can I doc-test ELisp functions?

I like the Python feature of doc-tests for testing functions independently. Does Emacs Lisp have something similar, or could I emulate it in some way?
For example, this function gets timestamps from an Org-mode clock segment:
(defun org-get-timestamps (line)
  "Parse a clock segment LINE and return the first and last timestamps in a list."
  (let* ((org-clock-regexp (concat "CLOCK: " org-ts-regexp3 "--" org-ts-regexp3))
         ;; `match-string' 1 and 9 pick out the bodies of the two
         ;; timestamps matched by the two copies of `org-ts-regexp3'.
         (t1 (if (string-match org-clock-regexp line)
                 (match-string 1 line)
               (user-error "The argument must have a valid CLOCK range")))
         (t2 (match-string 9 line)))
    (list t1 t2)))
I would like a doc-test such as:
(org-get-timestamps "CLOCK: [2019-09-26 Thu 00:29]--[2019-09-26 Thu 01:11] => 0:42")
("2019-09-26 Thu 00:29" "2019-09-26 Thu 01:11")
A test of the user-error would also be nice.
I also would like to ensure that any refactoring passes the doc-test, so it's also a regression test.
Does that exist?

An important feature of Python doctest is how its input looks like a Python interactive REPL session, as described in the doctest documentation:
The doctest module searches for pieces of text that look like
interactive Python sessions, and then executes those sessions to
verify that they work exactly as shown.
I'm not aware of any elisp facilities exactly like this, but I think you can achieve what you want using the Emacs Lisp Regression Testing (ERT) framework, which supports both interactive and batch test execution.
To test the org-get-timestamps function you can define a test like this:
(require 'ert)

(ert-deftest org-timestamp-test ()
  (should (equal
           (org-get-timestamps "CLOCK: [2019-09-26 Thu 00:29]--[2019-09-26 Thu 01:11] => 0:42")
           '("2019-09-26 Thu 00:29" "2019-09-26 Thu 01:11"))))
To run the test interactively, type M-x ert and press enter, then either press enter again to run all tests (the default selector, t) or type the name of the test and press enter. The results are shown in the *ert* buffer:
Selector: org-timestamp-test
Passed: 1
Failed: 0
Skipped: 0
Total: 1/1
Started at: 2019-09-27 08:44:57-0400
Finished.
Finished at: 2019-09-27 08:44:57-0400
.
The dot character at the very end above represents the test that was run. If multiple tests were executed, there would be multiple dots.
To run the test in batch mode, save it to a file org-timestamp-test.el and, assuming the org-get-timestamps function resides in org-timestamps.el, run it from your shell command line like this:
emacs -batch -l ert -l org-timestamps.el -l org-timestamp-test.el -f ert-run-tests-batch-and-exit
The test results are then presented on the shell output:
Running 1 tests (2019-09-27 06:03:09-0700)
   passed  1/1  org-timestamp-test
Ran 1 tests, 1 results as expected (2019-09-27 06:03:09-0700)
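Once you accumulate more tests, you can also run a subset in batch mode by passing a selector to ert-run-tests-batch-and-exit (a string selector is treated as a regexp matched against test names). A sketch, reusing the file names above:
emacs -batch -l ert -l org-timestamps.el -l org-timestamp-test.el --eval '(ert-run-tests-batch-and-exit "^org-")'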

How can I only see the execution process of the instructions of my C code starting at main in gem5 syscall emulation?

I am going crazy with gem5. When I run a program that just outputs "Hello, world", with only debug=Exec set in gem5, I see thousands of lines of assembly instructions go by. How can I see only the execution of my own code?
Is running with ExecAll, which enables ExecSymbol and therefore shows the current function name, good enough? E.g. with this I see the first instruction of main as:
58852000: system.cpu: A0 T0 : 0x3fffd94f70 #__end__+274871107384 : blr x3 : IntAlu : D=0x0000003fffd94f74 flags=(IsInteger|IsControl|IsIndirectControl|IsUncondControl|IsCall)
58852500: system.cpu: A0 T0 : 0x4006f0 #main : stp
If you really don't want to log the instructions before main, you can also do a first run that determines the timestamp of the start of main (58852500 on the above run) and then use:
gem5.opt --debug-start=58852500
I don't know any method that does not require an initial run to determine the timestamp. It would be cool to add something to gem5 that enables logging based on the symbol name, I've wanted that before.
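Until then, the two-run workflow looks roughly like this (the configs/example/se.py config script and the ./hello binary are assumptions; substitute your own setup):
# Run 1: log everything and note the tick at which main first appears.
gem5.opt --debug-flags=ExecAll configs/example/se.py -c ./hello | grep -m1 '#main'
# Run 2: replay, logging only from that tick onward (58852500 in the run above).
gem5.opt --debug-flags=ExecAll --debug-start=58852500 configs/example/se.py -c ./hello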

Groovy assertion script is executed twice within SoapUI

Hello, I'm trying to write a simple Groovy script in SoapUI.
I try to get a test case property, increment it, then save it.
When I run the script, it increments the value twice, and I don't know why. I've tried different syntaxes but nothing has worked so far.
Here is a screenshot that shows my problem.
Here I ran the test twice. The variable started at 3; when I run the test the second time, the before value should be 4 and the after value 5, not 5 and 6.
I believe that you do not want to have the increment logic in a Script Assertion.
Instead, increment the counter in the Setup Script of the test case.
If you need the counter value in the script assertion, just read it there.
Hope this helps.
By the way, I do not see any issue with the script you have shown.
Check whether this variable is being manipulated anywhere else.
def cnt = context.testCase.getPropertyValue('COUNT') as Integer
if (cnt < 10) {
    log.info "before : $cnt"
    cnt += 1
    log.info "after : $cnt"
    context.testCase.setPropertyValue('COUNT', cnt.toString())
}
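A minimal sketch of that split, assuming the same COUNT property as above (the null-check default is mine): the increment goes in the test case's Setup Script, and the assertion only reads the value, so re-running it in the editor changes nothing.
// Test case Setup Script: runs once per test case execution.
def cnt = (testCase.getPropertyValue('COUNT') ?: '0') as Integer
testCase.setPropertyValue('COUNT', (cnt + 1).toString())

// Script Assertion: read-only.
def current = context.testCase.getPropertyValue('COUNT') as Integer
log.info "COUNT is $current"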
Can't comment yet. I'm seeing the same issue in 5.3.0 - here's my script, which grabs a string value from the properties, converts it to an Integer, increments it, and sends it back as a string.
loopsInt = messageExchange.modelItem.testStep.testCase.getPropertyValue("loops").toInteger();
log.info loopsInt;
loopsInt++;
log.info loopsInt;
messageExchange.modelItem.testStep.testCase.setPropertyValue("loops", String.valueOf(loopsInt));
I log the value before I increment, and immediately after, and as you can see, the value is being incremented twice. Here I run the script 3 times:
Thu Mar 16 12:04:54 NZDT 2017:INFO:52
Thu Mar 16 12:04:54 NZDT 2017:INFO:53
Thu Mar 16 12:04:56 NZDT 2017:INFO:54
Thu Mar 16 12:04:56 NZDT 2017:INFO:55
Thu Mar 16 12:04:59 NZDT 2017:INFO:56
Thu Mar 16 12:04:59 NZDT 2017:INFO:57
I get the same result whether I use loopsInt++ or loopsInt = loopsInt + 1. The "loops" property is not being used anywhere else. Weird.
When you execute the assertion script with the green arrow in the Script Assertion window, it is executed twice.
I have used the following script:
def loopsInt = messageExchange.modelItem.testStep.testCase.getPropertyValue("myNum").toInteger();
log.info loopsInt
loopsInt++
messageExchange.modelItem.testStep.testCase.setPropertyValue("myNum", String.valueOf(loopsInt))
See the following picture. One window logs the even numbers and the other the odd numbers.
Please note that execution in the Script Assertion window should be used only for debugging the script. When you execute the test case (test step), the script is executed only once, as expected.
Anyway, I think there are better places to set test case properties (the Setup Script, a Groovy Script test step, and others). I recommend using assertion scripts for checking the message exchange.
Karel
Found a very strange reason for this.
If the test step holding this assertion has run successfully [i.e., for a SOAP test step, it turns green], and you then open the assertion and run it separately, the value is incremented twice: once in your editor, and once within the test step itself.
If the test step had failed [is red in color] and you then run the assertion separately, it works absolutely fine.

Image freeze when a continuation is called

I'm trying to test the continuation facility in Pharo with this code (in the playground):
| cont f |
f := [ | i |
    i := 0.
    Continuation currentDo: [ :cc | cont := cc ].
    i := i + 1 ].
f value. "1"
cont. "a Continuation"
However, as soon as I call the continuation saved in cont (replacing cont. with cont value.), the image freezes immediately, and I have to press Alt+. to regain control.
VM version: VM: NBCoInterpreter NativeBoost-CogPlugin-GuillermoPolito.19 uuid: acc98e51-2fba-4841-a965-2975997bba66 May 15 2014 NBCogit NativeBoost-CogPlugin-GuillermoPolito.19 uuid: acc98e51-2fba-4841-a965-2975997bba66 May 15 2014 https://github.com/pharo-project/pharo-vm.git Commit: ed4a4f59208968a21d82fd2406f75c2c4de558b2 Date: 2014-05-15 18:23:04 +0200 By: Esteban Lorenzano <estebanlm@gmail.com> Jenkins build #14826
Pharo version: [version] 4.0 #40614
Thanks.
Edit: I was stupid, didn't think this through...
You've effectively created an infinite loop by reevaluating the same code again and again. You can see that if you debug the code and step through it. The original context will always be restored and then evaluated starting with the first expression following the #currentDo: send. This is exactly what the continuation is supposed to do: save the current position in the execution and restart there later on.
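Here is the same snippet with the resume point annotated (the comments are mine, just restating the explanation above):
| cont f |
f := [ | i |
    i := 0.
    "The continuation captured here resumes at the next statement."
    Continuation currentDo: [ :cc | cont := cc ].
    i := i + 1 ].
f value.
"Evaluating this jumps back to 'i := i + 1' inside the block and then
re-runs everything after 'f value.', including this very send, so the
playground loops forever."
cont value.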
I do not have a Fedora box to test on; however, I tried your code on Ubuntu, using this version of Pharo:
wget -O- get.pharo.org/40+vm | bash
./pharo-ui Pharo.image
and your code seems to work properly :(
In case this error persists, could you be more specific about the version of the vm you are using?:
./pharo Pharo.image --version
And the version of Pharo you are using?:
./pharo Pharo.image printVersion
Also, sending us the crash.dmp file would help a lot.

tcl tcltest unknown option -run

When I run ANY test I get the same message. Here is an example test:
package require tcltest
namespace import -force ::tcltest::*

test foo-1.1 {save 1 in variable name foo} {} {
    set foo 1
} {1}
I get the following output:
WARNING: unknown option -run: should be one of -asidefromdir, -constraints, -debug, -errfile, -file, -limitconstraints, -load, -loadfile, -match, -notfile, -outfile, -preservecore, -relateddir, -singleproc, -skip, -testdir, -tmpdir, or -verbose
I've tried multiple tests and nothing seems to work. Does anyone know how to get this working?
Update #1:
The above error was my fault; it was due to the way it was being run in my script. However, if I run the following at a command line I get no output:
[root@server1 ~]$ tcl
tcl>package require tcltest
2.3.3
tcl>namespace import -force ::tcltest::*
tcl>test foo-1.1 {save 1 in variable name foo} {expr 1+1} {2}
tcl>echo [test foo-1.1 {save 1 in variable name foo} {expr 1+1} {2}]
tcl>
How do I get it to output pass or fail?
You don't get any output from the test command itself (as long as the test passes, as in the example: if it fails, the command prints a "contents of test case" / "actual result" / "expected result" summary; see also the remark on configuration below). The test statistics are saved internally: you can use the cleanupTests command to print the Total/Passed/Skipped/Failed numbers (that command also resets the counters and does some cleanup).
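For example, appending cleanupTests to the session prints the summary (a sketch; this also uses the newer test syntax recommended below):
package require tcltest
namespace import -force ::tcltest::*

test foo-1.1 {save 1 in variable name foo} -body {
    set foo 1
} -result {1}

cleanupTests    ;# prints something like: Total 1 Passed 1 Skipped 0 Failed 0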
(When you run runAllTests, it runs test files in child processes, intercepting the output from each file's cleanupTests and adding them up to a grand total.)
The internal statistics collected during testing are available in (AFAICT undocumented) namespace variables like ::tcltest::numTests. If you want to work with the statistics yourself, you can access them before calling cleanupTests, e.g.
parray ::tcltest::numTests
array set myTestData [array get ::tcltest::numTests]
set passed $::tcltest::numTests(Passed)
Look at the source for tcltest in your library to see what variables are available.
The amount of output from the test command is configurable, and you can get output even when the test passes if you add p / pass to the -verbose option. This option can also let you have less output on failure, etc.
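For instance, this (a sketch; run it before the tests) reports passing tests too and prints the body of any failing test:
::tcltest::configure -verbose {body pass error}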
You can also create a command called ::tcltest::ReportToMaster which, if it exists, will be called by cleanupTests with the pertinent data as arguments. Doing so seems to suppress both output of statistics and at least most resetting and cleanup. (I didn't go very far in investigating that method.) Be aware that messing about with this is more likely to create trouble than solve problems, but if you are writing your own testing software based on tcltest you might still want to look at it.
Oh, and please use the newer syntax for the test command. It's more verbose, but you'll thank yourself later on if you get started with it.
Obligatory-but-fairly-useless (in this case) documentation link: tcltest

File I/O in gnu parallel

I have a program that takes a single argument. I am using gnu parallel to perform parameter sweeps on this argument. Each run generates a single result, and I want to append all results into a single file, say Results.txt.
What would be a correct way to do this?
I should not have each instance open the file and write to it, as this could create conflicts and also mess up the order of results. The only way I can think of doing this is having each run generate its output in a file with a unique name and then, when gnu parallel finishes running, merging the results into a single file using a script.
Is there a simpler way of achieving this?
What happens when multiple instances write to/read from the same file? Does gnu parallel create multiple copies, one for each instance, as it does for stdout and stderr?
Thanks.
If your command sends the result to stdout (standard output) the solution is trivial:
seq 1000 | parallel echo > Results.txt
GNU Parallel guarantees the output will not be mixed.
Normally GNU Parallel prints the output of a job as soon as it completes. When jobs run for different amounts of time, their output can therefore arrive in a different order than the input arguments.
To keep the output in the order of the input arguments, simply add the -k / --keep-order option.
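Applied to the parameter sweep in the question (./myprog and the parameter values are placeholders for your program and sweep), that would look like:
parallel -k ./myprog {} ::: 0.1 0.2 0.5 1.0 > Results.txt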
Try for example:
parallel -j4 sleep {}\; echo {} ::: 2 1 4 3
parallel -j4 -k sleep {}\; echo {} ::: 2 1 4 3
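With four free job slots, the first command prints the numbers in completion order (each job sleeps for its argument), while the second prints them in the order the arguments were given; roughly:
$ parallel -j4 sleep {}\; echo {} ::: 2 1 4 3
1
2
3
4
$ parallel -j4 -k sleep {}\; echo {} ::: 2 1 4 3
2
1
4
3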