How do you use CTEST_CUSTOM_PRE_TEST? - cmake

I've searched all the docs but can't seem to find a single example of using CTEST_CUSTOM_PRE_TEST.
Basically I need to start and run some commands on the server before the test runs. So I need to add a few pre-test steps. What's the syntax of CTEST_CUSTOM_PRE_TEST?
CTEST_CUSTOM_PRE_TEST( ??? what to put here ??? )
ADD_TEST(MyTest MyTestCommand)

CTEST_CUSTOM_PRE_TEST is a variable used in the context of running a ctest dashboard. It should either be set directly in the ctest -S script itself, or in a CTestCustom.cmake file at the top of your build tree.
In either file, an example value might be:
set(CTEST_CUSTOM_PRE_TEST "perl prepareForTesting.pl -with-this -and-that")
It should be a single command line, properly formatted for running on the system you're on. It runs once during a ctest_test call, before all the tests run. Similarly, there is also a CTEST_CUSTOM_POST_TEST variable, which should also be a single command line, but runs after all the tests are done.
Quoting and escaping arguments with spaces, quotes and backslashes may be challenging ... but maybe you won't need that anyway.
I do not know of a real world example of this that I can point you to, but I can read the ctest source code... ;-)
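Following the above, a minimal ctest -S driver might look like this (a sketch; the paths and the perl command line are placeholders):
# drive.cmake -- run with:  ctest -S drive.cmake
set(CTEST_SOURCE_DIRECTORY "/path/to/source")
set(CTEST_BINARY_DIRECTORY "/path/to/build")   # an already-built tree

# Option 1: set the variable directly in this script ...
set(CTEST_CUSTOM_PRE_TEST "perl prepareForTesting.pl -with-this -and-that")

ctest_start("Experimental")
# ... or, option 2: pick it up from CTestCustom.cmake in the build tree:
# ctest_read_custom_files("${CTEST_BINARY_DIRECTORY}")
ctest_test()   # the pre-test command runs once, before all the tests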

Place set(CTEST_CUSTOM_PRE_TEST ...) in a file that is copied to ${CMAKE_BINARY_DIR}/CTestCustom.cmake during the CMake run. For details, see https://stackoverflow.com/a/37748933/1017348.
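A minimal sketch of that copy step, assuming the .in file lives next to the CMakeLists.txt:
# in CMakeLists.txt: copy the custom settings into the build tree
configure_file(
  "${CMAKE_CURRENT_SOURCE_DIR}/CTestCustom.cmake.in"
  "${CMAKE_BINARY_DIR}/CTestCustom.cmake"
  COPYONLY)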

In OpenSCAD on headless Linux we attempt to start up a virtual framebuffer before ctest runs. We don't use PRE_TEST, though; we build our own CTestCustom.cmake in the build directory during the cmake run. (We do use POST_TEST, but there were a few recent versions of cmake where POST_TEST was broken.)
You can find the code here https://github.com/openscad/openscad/blob/master/tests

Why can't Comma IDE find `raku` binary after a reboot?

I have a test that I'm running in Comma IDE from a Raku distro downloaded from github.
The tests passed last night. But after rebooting this morning, the test no longer passes. The test runs the raku binary on the machine. After some investigation, I discovered that the binary was not being found in the test:
say (run 'which', 'raku', :out).out.slurp; # outputs nothing
But if I run the test directly with prove6 from the command line, I get the full path to raku.
I'm using rakubrew.
I can easily fix this by adding the full path in the test, but I'm curious to know why Comma IDE suddenly can't find the path to the raku binary.
UPDATE: I should also mention that I reimported the project this morning, which caused some problems, so I invalidated caches. It may have been this, and not the reboot, that caused the problem. I'm unsure.
UPDATE 2: No surprise but
my $raku-path = (shell 'echo $PATH', :out).out.slurp;
yields only /usr/bin:/bin:/usr/sbin:/sbin
My best guess: in the situation where it worked, Comma was started from a shell where rakubrew had set up the environment. Then, after the reboot, Comma was started again, but from a shell where that was not the case.
Unless you choose to do otherwise, environment variables are passed on from parent process to child process. Comma inherits those from the process that starts it, and those are passed on to any Raku process that is spawned from Comma. Your options:
Make your Raku program more robust by using $*EXECUTABLE instead of which raku; this variable holds the path to the currently executing Raku implementation (see the sketch after this list).
Make sure to start Comma from a shell where rakubrew has tweaked the path.
Tweak the environment variables in the Run Configuration in Comma.
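A minimal sketch of the first option (the child command line is just an illustration):
# $*EXECUTABLE is an IO::Path pointing at the running Raku binary,
# so no PATH lookup is involved:
say $*EXECUTABLE;

# Reuse it to spawn another Raku process:
my $proc = run $*EXECUTABLE, '-e', 'say "hello from a child raku"', :out;
say $proc.out.slurp(:close);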

How to execute raku script from interpreter?

I open raku/rakudo/perl6 thus:
con#V:~/Scripts/perl6$ perl6
To exit type 'exit' or '^D'
>
Is the above environment called the "interpreter"? I have been searching forever, and I cannot find what it's called.
How can I execute a rakudo script like I would do
source('script.R') in R, or exec(open('script.py').read()) in python3?
To clarify, I would like the functions and libraries in the script to be available in REPL, which run doesn't seem to do.
I'm sure this exists in documentation somewhere but I can't find it :(
As Valle Lukas has said, there's no exact replacement. However, the usual functions for running external programs are all there:
shell("raku multi-dim-hash.raku") will run that as an external program.
IIRC, R's source also evaluates the code, so you might want to use require, although symbols will not be imported directly and you'll have to use indirect lookup for them.
You can also use EVAL on the loaded module, but again, variables and symbols will not be imported.
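For instance, a sketch of the require route, assuming a hypothetical module lib/MyHelpers.rakumod that defines our sub greet, with the REPL started as raku -I lib:
> require ::('MyHelpers');    # loaded at runtime, nothing imported
> ::('MyHelpers::&greet')();  # indirect lookup of the our-scoped sub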
It's called a Read-Eval-Print Loop (REPL). You can execute raku scripts directly in the shell, raku filename.raku, without the REPL. To run code from the REPL you can have a look at run (run <raku test.raku>) or EVALFILE.
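A quick sketch of both from the REPL (the file name is just an example; EVALFILE, like EVAL, needs the MONKEY-SEE-NO-EVAL pragma):
> run <raku test.raku>;   # child process; nothing enters the session
> use MONKEY-SEE-NO-EVAL; EVALFILE 'test.raku';   # evaluated in-process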
The rosettacode page Include a file has some information. But it looks like there is no exact replacement for your R source('script.R') example at the moment.

Let CMake set up CTest to print header and footer around the output from single tests

Is there a way, ideally from CMakeLists.txt, to set up ctest so that it will
print a header before running the individual tests,
print a footer after running the individual tests,
make the footer dependent on whether the tests were all successful or not?
The footer should appear below the default output
The following tests FAILED:
76 - MyHardTest
Errors while running CTest
This concretizes and generalizes a somewhat unclear question that has been open for more than two years (CMakeLists.txt: How to print a message if ctest fails?). Therefore I fear there is no easy solution.
Hence an alternative question: could the desired behavior be achieved with CDash?
YES, CTest does have variables to achieve exactly this [1]:
CTEST_CUSTOM_PRE_TEST Command to execute before any tests are run during Test stage
CTEST_CUSTOM_POST_TEST Command to execute after any tests are run during Test stage
To activate these variables from cmake for use by ctest, they must somehow be placed into the build directory. So it seems two steps are necessary:
(1) Have a script scriptdir/CTestCustom.cmake.in somewhere in the source tree, which contains
set(CTEST_CUSTOM_POST_TEST "echo AFTER_THE_TEST")
or whatever command you prefer instead of echo
(2) Let CMakeLists.txt call
configure_file("scriptdir/CTestCustom.cmake.in" ${CMAKE_BINARY_DIR}/CTestCustom.cmake)
so that during configuration stage a CTest configuration file is placed under the preferred name [2] CTestCustom.cmake in the build directory.
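Putting both pieces together, the .in file could contain something like this (the echo commands are placeholders; the footer cannot be made conditional on the test results this way):
# scriptdir/CTestCustom.cmake.in
set(CTEST_CUSTOM_PRE_TEST  "echo ======== HEADER ========")
set(CTEST_CUSTOM_POST_TEST "echo ======== FOOTER ========")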
[1] https://cmake.org/Wiki/CMake/Testing_With_CTest
[2] https://blog.kitware.com/ctest-performance-tip-use-ctestcustom-cmake-not-ctest/
During my research, I found it was extremely difficult to integrate something like this. I am not entirely sure, but I believe you can do this in a CTest script, then create an add_custom_target that always runs ctest with that script. For example, the command make check would then run ctest with the CTest script that you made... too much work?
Easiest way I can think of for your application is to just add two placeholder tests at top and bottom for the header and footer (add_test needs a COMMAND, so an echo will do). Ctest already has a "The following tests FAILED:" kind of output at the very end, so you might not have to worry about that. Any sort of conditional logic (IF TEST FAILED DO THIS) you currently cannot do in ctest.
add_test(NAME HEADER_GOES_HERE COMMAND ${CMAKE_COMMAND} -E echo "==== header ====")
add_test(NAME ACTUAL_TEST COMMAND test)
add_test(NAME FOOTER_GOES_HERE COMMAND ${CMAKE_COMMAND} -E echo "==== footer ====")
Maybe someone can give you a better answer, but this is the easiest (not at all good) implementation I can think of.

The 'right' way to run unit tests in Clojure

Currently, I define the following function in the REPL at the start of a coding session:
(defn rt []
  (let [tns 'my.namespace-test]
    (use tns :reload-all)
    (clojure.test/test-ns tns)))
And everytime I make a change I rerun the tests:
user=>(rt)
That's been working moderately well for me. When I remove a test, I have to restart the REPL and redefine the function, which is a little annoying. Also, I've heard bad rumblings about using the use function like this. So my questions are:
Is using use this way going to cause me a problem down the line?
Is there a more idiomatic workflow than what I'm currently doing?
most people run
lein test
from a different terminal, which guarantees that what is in the files is what is tested, not what is in your memory. Using reload-all can lead to false passes if you have changed a function name and are still calling the old name somewhere.
Calling use like that is not a problem in itself; it just constrains you to avoid name conflicts if you use more namespaces in your tests. So long as you have only one, it's OK.
Using lein lets you specify unit and integration tests and easily run them in groups using the test-selectors feature (see the sketch below).
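A sketch of test selectors in project.clj (the selector names are examples; tests are tagged with metadata such as (deftest ^:integration my-test ...)):
(defproject my-project "0.1.0"
  ;; run with:  lein test                (unit tests only)
  ;;            lein test :integration   (integration tests)
  :test-selectors {:default     (complement :integration)
                   :integration :integration
                   :all         (constantly true)})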
I also run tests in my REPL. I like doing this because I have more control over the tests and it's faster due to the JVM already running. However, like you said, it's easy to get in trouble. In order to clean things up, I suggest taking a look at tools.namespace.
In particular, you can use clojure.tools.namespace.repl/refresh to reload files that have changed in your live REPL. There's also refresh-all to reload all the files on the classpath.
I add tools.namespace to my :dev profile in my ~/.lein/profiles.clj so that I have it there for every project. Then when you run lein repl, it will be included on the classpath, but it won't leak into your project's proper dependencies.
Another thing I'll do when I'm working on a test is to require it into my REPL and run it manually. A test is just a no-argument function, so you can invoke it as such.
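For instance (the test namespace and test name are placeholders):
;; assumes org.clojure/tools.namespace is on the classpath,
;; e.g. via the :dev profile mentioned above
(require '[clojure.tools.namespace.repl :refer [refresh]])
(refresh)   ;; reload every namespace whose file has changed

;; a deftest defines a no-argument function, so call it directly:
(require 'my.namespace-test)
(my.namespace-test/my-test)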
I am so far impressed with lein-midje
$ lein midje :autotest
Starts a clojure process watching src and test files, reloads the associated namespaces and runs the tests relevant to the changed file (tracking dependencies). I use it with VimShell to open a split buffer in vim and have both the source and the test file open as well. I write a change to either one and the (relevant) tests are executed in the split pane.

Faster way of testing your prolog program

I am new to Prolog, and the task of launching the Prolog interpreter from the terminal, typing consult('some_prolog_program.pl'), and then testing the predicate I just wrote is very time consuming. Is there a way to run a scripted test to speed up development?
For example in C I can write a main where I would use the functions I defined, I can then execute:
make && ./a.out
to test the code, can I do something similar with Prolog?
You can have the interpreter always open and then recompile the file.
You can auto-run a predicate after compiling the file:
:- foo(4,2).
This will run foo(4,2) when the line is encountered in the file.
There are flags that can be used while launching (most) Prolog interpreters that allow you to compile a file and run predicates (check the man page). This way you could make a Bash script. The following will consult file.pl and run foo/0 using SWI-Prolog:
#!/bin/sh
exec swipl -q -f none -g "load_files([file],[silent(true)])" \
     -t foo -- "$@"
This predicate will unify Arguments with a list of the flags you gave at the command line:
current_prolog_flag(argv, Arguments)
But unless you are going to run a lot of tests, I don't think that writing all this extra code will be faster.
Personally I really like the flexibility of testing any predicate at any time with or without tracing (see trace/0) without having to write extra code to call them (unlike in C).
P.S. about reloading the file without leaving the interpreter: You might have some problem if you have used dynamic predicates or global variables; you will have to do some cleaning.
You can invoke a test file from the command-line with prolog +l <file>
Also, you can build a single run_tests predicate that exercises a series of calls and validates the actual results against expected results. Here's an article with a good worked-out example: http://kenegozi.com/blog/2008/07/24/unit-testing-in-prolog
In SWI, you can load things as usual. Then, when you edit your files you simply say make. on the toplevel and it checks all dependencies automatically and only reloads the modified files.
For bigger projects it does make a lot of sense to use makefiles, in particular for unit testing. See SWI's package plunit.
For simple scripts in SWI-Prolog, using REPL to test the code manually is usually good enough. Changed files can be reloaded via make/0 (?- make. on toplevel). Just keep the Prolog REPL running while editing, then save the edits, run make. in the REPL and hit ↑, ↑, Enter to execute the last query before the make. from history.
The main benefit of REPL is its interactivity:
You may fiddle with the arguments.
Transition to debugging or tracing (both command line and graphical) is easy.
You don't need to perform I/O to print the result. Output is handled by the toplevel, which prints the substitution. You see the whole substitution, not only the part you happen to print (and so you won't accidentally overlook other parts).
You may interactively choose how many substitutions you want to see for a goal that succeeds multiple times.
It is obvious if there is a choice point left after the last result returned by a non-deterministic predicate, which is hard to observe otherwise. In that case, false. is printed when backtracking beyond the last result.
If you need to preserve the test calls to repeat them later, create a protocol (transcript or "log" of the interactive session) and edit it to become a script, or even a test suite (see below). The protocol is a plain text file with escape sequences for the terminal, containing a verbatim copy of what you see during the interactive session. View the protocol using cat protocol.txt on Linux (and other *NIXes) or type protocol.txt on Windows.
If interactivity is not needed, perform the test calls from the command line non-interactively. Let's test the CLP(FD) factorial example n_factorial/2, saved in factorial.pl (don't forget to add :- use_module(library(clpfd)). when copying the code):
$ swipl -q -t "between(0, 9, N), n_factorial(N, F), format('~D ', F), fail." factorial.pl
1 1 2 6 24 120 720 5,040 40,320 362,880
On Windows, you may need to specify the full path to swipl.exe, as it's probably not on the PATH.
If the call is always the same, you may save it to a shell script or Makefile (run would be a good name for the target).
In your current workflow for testing functions in C, you create a new program and call the function under test from its entry point (main function). Prolog scripts can have an entry point, too. See library(main). Prolog does not require compilation, so you can just directly call the script (./test.pl) without calling Make first.
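A sketch of such a script, assuming n_factorial/2 from factorial.pl (file names are examples; make it executable with chmod +x test.pl):
#!/usr/bin/env swipl
%% test.pl -- a directly executable test script for SWI-Prolog
:- use_module(library(main)).
:- [factorial].                 % loads n_factorial/2

:- initialization(main, main).  % call main(Argv) as the entry point

main(_Argv) :-
    forall(between(0, 9, N),
           ( n_factorial(N, F),
             format('~D ', [F]) )),
    nl.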
For larger projects, you may want to create a less ad-hoc test suite. A unit testing framework like PlUnit is needed. Its use is beyond the scope of this answer; see the documentation.
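For a first impression, a minimal PlUnit suite might look like this (again assuming n_factorial/2 is loaded):
:- use_module(library(plunit)).

:- begin_tests(factorial).

test(zero) :- n_factorial(0, F), F == 1.
test(five) :- n_factorial(5, F), F == 120.

:- end_tests(factorial).

% run all suites from the toplevel with:  ?- run_tests.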