Faster way of testing your Prolog program

I am new to Prolog, and launching the Prolog interpreter from the terminal, typing consult('some_prolog_program.pl'), and then testing the predicate I just wrote is very time consuming. Is there a way to run a scripted test to speed up development?
For example, in C I can write a main that exercises the functions I defined and then run:
make && ./a.out
to test the code. Can I do something similar with Prolog?

You can keep the interpreter open all the time and recompile the file from within it.
You can auto-run a predicate after compiling the file:
:- foo(4,2).
This will run foo(4,2) when the line is encountered in the file.
Most Prolog interpreters can be launched with flags that let you load a file and run predicates (check the man page), so you can wrap a test run in a shell script. The following consults file.pl and runs foo/0 using SWI-Prolog:
#!/bin/sh
exec swipl -q -f none -g "load_files([file],[silent(true)])" \
     -t foo -- "$@"
This goal will unify Arguments with a list of the arguments you passed on the command line:
current_prolog_flag(argv, Arguments)
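For example, a foo/0 that just echoes the arguments might look like this (a sketch; in recent SWI-Prolog versions, argv contains only the arguments after --):
foo :-
    current_prolog_flag(argv, Arguments),
    format("Arguments: ~q~n", [Arguments]).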
But unless you are going to run a lot of tests, I don't think that writing all this extra code will be faster.
Personally I really like the flexibility of testing any predicate at any time with or without tracing (see trace/0) without having to write extra code to call them (unlike in C).
P.S. About reloading the file without leaving the interpreter: you might have some problems if you use dynamic predicates or global variables; you will have to do some cleaning up.

You can invoke a test file from the command line with prolog +l <file>
Also, you can build a single run_tests predicate that exercises a series of calls and validates the actual results against expected results. Here's an article with a good worked-out example: http://kenegozi.com/blog/2008/07/24/unit-testing-in-prolog

In SWI, you can load things as usual. Then, when you edit your files, you simply type make. at the toplevel; it checks all dependencies automatically and reloads only the modified files.
For bigger projects it makes a lot of sense to use makefiles, in particular for unit testing. See SWI's plunit package.
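A minimal plunit test file might look like this (the suite name and tested goals are illustrative):
:- use_module(library(plunit)).

:- begin_tests(lists).

test(member) :-
    member(b, [a, b, c]).

test(reverse) :-
    reverse([a, b, c], [c, b, a]).

:- end_tests(lists).
After loading the file, run the suite with ?- run_tests. at the toplevel, or non-interactively with swipl -g run_tests -t halt tests.pl.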

For simple scripts in SWI-Prolog, using the REPL to test the code manually is usually good enough. Changed files can be reloaded via make/0 (?- make. at the toplevel). Just keep the Prolog REPL running while editing; then save the edits, run make. in the REPL, and hit ↑, ↑, Enter to re-run, from history, the last query before the make.
The main benefit of REPL is its interactivity:
You may fiddle with the arguments.
Transition to debugging or tracing (both command line and graphical) is easy.
You don't need to perform I/O to print the result: output is handled by the toplevel, which prints the substitution. You see the whole substitution, not only the part of it you happen to print (possibly overlooking other parts by accident).
You may interactively choose how many substitutions you want to see for a goal that succeeds multiple times.
It is obvious if there is a choice point left after the last result returned by a non-deterministic predicate, which is hard to observe otherwise. In that case, false. is printed when backtracking beyond the last result.
If you need to preserve the test calls to repeat them later, create a protocol (a transcript or "log" of the interactive session) and edit it into a script, or even a test suite (see below). The protocol is a plain-text file containing a verbatim copy of what you see during the interactive session, including the terminal escape sequences. View the protocol using cat protocol.txt on Linux (and other *NIXes) or type protocol.txt on Windows.
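In SWI-Prolog, for example, a protocol is started with protocol/1 and stopped with noprotocol/0:
?- protocol('protocol.txt').
true.
?- append([a], [b], Xs).
Xs = [a, b].
?- noprotocol.
true.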
If interactivity is not needed, perform the test calls from the command line, non-interactively. Let's test the CLP(FD) factorial example n_factorial/2, saved in factorial.pl (its contents are shown after the output below; don't forget to add :- use_module(library(clpfd)). when copying the code):
$ swipl -q -t "between(0, 9, N), n_factorial(N, F), format('~D ', F), fail." factorial.pl
1 1 2 6 24 120 720 5,040 40,320 362,880
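For reference, factorial.pl contains the n_factorial/2 definition from the SWI-Prolog CLP(FD) documentation, plus the directive mentioned above:
:- use_module(library(clpfd)).

n_factorial(0, 1).
n_factorial(N, F) :-
    N #> 0,
    N1 #= N - 1,
    n_factorial(N1, F1),
    F #= N * F1.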
On Windows, you may need to specify the full path to swipl.exe, as it is probably not on the PATH.
If the call is always the same, you may save it to a shell script or Makefile (run would be a good name for the target).
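A minimal Makefile sketch, reusing the swipl call from above (remember that the recipe line must be indented with a tab):
run:
	swipl -q -t "between(0, 9, N), n_factorial(N, F), format('~D ', F), fail." factorial.pl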
In your current workflow for testing functions in C, you create a new program and call the function under test from its entry point (the main function). Prolog scripts can have an entry point, too; see library(main). Since Prolog does not require a separate compilation step, you can call the script directly (./test.pl) without running make first.
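A minimal executable script following the library(main) convention might look like this (a sketch; make it executable with chmod +x test.pl):
#!/usr/bin/env swipl

:- use_module(library(main)).
:- initialization(main, main).

% main/1 receives the command-line arguments as a list
main(Argv) :-
    format("Called with ~q~n", [Argv]).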
For larger projects, you may want to create a less ad hoc test suite; use a unit-testing framework such as PlUnit. Its use is beyond the scope of this answer; see the documentation.

Related

How to execute raku script from interpreter?

I open raku/rakudo/perl6 thus:
con#V:~/Scripts/perl6$ perl6
To exit type 'exit' or '^D'
>
Is the above environment called the "interpreter"? I have been searching forever, and I cannot find what it's called.
How can I execute a rakudo script like I would do
source('script.R') in R, or exec(open('script.py').read()) in python3?
To clarify, I would like the functions and libraries in the script to be available in REPL, which run doesn't seem to do.
I'm sure this exists in documentation somewhere but I can't find it :(
As Valle Lukas has said, there's no exact replacement. However, all the usual functions for running external programs are there:
shell("raku multi-dim-hash.raku") will run that as an external program.
IIRC, R's source also evaluates the code, so you might want to use require instead, although symbols will not be imported directly and you'll have to use indirect lookup to reach them.
You can also use EVAL on the loaded module, but again, variables and symbols will not be imported.
It's called a Read-Eval-Print Loop (REPL). You can execute Raku scripts directly from the shell, without the REPL: raku filename.raku. To run code from within the REPL, have a look at run (run <raku test.raku>) or EVALFILE.
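For example, with EVALFILE (a sketch; script.raku is a placeholder file name, and depending on your Rakudo version the MONKEY-SEE-NO-EVAL pragma may be required):
use MONKEY-SEE-NO-EVAL;   # allow EVAL/EVALFILE to run code from strings/files
EVALFILE 'script.raku';   # compiles and runs the file; its symbols are not imported into the REPL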
The rosettacode page "Include a file" has some information, but it looks like there is no exact replacement for your R source('script.R') example at the moment.

LLVM lit testing: is it possible to configure number of threads via `lit.cfg`?

I wonder if it's possible to configure the number of threads for testing in a lit.cfg file.
lit offers a command-line flag for specifying the number of threads:
llvm/utils/lit/lit.py -j1 <test directory>
However, I'm not sure how to do this in a lit.cfg file. I want to force all tests in one subdirectory to be run with -j1; I'm not sure whether this is possible.
Edit: for reference, I'm working on the Swift codebase which has a large test suite (4000+ tests) with multiple test subdirectories.
I want to run just one subdirectory with -j1 and the rest with the default number of threads (-j12 for my machine).
I was wondering about that too a while back, but I don't think there is such an option, because of this line here. Usually, the main project's compilation time dwarfs the lit tests' execution time.
It is easy to change, but I'd suggest using your build configuration for this instead (e.g. make or cmake), so that make test executes something like lit -j $(nproc) underneath.
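For example, as a GNU Make sketch (the test directory path is illustrative):
# run the suite with one lit worker per CPU
test:
	lit -j $(shell nproc) test/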
Edit (after OP update):
I haven't worked with the Swift repo, but maybe you can hack your way around it. One thing I can see is that you could influence the LIT_ARGS cmake variable by appending the options you want to it.
Now, to force single-process execution for a specific directory, you may add a lit.local.cfg that sets the singleProcess flag. This seems to override multi-threaded execution:
config.singleProcess = True

Giving unique names to GDB logging files in a script

I would like to log GDB command output to a log file.
This is done using the following command:
set logging file outputfile.txt
But instead of a simple outputfile.txt, I would like to give the file a unique name, for example outputfile-PID.txt, because several processes will be producing output simultaneously and I want each one to log to its own file.
How can I programmatically derive such a file name in a GDB script?
There are a few ways.
One relatively simple way is to use gdb's eval command, which substitutes arguments like printf and then executes the result as a gdb command. It is a relatively new command.
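For example, building the file name from the inferior's PID (a sketch; it assumes the process is already running so that getpid() can be evaluated in it):
(gdb) eval "set logging file outputfile-%d.txt", getpid()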
If you don't have "eval" you might still have Python scripting. In this case you can write a short (one line) Python script like:
(gdb) python gdb.execute('set logging file ' + .. your logic here ..)
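For example, filling in the logic with the inferior's PID via gdb.selected_inferior() (a sketch; it assumes a process is attached):
(gdb) python gdb.execute('set logging file outputfile-%d.txt' % gdb.selected_inferior().pid)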
If you don't have Python scripting either, then your gdb is really old and you ought to upgrade. You can still maybe accomplish what you want, just with some awful gyrations: use "shell" to write out a script that you then "source" back into gdb. That technique seems somewhat hard in this case, though.

The 'right' way to run unit tests in Clojure

Currently, I define the following function in the REPL at the start of a coding session:
(defn rt []
  (let [tns 'my.namespace-test]
    (use tns :reload-all)
    (clojure.test/test-ns tns)))
And every time I make a change, I rerun the tests:
user=>(rt)
That has been working moderately well for me. When I remove a test, I have to restart the REPL and redefine the function, which is a little annoying. Also, I've heard bad rumblings about using the use function like this. So my questions are:
Is using use this way going to cause me a problem down the line?
Is there a more idiomatic workflow than what I'm currently doing?
Most people run
lein test
from a different terminal, which guarantees that what is in the files is what is tested, not what is in your memory. Using :reload-all can lead to false passes if you have changed a function's name and are still calling the old name somewhere.
Calling use like that is not a problem in itself; it just constrains you to not having name conflicts if you use more namespaces in your tests. So long as you have only one, it's OK.
Using lein also lets you distinguish unit and integration tests and easily run them in groups using the test-selectors feature.
I also run tests in my REPL. I like doing this because I have more control over the tests and it's faster due to the JVM already running. However, like you said, it's easy to get in trouble. In order to clean things up, I suggest taking a look at tools.namespace.
In particular, you can use clojure.tools.namespace.repl/refresh to reload files that have changed in your live REPL. There's also refresh-all to reload all the files on the classpath.
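For example (a sketch; the reported namespaces are illustrative):
user=> (require '[clojure.tools.namespace.repl :refer [refresh]])
nil
user=> (refresh)
:reloading (my.namespace my.namespace-test)
:ok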
I add tools.namespace to the :dev profile in my ~/.lein/profiles.clj so that I have it there for every project. Then when you run lein repl, it will be included on the classpath, but it won't leak into your project's proper dependencies.
Another thing I'll do when I'm working on a test is to require it into my REPL and run it manually. A test is just a no-argument function, so you can invoke it as such.
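For example (a sketch; the deftest name is illustrative):
user=> (require 'my.namespace-test)
nil
user=> (my.namespace-test/subtraction-test)  ; a deftest is callable like any fn
nil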
I am so far impressed with lein-midje
$ lein midje :autotest
This starts a Clojure process that watches the src and test files, reloads the associated namespaces, and runs the tests relevant to the changed file (tracking dependencies). I use it with VimShell to open a split buffer in vim and have both the source and the test file open as well. I write a change to either one and the (relevant) tests are executed in the split pane.

How do you use CTEST_CUSTOM_PRE_TEST?

I've searched all the docs but can't seem to find a single example of using CTEST_CUSTOM_PRE_TEST.
Basically I need to start and run some commands on the server before the test runs. So I need to add a few pre-test steps. What's the syntax of CTEST_CUSTOM_PRE_TEST?
CTEST_CUSTOM_PRE_TEST( ??? what to put here ??? )
ADD_TEST(MyTest MyTestCommand)
CTEST_CUSTOM_PRE_TEST is a variable used in the context of running a ctest dashboard. It should either be set directly in the ctest -S script itself, or in a CTestCustom.cmake file at the top of your build tree.
In either file, an example value might be:
set(CTEST_CUSTOM_PRE_TEST "perl prepareForTesting.pl -with-this -and-that")
It should be a single command line, properly formatted for running on the system you're on. It runs once during a ctest_test call, before all the tests run. Similarly, there is also a CTEST_CUSTOM_POST_TEST variable, which should also be a single command line, but runs after all the tests are done.
Quoting and escaping args with spaces, quotes and backslashes may be challenging ... but maybe you won't need that, either.
I do not know of a real world example of this that I can point you to, but I can read the ctest source code... ;-)
Place the set(CTEST_CUSTOM_PRE_TEST ...) call in a file that is copied to ${CMAKE_BINARY_DIR}/CTestCustom.cmake during the cmake run. For details, see https://stackoverflow.com/a/37748933/1017348.
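For example, in CMakeLists.txt (a sketch; the source-tree file name is illustrative):
# copy the custom CTest settings into the build tree at configure time
configure_file(
  ${CMAKE_CURRENT_SOURCE_DIR}/CTestCustom.cmake
  ${CMAKE_BINARY_DIR}/CTestCustom.cmake
  COPYONLY)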
In OpenSCAD on headless Linux we attempt to start up a virtual framebuffer before ctest runs. We don't use PRE_TEST for this, though; we build our own CTestCustom.cmake in the build directory during the cmake run. (We do use POST_TEST, but there were a few recent versions of cmake where POST_TEST was broken.)
You can find the code here https://github.com/openscad/openscad/blob/master/tests