Why can't Comma IDE find the `raku` binary after a reboot?

I have a test that I'm running in Comma IDE, from a Raku distro downloaded from GitHub.
The tests passed last night, but after rebooting this morning, the test no longer passes. The test runs the raku binary on the machine. After some investigation, I discovered that the binary was not getting found in the test:
say (run 'which', 'raku', :out).out.slurp; # outputs nothing
But if I run the test directly with prove6 from the command line, I get the full path to raku.
I'm using rakubrew.
I can easily fix this by adding the full path in the test, but I'm curious to know why Comma IDE suddenly can't find the path to the raku binary.
UPDATE: I should also mention that I reimported the project this morning, and that caused some problems, so I invalidated caches. So it may have been this, and not the reboot, that caused the problem. I'm unsure.
UPDATE 2: No surprise, but
my $raku-path = (shell 'echo $PATH', :out).out.slurp;
yields only /usr/bin:/bin:/usr/sbin:/sbin

My best guess: in the situation where it worked, Comma was started from a shell where rakubrew had set up the environment. Then, after the reboot, Comma was started again, but from a shell where that was not the case.
Unless you choose to do otherwise, environment variables are passed on from parent process to child process. Comma inherits those from the process that starts it, and those are passed on to any Raku process that is spawned from Comma. Your options:
Make your Raku program more robust by using $*EXECUTABLE instead of which raku; this variable holds the path to the currently executing Raku implementation (see the sketch after this list).
Make sure to start Comma from a shell where rakubrew has tweaked the path.
Tweak the environment variables in the Run Configuration in Comma.
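For example, a minimal sketch of the first option (the path in the comment is purely illustrative):
say $*EXECUTABLE; # e.g. /home/user/.rakubrew/versions/moar-2024.01/bin/raku
# Spawn a child process using that same binary; no PATH lookup involved:
say (run $*EXECUTABLE.absolute, '-e', 'say $*RAKU', :out).out.slurp;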

Related

How to disable Perl 6 REPL creating .precomp

Every time I run perl6 to enter the REPL mode, it creates a .precomp directory, which also slows down the appearance of the prompt. If the .precomp directory already exists, the prompt appears almost immediately, otherwise perl6 takes several seconds to create it.
Is there a way to disable this feature?
Check if you have a PERL6LIB environment variable set, and if it contains "." (the current directory). I can produce exactly the behavior you're encountering if I set that. The solution is to remove "." from your PERL6LIB.
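For instance, from a POSIX shell (the library path is made up for illustration):
$ echo $PERL6LIB
/home/user/lib:.
$ export PERL6LIB=/home/user/lib # the same value, with "." removed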

The 'right' way to run unit tests in Clojure

Currently, I define the following function in the REPL at the start of a coding session:
(defn rt []
  (let [tns 'my.namespace-test]
    (use tns :reload-all)
    (clojure.test/test-ns tns)))
And every time I make a change I rerun the tests:
user=> (rt)
That's been working moderately well for me. When I remove a test, I have to restart the REPL and redefine the function, which is a little annoying. Also, I've heard bad rumblings about using the use function like this. So my questions are:
Is using use this way going to cause me a problem down the line?
Is there a more idiomatic workflow than what I'm currently doing?
Most people run
lein test
from a different terminal, which guarantees that what is in the files is what is tested, not what is in your memory. Using reload-all can lead to false passes if you have changed a function name and are still calling the old name somewhere.
Calling use like that is not a problem in itself; it just constrains you to not have any name conflicts if you use more namespaces in your tests. As long as you have only one, it's OK.
Using lein lets you specify unit and integration tests and easily run them in groups using the test-selectors feature.
I also run tests in my REPL. I like doing this because I have more control over the tests and it's faster due to the JVM already running. However, like you said, it's easy to get in trouble. In order to clean things up, I suggest taking a look at tools.namespace.
In particular, you can use clojure.tools.namespace.repl/refresh to reload files that have changed in your live REPL. There's also refresh-all to reload all the files on the classpath.
I add tools.namespace to my :dev profile in my ~/.lein/profiles.clj so that I have it there for every project. Then when you run lein repl, it will be included on the classpath, but it won't leak into your project's proper dependencies.
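A minimal profiles.clj sketch (the tools.namespace version is illustrative; pin whatever release is current):
;; ~/.lein/profiles.clj
{:dev {:dependencies [[org.clojure/tools.namespace "1.4.4"]]}}
Then, inside lein repl:
user=> (require '[clojure.tools.namespace.repl :refer [refresh]])
user=> (refresh)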
Another thing I'll do when I'm working on a test is to require it into my REPL and run it manually. A test is just a no-argument function, so you can invoke it as such.
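For example (my.namespace-test is the namespace from the question; my-test stands in for one of its deftest names):
user=> (require 'my.namespace-test)
user=> (my.namespace-test/my-test)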
I am so far impressed with lein-midje
$ lein midje :autotest
Starts a Clojure process watching src and test files, reloads the associated namespaces, and runs the tests relevant to the changed file (tracking dependencies). I use it with VimShell to open a split buffer in vim and have both the source and the test file open as well. I write a change to either one and the (relevant) tests are executed in the split pane.

Workaround for enoent error from Erlang's Common Test on Windows?

When using Common Test with Erlang on Windows, I run into a lot of bugs. For one, if there are any spaces in the project's path, Common Test often fails outright. To work around this, I moved the project to a path with no spaces (but I really wish the devs would fix the libraries so they work better on Windows). Now I've got Common Test to mostly run, except it won't print out the HTML report at the end. This is the error I get after the tests run:
Testing myapp.ebin: EXIT, reason {
    {badmatch,{error,enoent}},
    [{test_server_ctrl,start_minor_log_file1,4,
         [{file,"test_server_ctrl.erl"},{line,1959}]},
     {test_server_ctrl,run_test_case1,11,
         [{file,"test_server_ctrl.erl"},{line,3761}]},
     {test_server_ctrl,run_test_cases_loop,5,
         [{file,"test_server_ctrl.erl"},{line,3032}]},
     {test_server_ctrl,run_test_cases,3,
         [{file,"test_server_ctrl.erl"},{line,2294}]},
     {test_server_ctrl,ts_tc,3,
         [{file,"test_server_ctrl.erl"},{line,1434}]},
     {test_server_ctrl,init_tester,9,
         [{file,"test_server_ctrl.erl"},{line,1401}]}]}
This happened sometimes in Erlang R15 and older if the test function names were either too long or had too many underscores in the name (which I suspect is also a bug), or when too many tests failed (which means Common Test is useless to me for TDD). But now it happens on every ct:run from Common Test in R15B01. Does anyone know how I can work around this? Has anyone had any success with TDD and Common Test on Windows?
Given the last comment, you might want to disable the built-in hooks. You can do this by passing the following to ct:run/1 or ct_run:
{enable_builtin_hooks,false}
That should disable the cth_log_redirect hook and may solve your problem during overload.
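For example, from the Erlang shell (the "test" directory is illustrative):
ct:run_test([{dir, "test"}, {enable_builtin_hooks, false}]).
or, equivalently, from the command line:
ct_run -dir test -enable_builtin_hooks false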

Faster way of testing your prolog program

I am new to Prolog, and the task of launching the Prolog interpreter from the terminal, typing consult('some_prolog_program.pl'), and then testing the predicate you just wrote is very time consuming. Is there a way to run a scripted test to speed up development?
For example in C I can write a main where I would use the functions I defined, I can then execute:
make && ./a.out
to test the code, can I do something similar with Prolog?
You can have the interpreter always open and then recompile the file.
You can auto-run a predicate after compiling the file:
:- foo(4,2).
This will run foo(4,2) when the line is encountered in the file.
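For instance (foo/2 is a made-up predicate for illustration):
foo(X, Y) :- Z is X + Y, format('~w + ~w = ~w~n', [X, Y, Z]).
:- foo(4, 2). % prints "4 + 2 = 6" as soon as this line is consulted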
There are flags that can be used while launching (most) Prolog interpreters that allow you to compile a file and run predicates (check the man page). This way you could make a Bash script. The following will consult file.pl and run foo/0 using SWI-Prolog:
#!/bin/sh
exec swipl -q -f none -g "load_files([file],[silent(true)])" \
-t foo -- $*
This goal will unify Arguments with a list of the flags you gave at the command line:
current_prolog_flag(argv, Arguments)
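A tiny sketch of how it could be used (main/0 is a name chosen for illustration):
main :-
    current_prolog_flag(argv, Arguments),
    format('got arguments: ~w~n', [Arguments]).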
But unless you are going to run a lot of tests, I don't think that writing all this extra code will be faster.
Personally I really like the flexibility of testing any predicate at any time with or without tracing (see trace/0) without having to write extra code to call them (unlike in C).
P.S. About reloading the file without leaving the interpreter: you might have some problems if you have used dynamic predicates or global variables; you will have to do some cleaning.
You can invoke a test file from the command-line with prolog +l <file>
Also, you can build a single run_tests predicate that exercises a series of calls and validates the actual results against expected results. Here's an article with a good worked-out example: http://kenegozi.com/blog/2008/07/24/unit-testing-in-prolog
In SWI, you can load things as usual. Then, when you edit your files you simply say make. on the toplevel and it checks all dependencies automatically and only reloads the modified files.
For bigger projects it does make a lot of sense to use makefiles. In particular to do unit testing. See SWI's package plunit.
For simple scripts in SWI-Prolog, using the REPL to test the code manually is usually good enough. Changed files can be reloaded via make/0 (?- make. at the toplevel). Just keep the Prolog REPL running while editing; then save the edits, run make. in the REPL, and hit ↑, ↑, Enter to execute the last query before the make. from history.
The main benefit of REPL is its interactivity:
You may fiddle with the arguments.
Transition to debugging or tracing (both command line and graphical) is easy.
You don't need to perform I/O to print the result. Output is handled by the toplevel, which prints the substitution. You see the whole substitution, not only the part of it you happen to print (possibly accidentally overlooking other parts).
You may interactively choose how many substitutions you want to see for a goal that succeeds multiple times.
It is obvious if there is a choice point left after the last result returned by a non-deterministic predicate, which is hard to observe otherwise. In that case, false. is printed when backtracking beyond the last result.
If you need to preserve the test calls to repeat them later, create a protocol (transcript or "log" of the interactive session) and edit it to become a script, or even a test suite (see below). The protocol is a plain text file with escape sequences for the terminal, containing a verbatim copy of what you see during the interactive session. View the protocol using cat protocol.txt on Linux (and other *NIXes) or type protocol.txt on Windows.
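In SWI-Prolog, recording such a protocol could look like this:
?- protocol('protocol.txt'). % start copying the session to this file
... (run your test queries as usual) ...
?- noprotocol. % stop recording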
If interactivity is not needed, perform the test calls from the command line non-interactively. Let's test the CLP(FD) factorial example n_factorial/2, saved in factorial.pl (don't forget to add :- use_module(library(clpfd)). when copying the code):
$ swipl -q -t "between(0, 9, N), n_factorial(N, F), format('~D ', F), fail." factorial.pl
1 1 2 6 24 120 720 5,040 40,320 362,880
On Windows, you may need to specify the full path to swipl.exe, as it's probably not on the PATH.
If the call is always the same, you may save it to a shell script or Makefile (run would be a good name for the target).
In your current workflow for testing functions in C, you create a new program and call the function under test from its entry point (main function). Prolog scripts can have an entry point, too. See library(main). Prolog does not require compilation, so you can just directly call the script (./test.pl) without calling Make first.
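A minimal sketch of such a script (test.pl; the test goal is made up):
#!/usr/bin/env swipl
:- use_module(library(main)).
:- initialization(main, main).

% main/1 receives the command-line arguments; the exit status reports the result.
main(_Argv) :-
    (   2 =:= 1 + 1
    ->  format('ok~n'), halt(0)
    ;   format('failed~n'), halt(1)
    ).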
For larger projects, you may want to create a less ad-hoc test suite. A unit testing framework like PlUnit is needed. Its use is beyond the scope of this answer; see the documentation.

How do you use CTEST_CUSTOM_PRE_TEST?

I've searched all the docs but can't seem to find a single example of using CTEST_CUSTOM_PRE_TEST.
Basically I need to start and run some commands on the server before the test runs. So I need to add a few pre-test steps. What's the syntax of CTEST_CUSTOM_PRE_TEST?
CTEST_CUSTOM_PRE_TEST( ??? what to put here ??? )
ADD_TEST(MyTest MyTestCommand)
CTEST_CUSTOM_PRE_TEST is a variable used in the context of running a ctest dashboard. It should either be set directly in the ctest -S script itself, or in a CTestCustom.cmake file at the top of your build tree.
In either file, an example value might be:
set(CTEST_CUSTOM_PRE_TEST "perl prepareForTesting.pl -with-this -and-that")
It should be a single command line, properly formatted for running on the system you're on. It runs once during a ctest_test call, before all the tests run. Similarly, there is also a CTEST_CUSTOM_POST_TEST variable, that should also be a single command line, but runs after all the tests are done.
Quoting and escaping args with spaces, quotes and backslashes may be challenging ... but maybe you won't need that, either.
I do not know of a real world example of this that I can point you to, but I can read the ctest source code... ;-)
Place set(CTEST_CUSTOM_PRE_TEST ...) in a file which, during the cmake run, is copied to ${CMAKE_BINARY_DIR}/CTestCustom.cmake. For details, see https://stackoverflow.com/a/37748933/1017348.
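One way to do that copy at configure time (CTestCustom.cmake.in is an assumed name for the template in your source tree):
# In CMakeLists.txt: copy the custom-test file into the build tree
configure_file(
  ${CMAKE_SOURCE_DIR}/CTestCustom.cmake.in
  ${CMAKE_BINARY_DIR}/CTestCustom.cmake
  COPYONLY)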
For OpenSCAD on headless Linux, we attempt to start up a virtual framebuffer before ctest runs. We don't use PRE_TEST, though: we build our own CTestCustom.cmake in the build directory during the 'cmake' run. (We do use POST_TEST, but there were a few recent versions of cmake where POST_TEST was broken.)
You can find the code here https://github.com/openscad/openscad/blob/master/tests