How do I prevent a script from crashing as a result of a failed proc? - raku

I've got this:
try { run 'tar', '-zxvf', $path.Str, "$dir/META6.json", :err }
Despite being in a try{} block, this line is still causing my script to crash:
The spawned command 'tar' exited unsuccessfully (exit code: 1, signal: 0)
in block at ./all-versions.raku line 27
in block at ./all-versions.raku line 16
in block <unit> at ./all-versions.raku line 13
Why isn't the try{} block allowing the script to continue and how can I get it to continue?

That's because the run didn't fail (yet). run returns a Proc object, and that by itself doesn't throw (yet).
try just returns that Proc object. As soon as the returned value is used, however (for instance, by being sunk), it will throw.
Compare (with immediate sinking):
$ raku -e 'run "foo"'
The spawned command 'foo' exited unsuccessfully (exit code: 1, signal: 0)
with:
$ raku -e 'my $a = run "foo"; say "ran, going to sink"; $a.sink'
ran, going to sink
The spawned command 'foo' exited unsuccessfully (exit code: 1, signal: 0)
Now, what causes the usage of the Proc object in your code is unclear; you'd have to show more code.
A way to check for success is to check the exit code:
$ raku -e 'my $a = run "foo"; say "failed" if $a.exitcode > 0'
failed
$ raku -e 'my $a = run "echo"; say "failed" if $a.exitcode > 0'
Or alternatively, use Jonathan's solution:
$ raku -e 'try sink run "foo"'
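Applied back to the tar command from the question, a minimal sketch (assuming $path and $dir are set as in the original code): sinking the Proc inside the try makes a failed tar throw right there, where the try can catch it.
# sink the Proc inside the try so a non-zero exit throws immediately and is caught
try sink run 'tar', '-zxvf', $path.Str, "$dir/META6.json", :err;
note "tar extraction failed: {$!.message}" if $!;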

Related

What's a bad file descriptor?

I have the following SWI-Prolog system in a file called 'system.pl':
helloWorld :- read(X), write(X).
I want to test it, so I wrote this:
:- begin_tests(helloWorld_test).

test(myTest, true(Output == "hello")) :-
    with_output_to(string(Output), getEntry).

:- end_tests(helloWorld_test).

getEntry :-
    open('testcase.test', read, Myfile),
    set_input(Myfile),
    process_create(path(swipl), ['-g', 'main', '-t', 'halt', 'system.pl'],
                   [stdin(stream(Myfile)), stdout(pipe(Stream))]),
    copy_stream_data(Stream, current_output),
    close(Myfile).
testcase.test contains the following:
hello.
OK, now, when I call swipl -g run_tests -t halt system.pl I get this:
% PL-Unit: helloWorld_test ERROR: -g helloWorld: read/1: I/O error in read on stream user_input (Bad file descriptor)
ERROR: c:/programasvscode/prolog/programasrandom/system.pl:40:
test myTest: wrong answer (compared using ==)
ERROR: Expected: "hello"
ERROR: Got: ""
done
% 1 test failed
% 0 tests passed
ERROR: -g run_tests: false
Warning: Process "c:\swipl\bin\swipl.exe": exit status: 2
I tried using read/2 with current_input, but I got the same error, just with read/2 in place of read/1.
What does this error mean, and how can I fix it?

How to call a process in workflow.onError

I have this small pipeline:
process test {
    """
    echo 'hello'
    exit 1
    """
}

workflow.onError {
    process finish_error {
        script:
        """
        echo 'blablabla'
        """
    }
}
I want to trigger a Python script in case the pipeline has an error, using the finish_error process, but this process does not seem to be triggered even with a simple echo example.
nextflow run test.nf
N E X T F L O W ~ version 20.10.0
Launching `test.nf` [cheesy_banach] - revision: 9020d641ca
executor > local (1)
[56/994298] process > test [100%] 1 of 1, failed: 1 ✘
[- ] process > finish_error [ 0%] 0 of 1
Error executing process > 'test'
Caused by:
Process `test` terminated with an error exit status (1)
Command executed:
echo 'hello'
exit 1
Command exit status:
1
Command output:
hello
Command wrapper:
hello
Work dir:
/home/joost/nextflow/work/56/9942985fc9948fd9bf7797d39c1785
Tip: when you have fixed the problem you can continue the execution adding the option `-resume` to the run command line
How can I trigger this finish_error process, and how can I view its output?
The onError handler is invoked when a process causes pipeline execution to terminate prematurely. Since a Nextflow pipeline is really just a series of processes joined together, launching another pipeline process from within an event handler doesn't make much sense to me. If your python script should be run using the local executor, you can just execute it in the usual way. This example assumes your script is executable and has an appropriate shebang:
process test {
    """
    echo 'hello'
    exit 1
    """
}

workflow.onError {
    def proc = "${baseDir}/test.py".execute()
    proc.waitFor()
    println proc.text
}
Run using:
nextflow run -ansi-log false test.nf
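If the handler should also tell the script what went wrong, Nextflow's workflow metadata (for example workflow.errorMessage) is readable inside onError. A sketch under that assumption, passing the message as an argument to the same hypothetical test.py:
workflow.onError {
    // workflow.errorMessage holds the failure message; it may be null
    def proc = ["${baseDir}/test.py", "${workflow.errorMessage}"].execute()
    proc.waitFor()
    println proc.text
}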

Execute any bash command, get the results of stdout/stderr immediately and use stdin

I would like to execute any bash command. I found Command::new, but I'm unable to execute "complex" commands such as ls ; sleep 1; ls. Moreover, even if I put this in a bash script and execute it, I only get the result at the end of the script (as explained in the process docs). I would like to get the result as soon as the command prints it (and to be able to write to stdin as well), the same way we can in bash.
Command::new is indeed the way to go, but it is meant to execute a program. ls ; sleep 1; ls is not a program, it's instructions for some shell. If you want to execute something like that, you would need to ask a shell to interpret that for you:
Command::new("/usr/bin/sh").args(&["-c", "ls ; sleep 1; ls"])
// your complex command is just an argument for the shell
To get the output, there are two ways:
the output method is blocking and returns the outputs and the exit status of the command.
the spawn method is non-blocking and returns a handle containing the child process's stdin, stdout and stderr so you can communicate with it, plus a wait method to wait for it to exit cleanly. Note that by default the child inherits its parent's file descriptors, so you might want to set up pipes instead.
You should use something like:
let child = Command::new("/usr/bin/sh")
    .args(&["-c", "ls ; sleep 1; ls"])
    .stderr(std::process::Stdio::null())  // don't care about stderr
    .stdout(std::process::Stdio::piped()) // set up stdout so we can read it
    .stdin(std::process::Stdio::piped())  // set up stdin so we can write to it
    .spawn().expect("Could not run the command"); // finally run the command
write_something_on(child.stdin);
read(child.stdout);
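For a concrete version of those two placeholder calls, here is a minimal, self-contained sketch using only the standard library (the shell command line is just an illustration): it writes one line to the child's stdin, then streams stdout line by line as soon as each line is printed.
use std::io::{BufRead, BufReader, Write};
use std::process::{Command, Stdio};

fn main() {
    // spawn the shell with piped stdin and stdout
    let mut child = Command::new("/usr/bin/sh")
        .args(&["-c", "read line; echo \"got: $line\"; ls ; sleep 1; ls"])
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()
        .expect("could not run the command");

    // write one line to the child's stdin; take() moves the handle out
    let mut stdin = child.stdin.take().expect("stdin was not piped");
    writeln!(stdin, "hello").expect("could not write to stdin");
    drop(stdin); // close stdin so the shell's `read` completes

    // read stdout line by line, as soon as each line arrives
    let stdout = child.stdout.take().expect("stdout was not piped");
    for line in BufReader::new(stdout).lines() {
        println!("{}", line.expect("could not read line"));
    }

    child.wait().expect("child was not running");
}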

Declare bash variables inside sql EOF

How do I declare a variable in a bash command? See the "Doesn't work" lines below.
I thought we could run almost any bash statement by putting ! or host in front of the line:
#!/bin/bash
sqlplus scott/tiger@orcl << EOF
! export v10="Hi"     Doesn't work, why?
! echo $v10           Doesn't work, why?
! echo "Done"         Works perfectly, as do other bash commands
select * from dept;   Works perfectly
exit
EOF
Thank you
What @jordanm says "probably" is exactly what is happening. When you specify a host command from within sqlplus, a separate shell process is spawned, the command is executed by that process, then that process terminates and control returns to sqlplus. Any environment variables set in that child shell process are good only within it, so when it terminates, they are gone.
As for your specific lines that "work" and "don't work": export v10="Hi" does work, but the export command produces no stdout, and, as explained, that variable v10 ceases to exist once the child process completes and control returns to sqlplus. The echo $v10 also works, but since it runs in a new shell process, it has no value for $v10, so there is nothing to echo.
What are you trying to accomplish by setting environment variables from within sqlplus?
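If the goal is to get shell values into the sqlplus session rather than the other way around, one option is to set them in the parent shell before the heredoc: with an unquoted EOF delimiter, bash expands the variables before sqlplus ever sees the input. A minimal sketch reusing the connect string and table from the question:
#!/bin/bash
# set the value in the parent shell; bash expands $v10 inside the
# unquoted heredoc before sqlplus reads it
v10="Hi"
sqlplus -s scott/tiger@orcl << EOF
prompt $v10
select * from dept;
exit
EOF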
Found it; all I had to do was:
<< EOF
whenever sqlerror exit failure rollback
whenever oserror exit failure rollback
@scriptname.sql
EXIT
EOF

Input string in J script hangs

I am writing a script in J for Linux with #!, but the script hangs. After Ctrl-D, the script echoes the entered value; a normal ENTER only puts the cursor on a new line.
#!/path/jconsole
a =. 1!:1]3
echo a
exit ''
You can't read a single line of text while j is in script mode, but you can schedule something to run the next time j returns to immediate execution mode by setting the immex phrase with 9!:27 and then setting the immex bit to 1 with 9!:29. Here's an example:
#!/usr/bin/env j
NB. demo showing how to make a simple repl in j.
readln =: [: (1!:01) 1:
donext =: [: (9!:29) 1: [ 9!:27
main =: verb define
  echo ''
  echo 'main loop. type ''bye'' to exit.'
  echo '--------------------------------'
  while. (s:'`bye') ~: s:<input=:readln'' do.
    echo ".input
  end.
  echo '--------------------------------'
  echo 'loop complete. returning to j.'
  NB. or put ( exit'' ) here to exit j.
)
donext 'main _'
The thing is that (1!:1)&3 reads until end of file, and in Linux, pressing Ctrl-D signals end-of-file.
If this is not what you're looking for, I'm afraid there's nothing else but your "ugly trick":
a=. shell 'read foo; echo -n $foo'
as (1!:1)&1 only works during a session for some reason ...
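For completeness, here is what that fallback could look like inside a #! script; this assumes the task addon, which defines the shell verb, can be loaded with require:
#!/usr/bin/env j
require 'task'                        NB. assumed: task addon provides the shell verb
echo 'enter a value:'
a =. shell 'read foo; echo -n $foo'   NB. let a child shell do the line read
echo 'you typed: ' , a
exit ''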