Get output from EXE launched in Tcl and pause further processing until EXE finishes

I'm launching a single EXE from a Tcl script, and would like to get the output from the EXE and display it with a simple puts command to provide user feedback. At the moment, I am launching the EXE in a CMD window where the user can see the progress, and waiting for the EXE to create a file. The first script here works whenever the output file luc.csv is created.
file delete -force luc.csv
set cmdStatus [open "| cmd.exe /c start /wait uc.exe"]
while {![file exists "luc.csv"]} {
}
# continue after output file is created
However, sometimes the file is not created, so I can't rely on this method.
I've been trying to get my head around the use of fileevent and pipes, and have tried several incarnations of the script below, but I'm obviously either missing the point or just not getting the syntax right.
puts "starting"
set fifo [open "| cmd.exe /c start uc.exe" r]
fconfigure $fifo -blocking 0
proc read_fifo {fifo} {
    puts "calling read_fifo"
    if {[gets $fifo x] < 0} {
        if {[eof $fifo]} {
            close $fifo
        }
    }
    puts "x is $x"
}
fileevent $fifo readable [list read_fifo $fifo]
vwait forever
puts"finished"
Any help would be greatly appreciated!

If you just want to launch a subprocess and do nothing else until it finishes, Tcl's exec command is perfect.
exec cmd.exe /c start /wait uc.exe
(Since you're launching a GUI application via start, there won't be any meaningful result unless there's an error in launching. And in that case you'll get a catchable error.) Things only get complicated when you want to do several things at once.
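If you want to react to a launch failure yourself rather than let the error propagate, a minimal sketch is to wrap the exec in catch:
if {[catch {exec cmd.exe /c start /wait uc.exe} result]} {
    puts "launch failed: $result"
} else {
    puts "uc.exe finished"
}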
To make your original code work, you need a way to detect when the subprocess has finished. Tcl is just vwaiting forever because your code tells it to; we need to give the wait something to finish on. A good way is to make the wait be for something to happen to the fifo variable, which can be unset after the pipe is closed, as it no longer contains anything useful. (vwait becomes eligible to return once the variable it is told about is either written to or destroyed; it uses a variable trace under the covers. It also won't actually return until the event handlers it is currently processing return.)
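As a tiny standalone illustration of that mechanism (with a hypothetical variable name), the following wakes up after one second, when the watched variable is destroyed:
set ::pending 1
after 1000 {unset ::pending}  ;# an event handler destroys the variable later
vwait ::pending               ;# returns once ::pending is written or destroyed
puts "woke up"
With that in mind, here is the corrected version of your script: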
puts "starting"
# ***Assume*** that this code is in the global scope
set fifo [open "| cmd.exe /c start uc.exe" r]
fconfigure $fifo -blocking 0
proc read_fifo {} {
    global fifo
    puts "calling read_fifo"
    if {[gets $fifo x] < 0} {
        if {[eof $fifo]} {
            close $fifo
            unset fifo
        }
    } else {
        puts "x is $x"
    }
}
fileevent $fifo readable read_fifo
vwait fifo
puts "finished"
That ought to work. The lines that were changed were the declaration of read_fifo (no variable passed in), the adding of global fifo just below (because we want to work with that instead), the adding of unset fifo just after close $fifo, the setting up of the fileevent (don't pass an extra argument to the callback), and the vwait (because we want to wait for fifo, not forever). The final puts is also moved into an else branch so that it only fires when a line was actually read; otherwise the handler would error at end-of-file, when x was never set.

How do I nohup a here document in the background from within a ksh script?

I have a ksh script that comes to a point where it must run a long-running command. The long-running command is executed via a heredoc in the script presently. I want to throw the command (represented by cat in my samples below) into the background, but only after taking its input from the heredoc. Since the "nohup cat.." finishes instantaneously and I see an empty nohup.out file, I am not sure the script is doing what I need it to do, which is to spawn a nohupped version of the heredoc command and exit, leaving the command to run for however long it takes to complete.
I am using cat as the "command" since it too sits there and just waits for console input.
Working version without nohupping:
#!/bin/ksh
cat << EOF
Hello
World
HOw are you!
EOF
Trying to nohup the heredoc:
#!/bin/ksh
nohup cat <<EOF
Hello
World
HOw are you!
EOF
Seems to work; output is going into nohup.out as expected. But now, how to throw that into the background? I tried the below (and many variations of it):
#!/bin/ksh
nohup cat & <<EOF
Hello
World
HOw are you!
EOF
but, nohup.out is empty, so I am not sure what the above is doing. There is no running "cat" in the background which tells me it ran and completed at least - or maybe didn't run at all. No other variation I can invent for trying to throw the heredoc into the background from my ksh script works.
Any suggestions on a way to achieve this using heredoc?
Here are two options.
You could try wrapping the nohup sequence inside a function, which may look cleaner, and then invoking that function with a trailing ampersand, like this:
#!/bin/ksh
function dostuff
{
nohup cat <<- END
Hello
World
How are you!
END
}
dostuff &
wait
You can also try wrapping the commands to be backgrounded in a { } grouping block, separating the commands inside the braces with ; or newlines, and then backgrounding the whole block, as in this fragment (a runnable version follows it):
{ nohup cat <<- EOF
...
EOF
whatever
} &
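For instance, a concrete, runnable version of that pattern (with cat standing in for your long-running command and echo as a hypothetical follow-up command) might look like:
#!/bin/ksh
{ nohup cat <<EOF
Hello
World
EOF
echo "group finished"
} &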

Is it possible to enable exit on error behavior in an interactive Tcl shell?

I need to automate a huge interactive Tcl program using Tcl expect.
As I realized, this territory is really dangerous, as I need to extend the already existing mass of code, but I can't rely on errors actually causing the program to fail with a positive exit code as I could in a regular script.
This means I have to think about every possible thing that could go wrong and "expect" it.
What I currently do is use a "die" procedure instead of raising an error in my own code, which exits automatically. But this kind of error condition cannot be caught, and it makes errors hard to detect, especially in code not written by me, since ultimately most library routines will be error-based.
Since I have access to the program's Tcl shell, is it possible to enable fail-on-error?
EDIT:
I am using Tcl 8.3, which is a severe limitation in terms of available tools.
Examples of errors I'd like to automatically exit on:
% puts $a(2)
can't read "a(2)": no such element in array
while evaluating {puts $a(2)}
%
% blublabla
invalid command name "blublabla"
while evaluating blublabla
%
As well as any other error that makes a normal script terminate.
These can bubble up from 10 levels deep within procedure calls.
I also tried redefining the global error command, but not all errors that can occur in Tcl use it. For instance, the above "command not found" error did not go through my custom error procedure.
Since I have access to the program's Tcl shell, is it possible to
enable fail-on-error?
Let me try to summarize in my words: You want to exit from an interactive Tcl shell upon error, rather than having the prompt offered again?
Update
I am using Tcl 8.3, which is a severe limitation in terms of available
tools [...] only source patches to the C code.
As you seem to be deep down in that rabbit hole, why not add another source patch?
--- tclMain.c 2002-03-26 03:26:58.000000000 +0100
+++ tclMain.c.mrcalvin 2019-10-23 22:49:14.000000000 +0200
@@ -328,6 +328,7 @@
 Tcl_WriteObj(errChannel, Tcl_GetObjResult(interp));
 Tcl_WriteChars(errChannel, "\n", 1);
 }
+Tcl_Exit(1);
 } else if (tsdPtr->tty) {
 resultPtr = Tcl_GetObjResult(interp);
 Tcl_GetStringFromObj(resultPtr, &length);
This is untested; the Tcl 8.3.5 sources don't compile for me. But this section of Tcl's internals is comparable to the current sources: I tested the equivalent change against my Tcl 8.6 source installation.
For the record
With a stock shell (tclsh), this is a little fiddly, I am afraid. The following might work for you (though I can imagine cases where it would fail). The idea is:
to intercept writes to stderr (which is where an interactive shell sends error messages before returning to the prompt);
to discriminate between arbitrary writes to stderr and actual error cases, using the global variable ::errorInfo as a sentinel.
Step 1: Define a channel interceptor
oo::class create Bouncer {
    method initialize {handle mode} {
        if {$mode ne "write"} {error "can't handle reading"}
        return {finalize initialize write}
    }
    method finalize {handle} {
        # NOOP
    }
    method write {handle bytes} {
        if {[info exists ::errorInfo]} {
            # This is an actual error;
            # 1) Print the message (as usual), but to stdout
            fconfigure stdout -translation binary
            puts stdout $bytes
            # 2) Call on [exit] to quit the Tcl process
            exit 1
        } else {
            # Non-error write to stderr, proceed as usual
            return $bytes
        }
    }
}
Step 2: Register the interceptor for stderr in interactive shells
if {[info exists ::tcl_interactive] && $::tcl_interactive} {
    chan push stderr [Bouncer new]
}
Once registered, this will make your interactive shell behave like so:
% puts stderr "Goes, as usual!"
Goes, as usual!
% error "Bye, bye"
Bye, bye
Some remarks
You need to be careful in the Bouncer's write method: by the time it runs, the error message has already been massaged for the character encoding (hence the fconfigure call).
You might want to put this into a Tcl package or Tcl module, so you can load the bouncer using package require.
If your program writes to stderr while the errorInfo variable happens to be set (as a left-over from an earlier error), this will trigger an unintended exit.
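If that left-over case worries you, one hedged workaround (for the modern-Tcl setup above, at the cost of clearing $errorInfo between commands) is to reset the sentinel every time the prompt is printed, so only errors raised since the last prompt trigger the bouncer; tcl_prompt1, if set, is evaluated to print each prompt:
set ::tcl_prompt1 {
    unset -nocomplain ::errorInfo   ;# clear the sentinel before each prompt
    puts -nonewline stdout "% "
    flush stdout
}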

Execute any bash command, get the results of stdout/stderr immediately and use stdin

I would like to execute any bash command. I found Command::new, but I'm unable to execute "complex" commands such as ls ; sleep 1; ls. Moreover, even if I put this in a bash script and execute it, I only get the result at the end of the script (as explained in the process docs). I would like to get the output as soon as the command prints it (and to be able to feed it input as well), the same way we can in bash.
Command::new is indeed the way to go, but it is meant to execute a program. ls ; sleep 1; ls is not a program, it's instructions for some shell. If you want to execute something like that, you would need to ask a shell to interpret that for you:
Command::new("/usr/bin/sh").args(&["-c", "ls ; sleep 1; ls"])
// your complex command is just an argument for the shell
To get the output, there are two ways:
the output method is blocking and returns the outputs and the exit status of the command.
the spawn method is non-blocking, and returns a handle containing the child process's stdin, stdout and stderr so you can communicate with the child, and a wait method to wait for it to exit cleanly. Note that by default the child inherits its parent's file descriptors, and you might want to set up pipes instead:
You should use something like:
use std::io::{BufRead, BufReader, Write};
use std::process::{Command, Stdio};

let mut child = Command::new("/usr/bin/sh")
    .args(&["-c", "ls ; sleep 1; ls"])
    .stderr(Stdio::null())    // don't care about stderr
    .stdout(Stdio::piped())   // set up stdout so we can read it
    .stdin(Stdio::piped())    // set up stdin so we can write to it
    .spawn().expect("Could not run the command"); // finally run the command

// Write to the child's stdin (these particular commands ignore it; shown
// for demonstration), then read its stdout line by line as it arrives.
child.stdin.as_mut().unwrap().write_all(b"some input\n").unwrap();
for line in BufReader::new(child.stdout.take().unwrap()).lines() {
    println!("{}", line.unwrap());
}
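By contrast, if you don't need the output as it arrives, the blocking output method is simpler. A minimal sketch with the same shell command:
use std::process::Command;

let out = Command::new("/usr/bin/sh")
    .args(&["-c", "ls ; sleep 1; ls"])
    .output()                              // blocks until the shell exits
    .expect("Could not run the command");
println!("status: {}", out.status);
println!("stdout: {}", String::from_utf8_lossy(&out.stdout));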

Bad spawn_id while executing expect command

I am writing a script that will copy Valgrind onto whatever shelf that we enter on the command line. The syntax is as follows:
vgrindCopy [shelf number]
For some reason, the files will copy over without any issue, but after the copy completes the following error is observed:
bad spawn_id (process died earlier?)
while executing
"expect "#""
Here is a copy of the relevant code:
function login_shelf {
    expect -c "
        set timeout 15
        spawn $1
        expect \"password:\"
        send \"$PW\r\"
        expect \"#\"
        sleep 1
        exit
    "
}
# login and make the valgrind directory at /sfs/software/shelf/current
set -- /opt/swe/tools/ext/gnu/valgrind-3.7.0/i686-linux2.6/lib/valgrind/*
login_shelf "/opt/corp/projects/shelftools/bin/app rsync -Lau $* $shelf:/shelf/valgrind"
After playing around with the code, I found that if I remove the line "expect \"#\"", then the program doesn't copy any of the files over anymore. What's odd as well is that I'm seeing the issue when I run the script, but a co-worker is not.
Has anyone had a similar issue and determined the cause? Any help would be greatly appreciated as always!
Your code spawns the rsync and, at the expect \"#\", waits for rsync to output a #, which it never does; when rsync exits, expect reports the bad spawn_id error.
When you remove the expect \"#\", the expect script exits immediately, terminating the rsync before it can copy anything.
Instead of expect \"#\" you should wait for rsync to exit:
expect eof
wait
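Applied to your function, the fix looks like this (same variables as in your script; the timeout is lifted while the copy runs, since rsync may need more than 15 seconds):
function login_shelf {
    expect -c "
        set timeout 15
        spawn $1
        expect \"password:\"
        send \"$PW\r\"
        set timeout -1  ;# the copy may outlast the 15-second timeout
        expect eof
        wait
    "
}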

Terminate NSTask even if app crashes

If my app crashes I don't get a chance to terminate the NSTasks it spawned, so they stay around eating up resources.
Is there any way to launch a task such that it terminates when your app terminates (even if it crashes)?
I suppose you need to handle application crashes manually and terminate the spawned processes in a different way. For example, you can check the following article, http://cocoawithlove.com/2010/05/handling-unhandled-exceptions-and.html, and in the exception/signal handler, when the application crashes, send a terminate signal to your child processes using kill(pid, SIGKILL). For this you also need to keep the pids of the child processes (NSTask's -(int)processIdentifier) somewhere the handler can reach them, as sketched below.
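A rough sketch of that idea in C (hypothetical names; you'd record the pid right after launching the task and install the handler early in main):
#include <signal.h>
#include <unistd.h>

static pid_t childPid = 0;   /* set from the task's processIdentifier after launch */

static void crashHandler(int sig) {
    if (childPid > 0)
        kill(childPid, SIGKILL);   /* take the spawned task down with us */
    signal(sig, SIG_DFL);          /* restore default handling... */
    raise(sig);                    /* ...and re-raise so the crash still reports */
}

/* in main(): signal(SIGSEGV, crashHandler); signal(SIGABRT, crashHandler); */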
What I've done in the past is create a pipe in the parent process and pass the write end of that pipe into the child. The parent never closes the read end, and the child watches its write end; if that end is ever widowed, it means the parent exited. You'll also need to mark the parent's end of the pipe close-on-exec so the child doesn't inherit it.
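In plain POSIX terms, that trick looks roughly like this (a sketch under assumed setup; inherited_write_fd stands for wherever the child keeps its end):
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

/* Parent, before spawning the child: */
int fds[2];
pipe(fds);                           /* fds[0] = read end, fds[1] = write end */
fcntl(fds[0], F_SETFD, FD_CLOEXEC);  /* the parent's read end won't leak into the child */
/* ...pass fds[1] to the child and keep fds[0] open for the parent's lifetime... */

/* Child: block until the parent goes away. */
struct pollfd pfd = { .fd = inherited_write_fd, .events = 0 };
poll(&pfd, 1, -1);   /* POLLERR is reported once the read end closes, i.e. the parent died */
_exit(0);            /* clean up and quit */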
I actually wrote a program/script that does just this. Here's the shell script that was the foundation of it. The project actually implements it within Xcode as a single-file executable; weird that Apple makes this so precarious, IMO.
#!/bin/bash
echo "arg1 is the SubProcess: $1, arg2 is sleepytime: $2, and arg3 is ParentPID, aka $$: $3"
CHILD=$1; SLEEPYTIME=${2:-10}; PARENTPID=${3:-$$}
GoSubProcess () {                   # define functions, start script at very end.
    $CHILD &                        # "&" puts SubP in background subshell
    CHILDPID=$!                     # what all the fuss is about.
    if kill -0 $CHILDPID; then      # rock the cradle to make sure it aint dead
        echo "Child is alive at $CHILDPID"  # glory be to god
    else
        echo "couldnt start child. dying."; exit 2
    fi
    babyRISEfromtheGRAVE            # keep an eye on child process
}
babyRISEfromtheGRAVE () {
    echo "PARENT is $PARENTPID"     # remember where you came from, like j.lo
    while kill -0 $PARENTPID; do    # is that fount of life, the nstask parent, alive?
        echo "Parent is alive, $PARENTPID is its PID"
        sleep $SLEEPYTIME           # you lazy boozehound
        if kill -0 $CHILDPID; then  # check on baby.
            echo "Child is $CHILDPID and is alive."
            sleep $SLEEPYTIME       # naptime!
        else
            echo "Baby, pid $CHILDPID died! Respawn!"
            GoSubProcess            # restart daemon if it dies
        fi
    done                            # if this while loop ends, the parent PID crashed.
    logger "My Parent Process, aka $PARENTPID died!"
    logger "I'm killing my baby, $CHILDPID, and myself."
    kill -9 $CHILDPID; exit 1       # process table cleaned. all three tasks are dead. long live nstask.
}
GoSubProcess                        # this is where we start the script.
exit 0                              # this is where we never get to
You could have your tasks periodically check to see if their parent process still exists.
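For example, on POSIX systems a child can compare getppid() against the pid it started with; a minimal sketch in C:
#include <unistd.h>

pid_t originalParent = getppid();
for (;;) {
    if (getppid() != originalParent)  /* reparented: the original parent died */
        _exit(0);
    sleep(1);                         /* check once a second */
}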