I'm attempting to use the TProcess unit to execute ssh, connect to one of my servers, and present me with the shell. It's a rewrite of a tool I had in Ruby, as Ruby's execution time is very slow. When I run my Process.Execute function, I am presented with the shell, but it is immediately backgrounded. Running pgrep ssh reveals that it is running, but I have no access to it whatsoever; using fg does not bring it back. The code for this segment is as follows:
if HasOption('c', 'connect') then begin
  TempFile := GetRecord(GetOptionValue('c', 'connect'));
  AProcess := TProcess.Create(nil);
  AProcess.Executable := '/usr/bin/ssh';
  AProcess.Parameters.Add('-p');
  AProcess.Parameters.Add(TempFile.Port);
  AProcess.Parameters.Add('-ntt');
  AProcess.Parameters.Add(TempFile.Username + '@' + TempFile.Address);
  AProcess.Options := [];
  AProcess.ShowWindow := swoShow;
  AProcess.InheritHandles := False;
  AProcess.Execute;
  AProcess.Free;
  Terminate;
  Exit;
end;
TempFile is a variable of type TProfile, a record containing information about the server. The cataloging and retrieval system works fine, but pulling up the shell does not.
...
AProcess.ShowWindow:= swoShow;
AProcess.InheritHandles:= False;
AProcess.Execute;
AProcess.Free;
...
You're starting the process but not waiting for it to exit. This is from the documentation on Execute:
Execute actually executes the program as specified in CommandLine, applying as much as of the specified options as supported on the current platform.
If the poWaitOnExit option is specified in Options, then the call will only return when the program has finished executing (or if an error occured). If this option is not given, the call returns immediatly[sic], but the WaitOnExit call can be used to wait for it to close, or the Running call can be used to check whether it is still running.
You should set the poWaitOnExit option in Options before calling Execute, so that Execute blocks until the process exits; or else call AProcess.WaitOnExit afterwards to explicitly wait for the process to exit.
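For instance, a minimal sketch based on the code from the question (note also that ssh's -n flag redirects stdin from /dev/null, which would prevent an interactive session, so it is dropped here):
AProcess := TProcess.Create(nil);
try
  AProcess.Executable := '/usr/bin/ssh';
  AProcess.Parameters.Add('-p');
  AProcess.Parameters.Add(TempFile.Port);
  AProcess.Parameters.Add('-tt');
  AProcess.Parameters.Add(TempFile.Username + '@' + TempFile.Address);
  // Block inside Execute until ssh exits instead of returning immediately.
  AProcess.Options := [poWaitOnExit];
  AProcess.Execute;
finally
  AProcess.Free;
end;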
I am using DB2 11.5.
I have a stored procedure that runs some complex tasks.
Before running the tasks, it first checks a log table to see whether the job is already running; if so, it signals SQLSTATE 75002 with an error message.
If it is not already running, it inserts a record for the job with status RUNNING, then runs the tasks.
When it finishes, it updates the status to FINISHED.
CREATE OR REPLACE PROCEDURE WORK.TEST_SP()
P1: BEGIN
    if exists(select 1 from db2inst1.job_log where job_name='abc' and job_status='RUNNING' and job_date=current date) then
        SIGNAL SQLSTATE '75002' SET MESSAGE_TEXT = 'Job abc is already running, please wait for it to finish';
    end if;
    insert into db2inst1.job_log values ('abc', 'RUNNING', current date);
    commit;
    -- Some complex tasks here
    call dbms_lock.sleep(120);
    update db2inst1.job_log set job_status='FINISHED' where job_name='abc' and job_date=current date;
    commit;
END P1
My question is: how do I handle SIGINT when the user presses Ctrl-C and aborts the stored procedure while the complex tasks are running?
I want it to update job_status to ABORTED when Ctrl-C occurs, so that the job does not stay "running" forever.
Edit 1:
Users run the stored procedure with a Windows .bat file on their local machine with the db2 client installed.
@echo off
if "%DB2CLP%"=="" db2cmd /c /i /w "%0" && goto :EOF
db2 connect to mydb user db2inst1 using abc123
db2 "call WORK.TEST_SP()"
IF ERRORLEVEL 1 (echo Job failed) else (echo Job done)
db2 connect reset > nul
pause
If your MS-Windows batch file gets interrupted by a Control-C or other signal, then any already started/running stored-procedures invoked by that app will continue running by default. The stored procedure will be unaware that the client application has terminated. So your batch file (cmd/bat) will terminate but any currently running stored procedure will continue to execute on the Db2-server.
You cannot send operating-system signals directly to a Db2-LUW stored procedure, as they run on the Db2-server in the background and are usually owned by a different account than the userid performing the call.
Your stored procedure should have its own condition handlers or exit handlers or undo handlers. Usually you want to issue a rollback if a hard error happens from which your procedure itself cannot recover. But Db2 itself will issue a rollback for specific sqlcodes (e.g. -911).
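For illustration, a hedged sketch of such a handler inside TEST_SP (column names follow the question's update statement; note the caveat in the answer below, this does not fire when the caller is forced off):
-- Declared at the top of P1, before the first statement.
DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN
    UPDATE db2inst1.job_log
       SET job_status = 'ABORTED'
     WHERE job_name = 'abc' AND job_date = CURRENT DATE;
    COMMIT;
    RESIGNAL;
END;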
Db2-LUW also has a sysproc.cancel_work procedure which an application might use in specific situations. Refer to the Knowledge Centre for details. If WLM (workload management) or equivalent is enabled then stored procedures are subject to its configuration as regards resource consumption, and WLM also offers a wlm_cancel_activity routine.
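As a purely hypothetical illustration of the cancel route (the application handle, UOW id, and activity id must first be looked up, e.g. with the MON_GET_ACTIVITY table function; the values below are made up):
-- Cancel one running activity from a separate session.
CALL SYSPROC.WLM_CANCEL_ACTIVITY(12345, 1, 1);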
There is no way to do this in the SP.
Control is not passed to an exception handler defined in the SP upon forcing the caller off the database, cancelling the activity, and some other conditions (log full, for example).
So don't put any flag / status management logic into SP exception handlers.
How is the stored procedure run? From the command line (db2)? If so, on what operating systems?
If, for instance, the command is run from bash on Linux, you can use trap myfunc SIGINT in Bash to run a custom Bash function myfunc if the user presses Ctrl-C. myfunc could then change the job status.
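A minimal sketch of that idea, reusing the connection details from the question's batch file (a hypothetical wrapper; note the caveat above that the already-started procedure may keep running on the server):
#!/bin/bash
# Mark the job ABORTED if the user presses Ctrl-C.
cleanup() {
    db2 "update db2inst1.job_log set job_status='ABORTED' where job_name='abc' and job_date=current date"
    db2 "commit"
    db2 connect reset > /dev/null
    exit 130
}
trap cleanup SIGINT

db2 connect to mydb user db2inst1 using abc123
db2 "call WORK.TEST_SP()"
db2 connect reset > /dev/null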
On Windows, you will have more control if you switch from plain .bat files to PowerShell. Some related Stack Overflow questions:
batch script if user press Ctrl+C do a command before exiting
Gracefully stopping in Powershell
I need to automate a huge interactive Tcl program using Tcl expect.
As I realized, this territory is really dangerous, as I need to extend the already existing mass of code, but I can't rely on errors actually causing the program to fail with a positive exit code as I could in a regular script.
This means I have to think about every possible thing that could go wrong and "expect" it.
What I currently do is use a "die" procedure instead of raising an error in my own code, which automatically exits. But this kind of error condition cannot be caught, and it makes it hard to detect errors, especially in code not written by me, since ultimately most library routines are error-based.
Since I have access to the program's Tcl shell, is it possible to enable fail-on-error?
EDIT:
I am using Tcl 8.3, which is a severe limitation in terms of available tools.
Examples of errors I'd like to automatically exit on:
% puts $a(2)
can't read "a(2)": no such element in array
while evaluating {puts $a(2)}
%
% blublabla
invalid command name "blublabla"
while evaluating blublabla
%
As well as any other error that makes a normal script terminate.
These can bubble up from 10 levels deep within procedure calls.
I also tried redefining the global error command, but not all errors that can occur in Tcl use it. For instance, the above "command not found" error did not go through my custom error procedure.
Since I have access to the program's Tcl shell, is it possible to enable fail-on-error?
Let me try to summarize in my own words: you want to exit from an interactive Tcl shell upon error, rather than having the prompt offered again?
Update
I am using Tcl 8.3, which is a severe limitation in terms of available tools [...] only source patches to the C code.
As you seem to be deep down in that rabbit hole, why not add another source patch?
--- tclMain.c 2002-03-26 03:26:58.000000000 +0100
+++ tclMain.c.mrcalvin 2019-10-23 22:49:14.000000000 +0200
@@ -328,6 +328,7 @@
Tcl_WriteObj(errChannel, Tcl_GetObjResult(interp));
Tcl_WriteChars(errChannel, "\n", 1);
}
+ Tcl_Exit(1);
} else if (tsdPtr->tty) {
resultPtr = Tcl_GetObjResult(interp);
Tcl_GetStringFromObj(resultPtr, &length);
This is untested; the Tcl 8.3.5 sources don't compile for me. But this section of Tcl's internals is comparable to current sources, as checked against my Tcl 8.6 source installation.
For the record
With a stock shell (tclsh), this is a little fiddly, I'm afraid. The following might work for you (though I can imagine cases where it might fail you). The idea is:
to intercept writes to stderr (this is where an interactive shell redirects error messages, before returning to the prompt);
to discriminate between arbitrary writes to stderr and actual error cases, using the global variable ::errorInfo as a sentinel.
Step 1: Define a channel interceptor
oo::class create Bouncer {
    method initialize {handle mode} {
        if {$mode ne "write"} {error "can't handle reading"}
        return {finalize initialize write}
    }
    method finalize {handle} {
        # NOOP
    }
    method write {handle bytes} {
        if {[info exists ::errorInfo]} {
            # This is an actual error;
            # 1) Print the message (as usual), but to stdout
            fconfigure stdout -translation binary
            puts stdout $bytes
            # 2) Call on [exit] to quit the Tcl process
            exit 1
        } else {
            # Non-error write to stderr, proceed as usual
            return $bytes
        }
    }
}
Step 2: Register the interceptor for stderr in interactive shells
if {[info exists ::tcl_interactive] && $::tcl_interactive} {
    chan push stderr [Bouncer new]
}
Once registered, this will make your interactive shell behave like so:
% puts stderr "Goes, as usual!"
Goes, as usual!
% error "Bye, bye"
Bye, bye
Some remarks
You need to be careful about the Bouncer's write method: the error message has already been massaged for the character encoding (hence the fconfigure call).
You might want to put this into a Tcl package or Tcl module, to load the bouncer using package require.
If your program writes to stderr while the errorInfo variable happens to be set (as a left-over), this will trigger an unintended exit.
I would like to execute any bash command. I found Command::new, but I'm unable to execute "complex" commands such as ls ; sleep 1; ls. Moreover, even if I put this in a bash script and execute it, I only get the result at the end of the script (as explained in the process docs). I would like to get the result as soon as the command prints it (and to be able to send input as well), the same way I can in bash.
Command::new is indeed the way to go, but it is meant to execute a program. ls ; sleep 1; ls is not a program, it's instructions for some shell. If you want to execute something like that, you would need to ask a shell to interpret that for you:
Command::new("/usr/bin/sh").args(&["-c", "ls ; sleep 1; ls"])
// your complex command is just an argument for the shell
To get the output, there are two ways:
the output method is blocking and returns the output and the exit status of the command.
the spawn method is non-blocking, and returns a handle containing the child process's stdin, stdout and stderr so you can communicate with the child, and a wait method to wait for it to cleanly exit. Note that by default the child inherits its parent's file descriptors, and you might want to set up pipes instead:
You should use something like:
let child = Command::new("/usr/bin/sh")
    .args(&["-c", "ls ; sleep 1; ls"])
    .stderr(std::process::Stdio::null())   // don't care about stderr
    .stdout(std::process::Stdio::piped())  // set up stdout so we can read it
    .stdin(std::process::Stdio::piped())   // set up stdin so we can write to it
    .spawn().expect("Could not run the command"); // finally run the command
write_something_on(child.stdin);
read(child.stdout);
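To get the output as soon as the command prints it (the original requirement), read the piped stdout incrementally instead of collecting it at the end. A minimal self-contained sketch:
use std::io::{BufRead, BufReader};
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    let mut child = Command::new("/usr/bin/sh")
        .args(&["-c", "ls ; sleep 1; ls"])
        .stdout(Stdio::piped())
        .spawn()?;

    // Each line is printed as soon as the shell writes it, so the
    // second `ls` listing appears only after the one-second sleep.
    let stdout = child.stdout.take().expect("stdout was piped");
    for line in BufReader::new(stdout).lines() {
        println!("{}", line?);
    }

    child.wait()?; // reap the child and collect its exit status
    Ok(())
}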
I want to use pidof to look up a process by name in Tcl. I have used [exec pidof $proc_name], but it always returns an error: child process exited abnormally.
I read somewhere that exec always treats a non-zero return as an error, and pidof returns non-zero even as it prints the process id number. Does anyone know a workaround? Thanks in advance!
The reason I want to use pidof is to see whether that process is running; if not, I will restart it.
The problem is that pidof does strange things with exit codes:
Exit Status
1      At least one program was found with the requested name.
0      No program was found with the requested name.
This interacts badly with exec which treats a non-zero exit code as indicating that it should tell the rest of Tcl that there was an error.
The simplest way of dealing with this is a little extra shell script wrapper. Let's hide it inside a procedure for convenience:
proc pidof {name} {
    exec /bin/bash -c "pidof '$name'; exit \$(( \$? - 1 ))"
}
All that does is subtract 1 from the exit code before it hits back into Tcl.
(You could also fix this using the techniques described in the exec manual but I think it's simpler to fix on the bash side this time.)
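Usage then looks like any other Tcl command (the process name here is just an example):
set pid [pidof sshd]    ;# e.g. "1234"; raises a Tcl error if nothing matched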
I ran into this too, and it ended up causing some issues in the old Linux environment I run in (no bash, and exit code handling was a bit different with busybox).
My solution, which should work anywhere, is similar to what a few others suggested:
proc pidof {name} {
    # Ignore pidof's exit status entirely; just capture what it prints.
    catch {exec -ignorestderr -- pidof $name} pid
    # Return the output only if it is a single integer (a single pid);
    # otherwise return the empty string. [string is entier] needs Tcl 8.6.
    if {[string is entier -strict $pid]} {
        return $pid
    }
}
I am writing a script that will copy Valgrind onto whatever shelf we enter on the command line. The syntax is as follows:
vgrindCopy [shelf number]
For some reason, the files will copy over without any issue, but after the copy completes the following error is observed:
bad spawn_id (process died earlier?)
while executing
"expect "#""
Here is a copy of the relevant code:
function login_shelf {
    expect -c "
        set timeout 15
        spawn $1
        expect \"password:\"
        send \"$PW\r\"
        expect \"#\"
        sleep 1
        exit
    "
}
# login and make the valgrind directory at /sfs/software/shelf/current
set -- /opt/swe/tools/ext/gnu/valgrind-3.7.0/i686-linux2.6/lib/valgrind/*
login_shelf "/opt/corp/projects/shelftools/bin/app rsync -Lau $* $shelf:/shelf/valgrind"
After playing around with the code, I found that if I remove the line "expect \"#\"", the program doesn't copy any of the files over anymore. What's odd as well is that I see the issue when I run the script, but a co-worker does not.
Has anyone had a similar issue and determined the cause? Any help would be greatly appreciated as always!
Your code spawns the rsync, and at the expect \"#\" it waits for rsync to output a #, which it never does; rsync simply finishes and exits, so expect reports the error.
When you remove the expect \"#\", the expect script exits immediately, terminating the rsync before it can copy anything.
Instead of expect \"#\" you should wait for rsync to exit:
expect eof
wait
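Applied to the function from the question, that might look like the following sketch (timeout disabled so a long copy is not cut off; otherwise unchanged):
function login_shelf {
    expect -c "
        set timeout -1
        spawn $1
        expect \"password:\"
        send \"$PW\r\"
        expect eof
        wait
    "
}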