get return code from plink? - scripting

In a DOS batch script, I'm running a single command on a remote (also Windows) computer using plink. Formerly, this command was only run on the local machine and relied on the return code to determine success. Is there a way to easily get this information back through plink?

That's not possible with plink. The current consensus is to have the remote script echo its exit code to a log file, then use pscp to transfer the log file to the local machine.
See http://fixunix.com/ssh/74235-errorlevel-capturing-plink.html.
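As a rough sketch of that workaround (the host name, account, remote command, and log file path here are illustrative assumptions, not from the original question):
rem Run the remote command and write its exit code to a file on the remote side
"C:\Program Files (x86)\PuTTY\plink.exe" user@myserver "some_command; echo $? > /tmp/remote_rc.txt"
rem Copy the file back and read the exit code into a variable
"C:\Program Files (x86)\PuTTY\pscp.exe" user@myserver:/tmp/remote_rc.txt remote_rc.txt
set /p REMOTE_RC=<remote_rc.txt
if "%REMOTE_RC%"=="0" (echo remote command succeeded) else (echo remote command failed with code %REMOTE_RC%)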

With plink 0.66:
C:\Code>echo Y | "C:\Program Files (x86)\PuTTY\plink.exe" bob@myserver exit 42
C:\Code>echo %ERRORLEVEL%
42
Also, regarding @John Wiersba's concern about when a connection cannot be made, this appears to be fixed:
C:\CodeMisc>echo Y | "C:\Program Files (x86)\PuTTY\plink.exe" bob@garbageservername exit 42
Unable to open connection:
Host does not exist
C:\Code>echo %ERRORLEVEL%
1
Also note the piping of echo Y ... this enables you to accept the server fingerprint automatically (a little dangerous to say the least ... but our login server is load balanced, so you are always getting different fingerprints :( )
However, as @LeonBloy notes, plink still has some connection conditions which return a zero exit code. If you know your exit code range and you don't have a good way of communicating back to Windows via a file, you could either add 3 to the exit code (if you know the exit code will never be 253-255) or apply a bitwise OR (in bash, I'd suggest exit $(($?|128))).
Or, if you don't care about the exact exit code, you could return 2 for success and zero for failure. Thus a non-two exit code would indicate failure. In bash this would be: exit $((($?==0) << 1)). This would be by far the most robust general purpose solution, but you should make sure your exit code is logged for debuggability.
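On the Windows side, checking for that convention might look like this (a sketch; the server name and remote command are placeholders):
"C:\Program Files (x86)\PuTTY\plink.exe" user@myserver "remote_command; exit $(( ($? == 0) << 1 ))"
if %ERRORLEVEL% EQU 2 (echo success) else (echo failure, exit code %ERRORLEVEL%)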

Related

while [[ condition ]] stalls on loop exit

I have a problem with ksh in that a while loop is failing to obey the "while" condition. I should add now that this is ksh88 on my client's Solaris box. (That's a separate problem that can't be addressed in this forum. ;) I have seen Lance's question and some similar but none that I have found seem to address this. (Disclaimer: NO I haven't looked at every ksh question in this forum)
Here's a very cut down piece of code that replicates the problem:
#!/usr/bin/ksh
#
go=1
set -x
tail -0f loop-test.txt | while [[ $go -eq 1 ]]
do
    read lbuff
    set $lbuff
    nwords=$#
    printf "Line has %d words <%s>\n" $nwords "${lbuff}"
    if [[ "${lbuff}" = "0" ]]
    then
        printf "Line consists of %s; time to absquatulate\n" $lbuff
        go=0    # Violate the WHILE condition to get out of loop
    fi
done
printf "\nLooks like I've fallen out of the loop\n"
exit 0
The way I test this is:
Run loop-test.sh in background mode
In a different window I run commands like "echo some nonsense >>loop-test.txt" (w/o the quotes, of course)
When I wish to exit, I type "echo 0 >>loop-test.txt"
What happens? It indeed sets go=0 and displays the line:
Line consists of 0; time to absquatulate
but does not exit the loop. To break out I append one more line to the txt file. The loop does NOT process that line and just falls out of the loop, issuing that "fallen out" message before exiting.
What's going on with this? I don't want to use "break" because in the actual script, the loop is monitoring the log of a database engine and the flag is set when it sees messages that the engine is shutting down. The actual script must still process those final lines before exiting.
Open to ideas, anyone?
Thanks much!
-- J.
OK, that flopped pretty quick. After reading a few other posts, I found an answer given by dogbane that sidesteps my entire pipe-to-while scheme. His is the second answer to a question (from 2013) where I see neeraj is using the same scheme I'm using.
What was wrong? The pipe-to-while has always worked for input that will end, like a file or a command with a distinct end to its output. However, from a tail command, there is no distinct EOF. Hence, the while-in-a-subshell doesn't know when to terminate.
Dogbane's solution: Don't use a pipe. Applying his logic to my situation, the basic loop is:
while read line
do
# put loop body here
done < <(tail -0f ${logfile})
No subshell, no problem.
Caveat about that syntax: there must be a space between the two < operators; otherwise it looks like a here document (heredoc) with bad syntax.
Er, one more catch: The syntax did not work in ksh, not even in the mksh (under cygwin) which emulates ksh93. But it did work in bash. So my boss is gonna have a good laugh at me, 'cause he knows I dislike bash.
So thanks MUCH, dogbane.
-- J
After articulating the problem and sleeping on it, the reason for the described behavior came to me: After setting go=0, the control flow of the loop still depends on another line of data coming in from STDIN via that pipe.
And now that I have realized the cause of the weirdness, I can speculate on an alternative way of reading from the stream. For the moment I am thinking of the following solution:
Open the input file as STDIN (Need to research the exec syntax for that)
When the condition occurs, close STDIN (Again, need to research the syntax for that)
It should then be safe to use the more intuitive while read lbuff at the top of the loop.
I'll test this out today and post the result. I hope someone else can benefit from the method (if it works).
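For what it's worth, a minimal sketch of that idea in ksh (the exec redirection syntax is the part being researched above; whether reading the file this way keeps up with new lines the way tail -0f does is exactly what remains to be tested):
exec 0< loop-test.txt            # open the log file as STDIN
while read lbuff
do
    # ... process $lbuff as before ...
    if [[ "${lbuff}" = "0" ]]
    then
        exec 0<&-                # close STDIN; the next read fails and the loop ends
    fi
done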

how to see if a process by name is running in tcl

I want to use pidof to find the PID of a process given by name in Tcl. I have used [exec pidof $proc_name], but it always returns an error: child process exited abnormally.
I read somewhere that exec always treats a non-zero return as an error, while pidof returns the process id number. Does anyone know if there is a workaround? Thanks in advance!
The reason I want to use pidof is that I want to see if that process is running; if not, I will restart it.
The problem is that pidof does strange things with exit codes:
Exit Status
0   At least one program was found with the requested name.
1   No program was found with the requested name.
This interacts badly with exec which treats a non-zero exit code as indicating that it should tell the rest of Tcl that there was an error.
The simplest way of dealing with this is a little extra shell script wrapper. Let's hide it inside a procedure for convenience:
proc pidof {name} {
    exec /bin/bash -c "pidof '$name'; exit \$(( \$? - 1 ))"
}
All that does is subtract 1 from the exit code before it hits back into Tcl.
(You could also fix this using the techniques described in the exec manual but I think it's simpler to fix on the bash side this time.)
I ran into this too, and it ended up causing some issues in the old Linux environment I run in (no bash, and exit code handling was a bit different with busybox).
My solution that should work anywhere would be similar to what a few suggested:
proc pidof {name} {
    catch {exec -ignorestderr -- pidof $name} pid
    if {[string is entier -strict $pid]} {
        return $pid
    }
}
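For the original goal (restart the process if it is not running), usage of that proc might look like this; the process name and restart command are placeholders:
if {[pidof myserver] eq ""} {
    # nothing returned, so the process is not running; restart it in the background
    exec /path/to/myserver &
}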

Bad spawn_id while executing expect command

I am writing a script that will copy Valgrind onto whatever shelf that we enter on the command line. The syntax is as follows:
vgrindCopy [shelf number]
For some reason, the files will copy over without any issue, but after the copy completes the following error is observed:
bad spawn_id (process died earlier?)
while executing
"expect "#""
Here is a copy of the relevant code:
function login_shelf {
    expect -c "
        set timeout 15
        spawn $1
        expect \"password:\"
        send \"$PW\r\"
        expect \"#\"
        sleep 1
        exit
    "
}
# login and make the valgrind directory at /sfs/software/shelf/current
set -- /opt/swe/tools/ext/gnu/valgrind-3.7.0/i686-linux2.6/lib/valgrind/*
login_shelf "/opt/corp/projects/shelftools/bin/app rsync -Lau $* $shelf:/shelf/valgrind"
After playing around with the code, I found that if I remove the line "expect \"#\"", then the program doesn't copy any of the files over anymore. What's odd as well is that I'm seeing the issue when I run the script, but a co-worker is not.
Has anyone had a similar issue and determined the cause? Any help would be greatly appreciated as always!
Your code is spawning the rsync and, at the expect \"#\", is waiting for rsync to output a #, which it never does; rsync eventually exits, and expect reports the bad spawn_id error.
When you remove the expect \"#\", the expect script exits immediately, terminating the rsync.
Instead of expect \"#\" you should wait for rsync to exit:
expect eof
wait
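Applied to the function above, that change might look like this (same $1 and $PW placeholders as in the original; setting the timeout to -1 is an assumption that you want to wait however long the copy takes):
function login_shelf {
    expect -c "
        spawn $1
        expect \"password:\"
        send \"$PW\r\"
        set timeout -1
        expect eof
        wait
    "
}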

proc_open interaction

Here's what I'm trying to achieve: open a shell (korn or bash, doesn't matter), and from that shell open an ssh connection (ssh user@host). At some point it is likely that I will be prompted for either a password or asked whether or not I'm sure I want to connect (offending keys).
Before anyone asks: yes, I am aware there is a plugin for ssh2 exec calls, but the servers I'm working on don't support it, and are unlikely to do so.
Here's what I've tried so far:
$desc = array(array('pipe','r'),array('pipe','w'));//used in all example code
$p = proc_open('ssh user@host',$desc,$pipes);
if(!is_resource($p)){ die('#!#$%');}//will omit this line from now on
sleep(1);//omitting this,too but it's there every time I need it
Then I tried to read the console output (stream_get_contents($pipes[1])) to see what I have to pass next (either the password, yes, or bail out with return 'connection failed: '.stream_get_contents($pipes[1]) and proc_close($p)).
This gave me the following error:
Pseudo-terminal will not be allocated because stdin is not a terminal.
So, I thought ssh was called in the php:// io-stream context, which seems a plausible explanation of the above error.
Next: I thought about my first SO question and decided it might be a good idea to open a bash/ksh shell first:
$p = proc_open('bash',$desc,$pipes);
And take it from there, but I got the exact same error message, only this time, the script stopped running but ssh did run. So I got hopeful, then felt stupid and, eventually, desperate:
$p=proc_open('bash && ssh user@host',$desc,$pipes);
After a few seconds wait, I got the following error:
PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 133693440 bytes)
The Call Stack keeps bringing up the stream_get_contents line, even in my last desperate attempt:
#!/path/to/bin/php -n
<?php
$p = proc_open('bash && ssh user@host',array(array('pipe','r'),array('pipe','w')),$ps);
if (!is_resource($p))
{
    die('FFS');
}
usleep(10);
fwrite($ps[0],'yes'."\n");
fflush($ps[0]);
usleep(20);
fwrite($ps[0],'password'."\n");
fflush($ps[0]);
usleep(20);
fwrite($ps[0],'whoami'."\n");
fflush($ps[0]);
usleep(2);
$msg = stream_get_contents($ps[1]);
fwrite($ps[0],'exit'."\n");
fclose($ps[0]);
fclose($ps[1]);
proc_close($p);
?>
I know it's a mess, with a lot of fflush and redundancy, but the point is: I know this connection will first prompt me for offending keys, and then ask for a password. My guess is the stream in $pipes[1] holds the ssh connection, hence its content is huge. What I need then is a pipe inside a pipe... is this even possible? I must be missing something; what good is a pipe if this isn't possible...
My guess is the proc_open command is wrong to begin with (error: Broken pipe). But I really can't see any other way around the first error... any thoughts? Or follow-up questions if the above rant isn't at all clear (which it probably isn't).
Before anyone asks: yes, I am aware there is a plugin for ssh2 exec
calls, but the servers I'm working on don't support it, and are
unlikely to do so.
There are actually two: the PECL module, which is a PITA that most servers don't have installed anyway, and phpseclib, a pure PHP SSH2 implementation. An example of its use:
<?php
include('Net/SSH2.php');
$ssh = new Net_SSH2('www.domain.tld');
if (!$ssh->login('username', 'password')) {
    exit('Login Failed');
}
echo $ssh->exec('pwd');
echo $ssh->exec('ls -la');
?>

127 Return code from $?

What is the meaning of return value 127 from $? in UNIX?
Value 127 is returned by /bin/sh when the given command is not found within your PATH system variable and it is not a built-in shell command. In other words, the system doesn't understand your command, because it doesn't know where to find the binary you're trying to call.
Generally it means:
127 - command not found
but it can also mean that the command is found, but a library that is required by the command is NOT found.
For example, run: $ caat
The error message will be:
bash: caat: command not found
Now check with echo $?
A shell convention is that a successful executable should exit with the value 0. Anything else can be interpreted as a failure of some sort, on the part of bash or the executable that you just ran. See also $PIPESTATUS and the EXIT STATUS section of the bash man page:
For the shell's purposes, a command which exits with a zero exit status has succeeded. An exit status of zero indicates success. A non-zero exit status indicates failure. When a command terminates on a fatal signal N, bash uses the value of 128+N as the exit status.
If a command is not found, the child process created to execute it returns a status of 127. If a command is found but is not executable, the return status is 126.
If a command fails because of an error during expansion or redirection, the exit status is greater than zero.
Shell builtin commands return a status of 0 (true) if successful, and non-zero (false) if an error occurs while they execute. All builtins return an exit status of 2 to indicate incorrect usage.
Bash itself returns the exit status of the last command executed, unless a syntax error occurs, in which case it exits with a non-zero value. See also the exit builtin command below.
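A quick illustration of those rules at an interactive bash prompt (the command and file names here are made up, and the exact messages may vary by system):
$ nosuchcommand
bash: nosuchcommand: command not found
$ echo $?
127
$ touch not_executable; ./not_executable
bash: ./not_executable: Permission denied
$ echo $?
126
$ sleep 100 &
$ kill -KILL %1; wait %1
$ echo $?
137
(137 = 128 + 9, the SIGKILL signal number)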
It has no special meaning, other than that the last process to exit did so with an exit status of 127.
However, it is also used by bash (assuming you're using bash as a shell) to tell you that the command you tried to execute couldn't be executed (i.e. it couldn't be found). It's unfortunately not immediately deducible whether the process exited with status 127, or whether it couldn't be found.
EDIT:
Not immediately deducible, except for the output on the console, but this is stack overflow, so I assume you're doing this in a script.
If you're trying to run a program using a scripting language, you may need to include the full path of the scripting language and the file to execute. For example:
exec('/usr/local/bin/node /usr/local/lib/node_modules/uglifycss/uglifycss in.css > out.css');
This error is also at times deceiving: it says the file is not found even though the file is indeed present. It could be because of invalid, unreadable special characters in the file, possibly caused by the editor you are using. This link might help you in such cases.
-bash: ./my_script: /bin/bash^M: bad interpreter: No such file or directory
The best way to find out if it is this issue is to simply replace the entire file's contents with a single echo statement and verify whether the same error is thrown.
If the IBM mainframe JCL has some extra characters or numbers at the end of the name of the Unix script being called, then it can throw such an error.
In addition to the given answers, note that running a script file with incorrect end-of-line characters could also result in a 127 exit code if you use /bin/sh as your shell.
As an example, if you run a shell script with CRLF end-of-line characters on a UNIX-based system under /bin/sh, it is possible to encounter errors like the following, which I got after running my script named my_test.sh:
$ ./my_test.sh
sh: 2: ./my_test.sh: not found
$ echo $?
127
As a note, using /bin/bash, I got a 126 exit code, which is in accordance with the gnu.org documentation about bash:
If a command is not found, the child process created to execute it returns a status of 127. If a command is found but is not executable, the return status is 126.
Finally, here is the result of running my script in /bin/bash:
arman@Debian-1100:~$ ./my_test.sh
-bash: ./my_test.sh: /bin/bash^M: bad interpreter: No such file or directory
arman@Debian-1100:~$ echo $?
126
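If the CRLF endings are the problem, one common fix is to strip the carriage returns from the script itself (this assumes dos2unix or GNU sed is available):
dos2unix my_test.sh
# or equivalently
sed -i 's/\r$//' my_test.sh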
To stop Git for Windows from checking files out with CRLF line endings in the first place:
Go to C:\Program Files\Git\etc
Open gitconfig with notepad
Change
[core]
    autocrlf = true
to
[core]
    autocrlf = false
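The same change can also be made from the command line instead of editing the file (assuming you want the setting globally):
git config --global core.autocrlf false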