Bad spawn_id while executing expect command - ssh

I am writing a script that will copy Valgrind onto whatever shelf we enter on the command line. The syntax is as follows:
vgrindCopy [shelf number]
For some reason, the files copy over without any issue, but after the copy completes the following error is observed:
bad spawn_id (process died earlier?)
while executing
"expect "#""
Here is a copy of the relevant code:
function login_shelf {
    expect -c "
        set timeout 15
        spawn $1
        expect \"password:\"
        send \"$PW\r\"
        expect \"#\"
        sleep 1
        exit
    "
}
# login and make the valgrind directory at /sfs/software/shelf/current
set -- /opt/swe/tools/ext/gnu/valgrind-3.7.0/i686-linux2.6/lib/valgrind/*
login_shelf "/opt/corp/projects/shelftools/bin/app rsync -Lau $* $shelf:/shelf/valgrind"
After playing around with the code, I found that if I remove the line "expect \"#\"", then the program doesn't copy any of the files over anymore. What's odd as well is that I'm seeing the issue when I run the script, but a co-worker is not.
Has anyone had a similar issue and determined the cause? Any help would be greatly appreciated as always!

Your code spawns the rsync and, at expect \"#\", waits for rsync to output a #, which it never does; rsync simply finishes and exits, so expect reports the error.
When you remove the expect \"#\", the expect script exits immediately, terminating the rsync before it can finish.
Instead of expect \"#\" you should wait for rsync to exit:
expect eof
wait
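Putting that together, a minimal sketch of the corrected function (keeping the original $PW variable and password prompt; the timeout is removed here so a long copy isn't cut off at 15 seconds):

function login_shelf {
    expect -c "
        set timeout -1
        spawn $1
        expect \"password:\"
        send \"$PW\r\"
        expect eof
        wait
    "
}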

Related

Read all lines at the same time individually - Solaris ksh

I need some help with a script. Solaris 10 and ksh.
I have a file called /temp.list with this content:
192.168.0.1
192.168.0.2
192.168.0.3
So, I have a script which reads this list and executes some commands using each line's value:
FILE_TMP="/temp.list"
while IFS= read line
do
ping $line
done < "$FILE_TMP"
It works, but it executes the command on line 1, and only when that finishes does it move on to line 2, and so on until the end. I would like to find a way to execute the ping command on every line of the list at the same time. Is there a way to do it?
Thank you in advance!
Marcus Quintella
As Ari suggested, googling "ksh multithreading" will produce a lot of ideas/solutions.
A simple example:
FILE_TMP="/temp.list"
while IFS= read line
do
ping $line &
done < "$FILE_TMP"
The trailing '&' says to kick the ping command off in the background, allowing loop processing to continue while the ping command is running in the background.
'course, this is just the tip of the proverbial iceberg, as you now need to consider (see the sketch after this list):
- multiple ping commands are going to be dumping output to stdout (ie, you're going to get a mish-mash of ping output in your console), so you'll need to give some thought as to what to do with multiple streams of output (eg, redirect to a common file? redirect to separate files?)
- you need to have some idea as to how you want to go about managing and (possibly) terminating commands running in the background [ see jobs, ps, fg, bg, kill ]
- if running in a shell script you'll likely find yourself wanting to suspend the main shell script processing until all background jobs have completed [ see wait ]
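A minimal sketch pulling those three points together, assuming per-host output files under /tmp are acceptable:

FILE_TMP="/temp.list"
while IFS= read line
do
    # run each ping in the background, with its own output file
    ping $line > /tmp/ping.$line.out 2>&1 &
done < "$FILE_TMP"
wait   # suspend here until every background ping has completed
echo "all pings completed"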

Declare bash variables inside sql EOF

How do I declare a variable in a bash command inside the sqlplus heredoc below? See the lines marked "Doesn't work, why?".
I thought we could run almost any bash statement by putting ! or host at the front of a line:
#!/bin/bash
sqlplus scott/tiger@orcl << EOF
! export v10="Hi" Doesn't work, why?
! echo $v10 Doesn't work, why?
! echo "Done" Works perfectly and also other bash commands
select * from dept; Works perfectly
exit
EOF
Thank you
What @jordanm says "probably" is exactly what is happening. When you specify a host command from within sqlplus, a separate shell process is spawned, the command is executed by that process, then that process terminates and control returns to sqlplus. Any environment variables set in that child shell process are good only within it, so when it terminates, they are gone.
As for your specific lines that "work" and "don't work": "export v10="Hi"" does work, but there is no stdout display from the export command, and, as explained, that variable v10 ceases to exist once the child process completes and control returns to sqlplus. The "echo $v10" also works, but since it runs in a new shell process, that process has no value for $v10, so there is nothing to echo.
What are you trying to accomplish by setting environment variables from within sqlplus?
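If the goal is just to get a value into those host commands, one workaround (a sketch, nothing sqlplus-specific) is to set the variable in the parent shell: because the EOF delimiter is unquoted, bash substitutes $v10 into the heredoc text before sqlplus or any child shell ever runs:

#!/bin/bash
# set in the parent shell; bash expands $v10 in the unquoted heredoc below
v10="Hi"
sqlplus scott/tiger@orcl << EOF
! echo $v10
select * from dept;
exit
EOF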
Found it; all I had to do was:
<< EOF
whenever sqlerror exit failure rollback
whenever oserror exit failure rollback
@scriptname.sql
EXIT
EOF

how to see if a process by name is running in tcl

I want to use pidof with a process given by name in tcl. I have used [exec pidof $proc_name], but it always returns an error: child process exited abnormally.
I read somewhere that exec always treats a non-zero return code as an error, even though pidof returns the process id number. Does anyone know if there is a workaround? Thanks in advance!
The reason I want to use pidof is that I want to see if the process is running; if not, I will restart it.
The problem is that pidof does strange things with exit codes:
Exit Status
1      At least one program was found with the requested name.
0      No program was found with the requested name.
This interacts badly with exec which treats a non-zero exit code as indicating that it should tell the rest of Tcl that there was an error.
The simplest way of dealing with this is a little extra shell script wrapper. Let's hide it inside a procedure for convenience:
proc pidof {name} {
    exec /bin/bash -c "pidof '$name'; exit \$(( \$? - 1 ))"
}
All that does is subtract 1 from the exit code before it hits back into Tcl.
(You could also fix this using the techniques described in the exec manual but I think it's simpler to fix on the bash side this time.)
I ran into this too, and the bash wrapper ended up causing some issues in the old Linux environment I run in (there is no bash, and exit-code handling was a bit different with busybox).
My solution that should work anywhere would be similar to what a few suggested:
proc pidof {name} {
    # exec raises an error whenever pidof exits non-zero, so catch it
    # and keep whatever was written to stdout (or the error message)
    catch {exec -ignorestderr -- pidof $name} pid
    # only return the result if it really is a number (a pid),
    # not an error message
    if {[string is entier -strict $pid]} {
        return $pid
    }
}
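With this catch-based version, a missing process simply yields an empty string, which makes the original restart-if-not-running goal straightforward; myproc and /path/to/daemon below are placeholders:

# hypothetical usage: restart the process when pidof finds nothing
set pid [pidof myproc]
if {$pid eq ""} {
    exec /path/to/daemon &
}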

hide error messages in dcl script

I have a test script I'm running that generates some errors, shown below; I expect these errors. Is there any way I can prevent them from showing on the screen, however? I use the
$ write sys$output
to display if there is an expected error.
I tried to use
$ DEFINE SYS$ERROR ERROR.LOG
but this then redirected my entire error output to that log for the rest of the script. If this is the correct way to handle it, can I deassign it at the end of my script somehow?
[error example]
%DCL-E-OPENIN, error opening TEST$DISK:[AAA]NOTTHERE.TXT; as input
-RMS-E-FNF, file not found
%DCL-E-OPENIN, error opening TEST$DISK:[AAA]NOTTHERE.TXT; as input
-RMS-E-FNF, file not found
%DCL-W-UNDFIL, file has not been opened by DCL - check logical name
DEFINE/USER creates a logical name that disappears when the next image exits.
So if you use that just before a command just to protect that command, then fine.
Otherwise I would prefer SET MESSAGE to control the output.
And of course you want to grab $STATUS and verify it after the command, checking for success or for the expected error, and reporting any unexpected error.
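For example, a minimal sketch combining DEFINE/USER with a $STATUS check (reusing the NOTTHERE.TXT file from the question; SET NOON is an assumption here so the procedure itself doesn't abort on the expected error):

$ SET NOON                     ! don't abort the procedure on the expected error
$ DEFINE/USER SYS$ERROR NL:    ! /USER: the definition vanishes after the next image exits
$ TYPE TEST$DISK:[AAA]NOTTHERE.TXT
$ status = $STATUS
$ IF .NOT. status THEN WRITE SYS$OUTPUT "Got the expected error"
$ SET ON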
Better still... if you expect certain error conditions to occur,
then why not test for them?
For example:
$ file = F$SEARCH("TEST$DISK:[AAA]NOTTHERE.TXT")
$ IF file.NES."" THEN TYPE 'file'
Cheers,
Hein
To suppress error messages inside a script, try this command:
$ DEFINE/USER SYS$ERROR NL:
NL: is a null device, so you don't see any error messages displayed on your terminal.
Good luck!
This works interactively and in batch.
$ SET MESSAGE /NOTEXT /NOSEV /NOFAC /NOID
$ <DCL_Command>
$ SET MESSAGE /TEXT /SEV /FAC /ID

proc_open interaction

Here's what I'm trying to achieve: open a shell (korn or bash, doesn't matter) and, from that shell, open an ssh connection (ssh user@host). At some point it is likely to happen that I will be prompted either for a password or to confirm whether I'm sure I want to connect (offending keys).
Before anyone asks: yes, I am aware there is a plugin for ssh2 exec calls, but the servers I'm working on don't support it, and are unlikely to do so.
Here's what I've tried so far:
$desc = array(array('pipe','r'),array('pipe','w'));//used in all example code
$p = proc_open('ssh user@host',$desc,$pipes);
if(!is_resource($p)){ die('#!#$%');}//will omit this line from now on
sleep(1);//omitting this,too but it's there every time I need it
Then I tried to read the console output with stream_get_contents($pipes[1]) to see what I have to send next (either the password or yes), or else return 'connection failed: '.stream_get_contents($pipes[1]) and proc_close($p).
This gave me the following error:
Pseudo-terminal will not be allocated because stdin is not a terminal.
So, I thought ssh was called in the php:// io-stream context, which seems a plausible explanation of the above error.
Next: I thought about my first SO question and decided it might be a good idea to open a bash/ksh shell first:
$p = proc_open('bash',$desc,$pipes);
And take it from there, but I got the exact same error message; only this time the script stopped running, but ssh did run. So I got hopeful, then felt stupid and, eventually, desperate:
$p=proc_open('bash && ssh user@host',$desc,$pipes);
After a few seconds wait, I got the following error:
PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 133693440 bytes)
The Call Stack keeps bringing up the stream_get_contents line, even in my last desperate attempt:
#!/path/to/bin/php -n
<?php
$p = proc_open('bash && ssh user@host',array(array('pipe','r'),array('pipe','w')),$ps);
if (!is_resource($p))
{
    die('FFS');
}
usleep(10);
fwrite($ps[0],'yes'."\n");
fflush($ps[0]);
usleep(20);
fwrite($ps[0],'password'."\n");
fflush($ps[0]);
usleep(20);
fwrite($ps[0],'whoami'."\n");
fflush($ps[0]);
usleep(2);
$msg = stream_get_contents($ps[1]);
fwrite($ps[0],'exit'."\n");
fclose($ps[0]);
fclose($ps[1]);
proc_close($p);
?>
I know, it's a mess, a lot of fflush and redundancy, but the point is: I know this connection will first prompt me about offending keys, and then ask for a password. My guess is the stream in $pipes[1] holds the ssh connection, hence its content is huge. What I need, then, is a pipe inside a pipe... is this even possible? I must be missing something; what good is a pipe if this isn't possible...
My guess is that the proc_open command is wrong to begin with (error: Broken pipe). But I really can't see any other way around the first error... any thoughts? Or follow-up questions, if the above rant isn't at all clear (which it probably isn't).
Before anyone asks: yes, I am aware there is a plugin for ssh2 exec calls, but the servers I'm working on don't support it, and are unlikely to do so.
There are actually two: the PECL module, which is a PITA that most servers don't have installed anyway, and phpseclib, a pure PHP SSH2 implementation. An example of its use:
<?php
include('Net/SSH2.php');
$ssh = new Net_SSH2('www.domain.tld');
if (!$ssh->login('username', 'password')) {
exit('Login Failed');
}
echo $ssh->exec('pwd');
echo $ssh->exec('ls -la');
?>