Expect script does not work under crontab - automation

I have an expect script which I need to run every 3 minutes on my management node to collect tx/rx values for each port attached to a DCX Brocade SAN switch, using the command portperfshow.
Each time I try to use crontab to execute the script every 3 minutes, the script does not work!
My expect script starts with #!/usr/bin/expect -f and I am calling the script using the following syntax under cron:
3 * * * * /usr/bin/expect -f /root/portsperfDCX1/collect-all.exp sanswitchhostname
However, when I execute the script (not under cron) it works as expected:
root# ./collect-all.exp sanswitchhostname
works just fine.
Please can someone help? Thanks.
The script collect-all.exp is:
#!/usr/bin/expect -f
#Time and Date
set day [timestamp -format %d%m%y]
set time [timestamp -format %H%M]
#logging
set LogDir1 "/FPerf/PortsLogs"
set timeout 5
set ipaddr [lrange $argv 0 0]
set passw "XXXXXXX"
if { $ipaddr == "" } {
puts "Usage: <script.exp> <ip address>\n"
exit 1
}
spawn ssh admin@$ipaddr
expect -re "password"
send "$passw\r"
expect -re "admin"
log_file "$LogDir1/$day-portsperfshow-$time"
send "portperfshow -tx -rx -t 10\r"
expect timeout "\n"
send \003
log_file
send -- "exit\r"
close

I had the same issue, except that my script was ending with
interact
Finally I got it working by replacing it with these two lines:
expect eof
exit

Changing interact to expect eof worked for me!
I needed to remove the exit part, because I had more statements in the bash script after the expect line (I call expect from inside a bash script).

There are two key differences between a program that is run normally from a shell and a program that is run from cron:
Cron does not populate (many) environment variables. Notably absent are TERM, SHELL and HOME, but that's just a small proportion of the long list that will not be defined.
Cron does not set up a current terminal, so /dev/tty doesn't resolve to anything. (Note, programs spawned by Expect will have a current terminal.)
With high probability, any difficulties will come from these, especially the first. To fix, you need to save all your environment variables in an interactive session and use these in your expect script to repopulate the environment. The easiest way is to use this little expect script:
unset -nocomplain ::env(SSH_AUTH_SOCK) ;# This one is session-bound anyway
puts [list array set ::env [array get ::env]]
That will write out a single very long line which you want to put near the top of your script (or at least before the first spawn). Then see if that works.
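For illustration, the generated line has this shape (the values here are hypothetical; a real one is much longer):
array set ::env {HOME /root PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin SHELL /bin/bash TERM xterm LANG en_US.UTF-8}
When the script then runs under cron, this restores the environment that programs started with spawn will inherit.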

Jobs run by cron are not considered login shells, and thus don't source your .bashrc, .bash_profile, etc.
If you want that behavior, you need to add it explicitly to the crontab entry like so:
$ crontab -l
0 13 * * * bash -c '. .bash_profile; etc ...'
$
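Applied to the cron job from the question above (paths as given there; whether ~/.bash_profile actually sets what the script needs is an assumption), that would look like:
*/3 * * * * bash -c '. ~/.bash_profile; /usr/bin/expect -f /root/portsperfDCX1/collect-all.exp sanswitchhostname'
Note the schedule as well: */3 * * * * runs every 3 minutes, whereas 3 * * * * as written in the question runs once per hour, at minute 3.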

Related

Is it possible to get pid of first command (not background)? Zsh, not bash

I have in zsh:
(sleep 100;program1 & another program & another)&
How do I get the PID of the sleep process? (I need to kill it.)
$! returns the PID of the whole backgrounded subshell, not of the sleep process.
jobs -p is also useless here.
killall -9 sleep is useless as well, because it will kill all sleep processes, not only this one.
One option is to print the pid from sleep from within the set of commands. This can be done by backgrounding the sleep process, getting the pid with $!, and then using wait to block until it exits.
% (sleep 100 &; print sleep_pid:$!; wait $!; print cmd1 && print cmd2) &
[1] 18055
sleep_pid:18056
% kill 18056
cmd1
cmd2
[1] + done ( sleep 100 & print sleep_pid:$!; wait $!; print cmd1 && print cmd2; )
%
If you need to access the pid programmatically, you can write it to a temp file or a named pipe.
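A minimal sketch of the temp-file variant (the file name is arbitrary):
% (sleep 100 & print $! > /tmp/sleep.pid; wait $!; print cmd1 && print cmd2) &
% kill "$(< /tmp/sleep.pid)"
The kill can be run from any shell, since the PID now lives in the file rather than in the subshell's variables.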
I just found one solution:
sleep_pid=`pstree -p $!|grep -o "[[:digit:]]*"|tail -1`
Even if the shell you're using is Bourne-style and thus supports the exec builtin with these semantics, you generally shouldn't try to avoid using sh -c (or an equivalent) to create a new, separate shell process for this purpose, because:
Once the shell has become myCommand, there is no shell waiting to run subsequent commands. sh -c 'echo $$; exec myCommand; foo' would not be able to attempt to run foo after replacing itself with myCommand. Unless you're writing a script that runs this as its last command, you can't just use echo $$; exec myCommand in a shell where you are running other commands.
You cannot use a subshell for this. (echo $$; exec myCommand) may be syntactically nicer than sh -c 'echo $$; exec myCommand', but when you run $$ inside ( ), it gives the PID of the parent shell, not of the subshell itself. Yet it is the subshell's PID that will be the PID of the new command. Some shells provide their own non-portable mechanisms for finding the subshell's PID, which you could use for this. In particular, in Bash 4, (echo $BASHPID; exec myCommand) does work.
Finally, note that some shells will perform an optimization where they run a command as if by exec (i.e., they forgo forking first) when it is known that the shell will not need to do anything afterward. Some shells try to do this whenever it is the last command to be run, others will only do it when there are no other commands before or after the command, and others will not do it at all. The effect is that if you forget to write exec and just use sh -c 'echo $$; myCommand', it will sometimes give you the right PID on some systems with some shells. I recommend never relying on such behavior, and instead always including exec when that's what you need.
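A quick demonstration of the pattern, with sleep standing in for myCommand and illustrative PIDs:
$ sh -c 'echo $$; exec sleep 300' &
[1] 12345
12345
$ ps -o pid=,comm= -p 12345
12345 sleep
$ kill 12345
The PID printed by echo $$ and the PID now running sleep are the same process, because exec replaced the shell without forking.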
Before I can run myCommand, I need to set a number of environment variables in my bash script. Will these carry over to the environment in which the exec command is running? – user5359531, May 11 '18 at 15:41
Looks like my environment does carry over into the exec command. However, this approach does not work when myCommand starts other processes, which are the ones you need to work with; when I issue a kill -INT where pid was obtained this way, the signal does not reach the sub-processes started by myCommand, whereas if I run myCommand in the current session and Ctrl+C, the signals propagate correctly. – user5359531, May 11 '18 at 16:43
I tried this, but the pid of the myCommand process seems to be the pid output by echo $$ +1. Am I doing something wrong? – crobar, Aug 28 '18 at 10:13
My command looks like this: sh -c 'echo $$; exec /usr/local/bin/mbdyn -f "input.file" -o "/path/to/outputdir" > "command_output.txt" 2>&1 &' – crobar, Aug 28 '18 at 10:51
This is brilliant, but it's not working for me when I try to get the echoed value into a variable so I can actually use it later to kill the process, e.g. PID=$(sh -c 'echo $$; exec myCommand') just hangs, whereas if I remove the PID=$(...) wrapper it displays the PID and continues immediately!
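That hang is expected: $( ) reads until end-of-file on the pipe, and the write end stays open for as long as myCommand runs, because the exec'd process inherits it as its stdout. In bash, one workaround (a sketch; myCommand is a stand-in) is to take just the first line via process substitution and send the command's own output elsewhere:
read -r PID < <(sh -c 'echo $$; exec myCommand >/dev/null 2>&1')
echo "Started with PID $PID"
# ... later ...
kill "$PID"
read returns as soon as the PID line arrives, while myCommand keeps running.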

Not able to establish Oracle SQL session from within a BASH script

#!/bin/bash
#Oracle DB Info for NEXT
HOST="1.2.3.4"
PORT="5678"
SERVICE="MYDB"
DB_USER=$(whoami)
DB_PASS=$(base64 -d ~/.passwd)
DB_SCHEMA="my_db"
#Section for all of our functions.
function SQLConnection(){
sqlplus "$DB_USER"/"$DB_PASS"#"$HOST":"$PORT"/"$SERVICE"
}
function Connected(){
SQLConnection <<EOF
select sys_context('USERENV','SERVER_HOST') from dual;
EOF
}
function GetJMS(){
SQLConnection <<EOF
set echo on timing on lines 200 pages 100
select pd.destination from ${DB_SCHEMA}.pd_notification pd where pd.org_id = '$ORGID';
EOF
}
TODAY=$(date +"%A %B %d, %Y")
read -r -p $'\n\nWhat is the ORG ID? ' ORGID
read -r -p $'\n\nWhat is the REMOTE QUEUE MANAGER NAME? ' RQM
read -r -p $'\n\nWhat is the IP address of the REMOTE QUEUE MANAGER? ' CONN
read -r -p $'\n\nWhat is the PORT of the REMOTE QUEUE MANAGER? ' PORT
echo -en "* $(whoami)\n* $TODAY\n* MQ Setup $ORGID\n\nDEFINE +\n\tCHANNEL('$RQM.LQML') +\n\tCHLTYPE(SDR) +\n\tCONNAME('$CONN($PORT)') +\n\tXMITQ('BUF.2.$ORGID.XMQ')\n\tCHAUTH(TLS_RSA_WITH_AES_256_CBC_SHA256)\n\nDEFINE +\n\tCHANNEL('LQML.$RQM') +\n\tCHLTYPE(RCVR) +\n\tTRPTYPE(TCP)\n\nDEFINE +\n\tQLOCAL('$RQM') +\n\tTRIGDATA('LQML.$RQM') +\n\tINITQ('SYSTEM.CHANNEL.INITQ') +\n\tTRIGGER USAGE(XMITQ)\n\n" > ~/mqsetup.mqsc
CONNECTED=$(Connected | awk 'NR==16')
echo -en "\n\nHello From: $CONNECTED\n\n"
for JMSDESTINATION in $(GetJMS | awk 'NR>=16&&NR<=24{print $1}')
do
read -r -p $'\n\nWhich REMOTE QUEUE NAME matches with this ${JMSDESTINATION}?' RNAME
QDESC=$(echo "$JMSDESTINATION" | tr '.' ' ' | tr '[[:upper:]]' '[[:lower:]]')
echo -en "\n\nDEFINE +\n\tQR($JMSDESTINATION) +\n\t\tREPLACE DESCR('$ORGID $QDESC Queue') +\n\t\tREPLACE MAXDEPTH(5000) +\n\t\tXMITQ('BUF.2.$ORGID.XMQ') +\n\t\tRNAME('$RNAME') +\n\t\tRQMNAME('$RQM')" >> ~/mqsetup.mqsc
done
Here is the script I've built, hoping to automate the setup of IBM MQ Queues and Channels. My problem is that outside this script, I can establish an SQL Session without an issue, directly from the shell, provided I input the variables seen in the script. I can call the functions and everything returns just as I'd hope it would. When I run the exact same things from within the script, I get timeout errors ... the "Hello From" is blank, which tells me there is no DB connection.
I'm totally stumped as to why it all works great from outside the script, but inside it times out.
I appreciate the eyes and the help!
You're overwriting a variable value. You have this at the top of the script:
PORT="5678"
but then later on you do:
read -r -p $'\n\nWhat is the PORT of the REMOTE QUEUE MANAGER? ' PORT
which overwrites your 5678 value with whatever is entered there. That port may not be listening on the DB server at all, or may be doing something else, or if you don't enter a value it'll default to port 1521 when you connect. But either way the connection is going to fail, either quickly or slowly depending on the port state (e.g. slower maybe if a firewall blocks it).
If you test the connection by adding a Connected call before the read calls (as I initially did), it seems to work fine; but the connections after the reads don't work because the port value they try to connect to is now wrong.
Use a different name for the two variables, e.g. RQ_PORT for the second one - both in its read command and the subsequent creation of the ~/mqsetup.mqsc file.
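For example, with the suggested name, the read line and the CONNAME fragment of the echo would become:
read -r -p $'\n\nWhat is the PORT of the REMOTE QUEUE MANAGER? ' RQ_PORT
...
CONNAME('$CONN($RQ_PORT)') +
leaving $PORT to mean only the database port throughout the script.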
You may also find it useful to add the -l flag to your SQL*Plus call so that if the connection fails for some reason it won't re-prompt for credentials, which in some circumstances can make the script appear to hang until you hit enter a few times.
Not directly relevant to the problem, but when automating anything like this I usually also use the -s flag to suppress the banners (which can vary between environments); and if you're only interested in capturing query output I'd usually set headings and/or pagination off, and feedback off, and generally set SQL*Plus up to generate as little noise as possible - it makes parsing out the interesting bits easier.
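Putting those together, a quieter version of the connection function might look like this (a sketch; which set options you want depends on what you parse out, and note that with the banner suppressed the awk 'NR==16' line offsets in your script would need revisiting):
function SQLConnection(){
sqlplus -s -l "$DB_USER"/"$DB_PASS"@"$HOST":"$PORT"/"$SERVICE"
}
function Connected(){
SQLConnection <<EOF
set heading off pagesize 0 feedback off verify off
select sys_context('USERENV','SERVER_HOST') from dual;
EOF
}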

Copy the 3 newest files on a remote server using expect, then close the session

For starters, I'm a complete novice with expect scripts. I have written a few ssh scripts, but I can't seem to figure out how to get the latest 3 log files after running a set of tests for a new build. My main goal is to find the latest log files and copy them to my local machine. PLEASE DON'T tell me that it's bad practice to hard-code the login and password; I'm doing so because it's temporary, to make the script work. My code currently...
#!/usr/bin/expect -f
set timeout 15
set prompt {\]\$ ?#}
spawn ssh -o "StrictHostKeyChecking no" "root@remote_ip"
expect {
"RSA key fingerprint" {send "yes\r"; exp_continue}
"assword:" {send "password\r"; exp_continue}
}
sleep 15
send -- "export DISPLAY=<display_ip>\r"
sleep 5
send "cd /path/to/test/\r"
sleep 5
set timeout -1
send "bash run.sh acceptance.test\r"
#Everything above works. The tests have finished, about to cp log files
send "cd Log\r"
sleep 5
send -- "pwd\r"
sleep 5
set newestFile [send "ls -t | head -3"]
#tried [eval exec `ls -t | head -3`]
#No matter what I try, my code always gets stuck here. Either it won't close the session,
#or ls: invalid option -- '|', or just nothing, and it closes the session.
#usually never makes it beyond here :(
expect $prompt
sleep 5
puts $newestFile
sleep 5
send -- "exit\r"
sleep 5
set timeout 120
spawn rsync -azP root@remote_ip:'ls -t /logs/path/ | head -3' /local/path/
expect {
"fingerprint" {send "yes\r"; exp_continue};
"assword:" {send "password\r"; exp_continue};
}
Thanks in advance
When writing an expect script, you need to follow the pattern of expecting the remote side to write some output (e.g., a prompt) and then sending something to it in reply. The overall pattern is spawn, expect, send, expect, send, …, close, wait. If you don't expect from time to time, there are some buffers that fill up, which is probably what's happening to you.
Let's fix the section with the problems (though you should be expecting the prompt before this too):
send "cd Log\r"
expect -ex $prompt
send -- "pwd\r"
expect -ex $prompt
send "ls -t | head -3\r"
# Initialise a variable to hold the list of files produced
set newestFiles {}
# SKIP OVER THE LINE "TYPED IN" JUST ABOVE
expect \n
expect {
-re {^([^\r\n]*)\r\n} {
lappend newestFiles $expect_out(1,string)
exp_continue
}
-ex $prompt
}
# Prove what we've found for demonstration purposes
send_user "Found these files: \[[join $newestFiles ,]\]\n"
I've also made a few other corrections. In particular, send has no useful result itself, so we need an expect with a regular expression (use the -re flag) to pick out the filenames. I like to use the other form of the expect command for this, as that lets me match against several things at once. (I'm using the -ex option for exact matching with the prompts because that works better in my testing; you might need it, or might not.)
Also, make sure you use \r at the end of a line sent with send, or the other side will still be waiting “for you to press Return”, which is what the \r simulates. And don't forget to use:
exp_internal 1
when debugging your code, as that tells you exactly what expect is up to.
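Once the file names are captured, one way to fetch them (a sketch; it assumes the absolute path to the Log directory and reuses the same password handling as the top of the script) is to end the ssh session and then spawn scp once per file:
send -- "exit\r"
expect eof
foreach f $newestFiles {
    spawn scp -o "StrictHostKeyChecking no" "root@remote_ip:/path/to/test/Log/$f" /local/path/
    expect {
        "assword:" {send "password\r"; exp_continue}
        eof
    }
}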

Declare bash variables inside sql EOF

How can I set a shell variable from within the sqlplus heredoc? See the lines marked "Doesn't work" below.
I thought we could run almost any bash statement with ! or host in front of the line:
#!/bin/bash
sqlplus scott/tiger@orcl << EOF
! export v10="Hi" Doesn't work, why?
! echo $v10 Doesn't work, why?
! echo "Done" Works perfectly and also other bash commands
select * from dept; Works perfectly
exit
EOF
Thank you
What @jordanm says "probably" is exactly what is happening. When you specify a host command from within sqlplus, a separate shell process is spawned, the command is executed by that process, then that process terminates and control returns to sqlplus. Any environment variables set in that child shell process are good only within it, so when it terminates, they are gone.
As for your specific lines that "work" and "don't work": export v10="Hi" does work, but the export command writes nothing to stdout, and, as explained, the variable v10 ceases to exist once the child process completes and control returns to sqlplus. The echo $v10 also works, but since it runs in a new shell process, that process has no value for $v10, so there is nothing to echo.
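The effect is easy to reproduce in plain bash, where each bash -c is a separate child process, just as each ! command is under sqlplus:
$ bash -c 'export v10="Hi"'
$ bash -c 'echo "v10 is: $v10"'
v10 is:
The first child sets v10 and exits, taking the variable with it; the second child starts fresh and finds nothing.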
What are you trying to accomplish by setting environment variables from within sqlplus?
I found it; all I had to do was:
<< EOF
whenever sqlerror exit failure rollback
whenever oserror exit failure rollback
@scriptname.sql
EXIT
EOF
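Note that the opposite direction needs no tricks: an unquoted heredoc is expanded by bash before sqlplus ever sees the text, so bash variables can be used inside the SQL. A minimal sketch (names are illustrative):
#!/bin/bash
v10="Hi"
sqlplus -s scott/tiger@orcl << EOF
select '$v10' as greeting from dual;
exit
EOF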

How can I view all comments posted by users in a Bitbucket repository

On the repository home page, I can see comments posted in Recent Activity at the bottom, but it only shows 10 comments.
I want to see all the comments posted since the beginning.
Is there any way?
Comments of pull requests, issues and commits can be retrieved using bitbucket’s REST API.
However it seems that there is no way to list all of them at one place, so the only way to get them would be to query the API for each PR, issue or commit of the repository.
Note that this takes a long time, since bitbucket has seemingly set a limit to the number of accesses via API to repository data: I got Rate limit for this resource has been exceeded errors after retrieving around a thousand results, then I could retrieve about only one entry per second elapsed from the time of the last rate limit error.
Finding the API URL to the repository
The first step is to find the URL to the repo. For private repositories, it is necessary to get authenticated by providing username and password (using curl’s -u switch). The URL is of the form:
https://api.bitbucket.org/2.0/repositories/{repoOwnerName}/{repoName}
Running git remote -v from the local git repository should provide the missing values. Check the forged URL (below referred to as $url) by verifying that repository information is correctly retrieved as JSON data from it: curl -u username $url.
Fetching comments of commits
Comments of a commit can be accessed at $url/commit/{commitHash}/comments.
The resulting JSON data can be processed by a script. Beware that the results are paginated.
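Pagination in the 2.0 API is cursor-style: each page of results carries a next member with the URL of the following page. A sketch of walking all the pages of a commit's comments (it assumes jq is available; content.raw is where the comment text lives):
page="$url/commit/{commitHash}/comments?pagelen=100"
while [ -n "$page" ] && [ "$page" != "null" ]; do
json=$(curl -s -u username:"$pw" "$page")
printf '%s\n' "$json" | jq -r '.values[].content.raw'
page=$(printf '%s' "$json" | jq -r '.next')
done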
Below I simply extract the number of comments per commit. It is indicated by the value of the member size of the retrieved JSON object; I also request a partial response by adding the GET parameter fields=size.
My script getNComments.sh:
#!/bin/sh
pw=$1
id=$2
json=$(curl -s -u username:"$pw" \
https://api.bitbucket.org/2.0/repositories/{repoOwnerName}/{repoName}/commit/$id/comments'?fields=size')
printf '%s' "$json" | grep -q '"type": "error"' \
&& printf "ERROR $id\n" && exit 0
nComments=$(printf '%s' "$json" | grep -o '"size": [0-9]*' | cut -d' ' -f2)
: ${nComments:=EMPTY}
checkNumeric=$(printf '%s' "$nComments" | tr -dc 0-9)
[ "$nComments" != "$checkNumeric" ] \
&& printf >&2 "!ERROR! $id:\n%s\n" "$json" && exit 1
printf "$nComments $id\n"
To use it, taking into account the possibility for the error mentioned above:
A) Prepare input data. From the local repository, generate the list of commits as wanted (run git fetch --all first if the local repo needs updating); check out git help rev-list for how it can be customised.
git rev-list --all | sort > sorted-all.id
cp sorted-all.id remaining.id
B) Run the script. Note that the password is passed here as a parameter – so first assign it to a variable safely using stty -echo; IFS= read -r passwd; stty echo, in one line; also see security considerations below. The processing is parallelised onto 15 processes here, using the option -P.
< remaining.id xargs -P 15 -L 1 ./getNComments.sh "$passwd" > commits.temp
C) When the rate limit is reached, that is, when getNComments.sh prints !ERROR!, kill the above command (Ctrl-C) and execute the commands below to update the input and output files. Wait a while for the rate limit to reset, then re-execute the command from step B, and repeat until all the data is processed (that is, until wc -l remaining.id returns 0).
cat commits.temp >> commits.result
cut -d' ' -f2 commits.result | sort | comm -13 - sorted-all.id > remaining.id
D) Finally, you can get the commits which received comments with:
grep '^[1-9]' commits.result
Fetching comments of pull requests and issues
The procedure is the same as for fetching commits’ comments, but for the following two adjustments:
Edit the script to replace in the URL commit by pullrequests or by issues, as appropriate;
Let $n be the number of issues/PRs to search. The git rev-list command above becomes: seq 1 $n > sorted-all.id
The total number of PRs in the repository can be obtained with:
curl -su username $url/pullrequests'?state=&fields=size'
and, if the issue tracker is set up, the number of issues with:
curl -su username $url/issues'?fields=size'
Hopefully, the repository has few enough PRs and issues so that all data can be fetched in one go.
Viewing comments
They can be viewed normally via the web interface on their commit/PR/issue page at:
https://bitbucket.org/{repoOwnerName}/{repoName}/commits/{commitHash}
https://bitbucket.org/{repoOwnerName}/{repoName}/pull-requests/{prId}
https://bitbucket.org/{repoOwnerName}/{repoName}/issues/{issueId}
For example, to open all PRs with comments in firefox:
awk '/^[1-9]/{print "https://bitbucket.org/{repoOwnerName}/{repoName}/pull-requests/"$2}' PRs.result | xargs firefox
Security considerations
Arguments passed on the command line are visible to all users of the system, via ps ax (or /proc/$PID/cmdline). Therefore the bitbucket password will be exposed, which could be a concern if the system is shared by multiple users.
There are three commands getting the password from the command line: xargs, the script, and curl.
It appears that curl tries to hide the password by overwriting its memory, but it is not guaranteed to work, and even if it does, it leaves it visible for a (very short) time after the process starts. On my system, the parameters to curl are not hidden.
A better option could be to pass the sensitive information through environment variables. They should be visible only to the current user and root via ps axe (or /proc/$PID/environ); although it seems that there are systems that let all users access this information (do a ls -l /proc/*/environ to check the environment files’ permissions).
In the script simply replace the lines pw=$1 id=$2 with id=$1, then pass pw="$passwd" before xargs in the command line invocation. It will make the environment variable pw visible to xargs and all of its descendent processes, that is the script and its children (curl, grep, cut, etc), which may or may not read the variable. curl does not read the password from the environment, but if its password hiding trick mentioned above works then it might be good enough.
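With that change, the invocation in step B becomes (a sketch):
< remaining.id pw="$passwd" xargs -P 15 -L 1 ./getNComments.sh > commits.temp
and the script reads $pw from its environment instead of from its first argument.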
There are ways to avoid passing the password to curl via the command line, notably via standard input using the option -K -. In the script, replace curl -s -u username:"$pw" with printf -- '-s\n-u "%s"\n' "$authinfo" | curl -K - and define the variable authinfo to contain the data in the format username:password. Note that this method needs printf to be a shell built-in to be safe (check with type printf), otherwise the password will show up in its process arguments. If it is not a built-in, try with print or echo instead.
A simple alternative to an environment variable that will not appear in ps output in any case is via a file. Create a file with read/write permissions restricted to the current user (chmod 600), and edit it so that it contains username:password as its first line. In the script, replace pw=$1 with IFS= read -r authinfo < "$1", and edit it to use curl’s -K option as in the paragraph above. In the command line invocation replace $passwd with the filename.
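Assembled, the top of the file-based script would look like this (a sketch combining the two replacements just described):
#!/bin/sh
IFS= read -r authinfo < "$1" # first line of the file: username:password
id=$2
json=$(printf -- '-s\n-u "%s"\n' "$authinfo" | curl -K - \
https://api.bitbucket.org/2.0/repositories/{repoOwnerName}/{repoName}/commit/$id/comments'?fields=size')
It would then be invoked as: < remaining.id xargs -P 15 -L 1 ./getNComments.sh /path/to/credentials.file > commits.temp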
The file approach has the drawback that the password will be written to disk (note that files in /proc are not on the disk). If this too is undesirable, it is possible to pass a named pipe instead of a regular file:
mkfifo pipe
chmod 600 pipe
# make sure printf is a builtin, or use an equivalent instead
(while :; do printf -- '%s\n' "username:$passwd"; done) > pipe&
pid=$!
exec 3<pipe
Then invoke the script passing pipe instead of the file. Finally, to clean up do:
kill $pid
exec 3<&-
This will ensure the authentication info is passed directly from the shell to the script (through the kernel), is not written to disk and is not exposed to other users via ps.
You can go to Commits and see the top line for each commit; you will need to click on each one to see further information.
If I find a way to see all without drilling into each commit, I will update this answer.