Expect - wait until process terminates - telnet

I am completely new to Expect, and I want to run my Python script via Telnet.
The script takes about 1 minute to execute, but when I try to run it via Telnet with Expect, it doesn't work.
I have this simple Expect script:
#! /usr/bin/expect
spawn telnet <ip_addr>
expect "login"
send "login\r"
expect "assword"
send "password\r"
expect "C:\\Users\\user>\r"
send "python script.py\r"
expect "C:\\Users\\user>\r"
close
When I replace script.py with one that has a shorter execution time, it works great. Could you tell me what I should change so that the script waits until my script.py process terminates? Should I use timeout or sleep?

If you are sure about the execution time of the script, then you can add a sleep or set the timeout to the desired value:
send "python script.py\r"
sleep 60; # Sleeping for 1 min
expect "C:\\Users\\user>"; # Now expecting for the prompt
Or
set timeout 60;
send "python script.py\r"
expect "C:\\Users\\user>"; # Now expecting for the prompt
But if the execution time varies, then it is better to handle the timeout event and keep waiting for the prompt up to some maximum amount of time, i.e.:
set timeout 60; # Setting timeout as 1 min;
set counter 0
send "python script.py\r"
expect {
    # Increase the 'counter' in case of 'timeout' and continue with 'expect'
    timeout {
        incr counter
        puts "Waiting for the completion of script..."
        # Check if 'counter' is equal to 5.
        # That means we have waited 5 mins already,
        # so exit the program.
        if {$counter == 5} {
            puts "Might be some problem with python script"
            exit 1
        }
        exp_continue; # Causes the 'expect' to run again
    }
    # Now expect the prompt
    "C:\\Users\\user>" {puts "Script execution is completed"}
}

A simpler alternative, if you don't care how long it takes to complete:
set timeout -1
# rest of your code here ...
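For completeness, a minimal sketch of the whole script with the unlimited timeout set just before the long-running command (the host, credentials, and prompt are the placeholders from the question):
#!/usr/bin/expect
spawn telnet <ip_addr>
expect "login"
send "login\r"
expect "assword"
send "password\r"
expect "C:\\Users\\user>"
set timeout -1; # from here on, expect waits indefinitely
send "python script.py\r"
expect "C:\\Users\\user>"
close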

Related

After command usage in Tcl

My understanding is that the after command can be used to delay the execution of a script or a command by a certain number of milliseconds, but when I execute the command below, the output is printed immediately.
Command:
after 4000 [list puts "hello"]
Output:
hello
after#0
Question: Why was the output not delayed for 4s here?
What you wrote works; I just tried it at a tclsh prompt like this:
% after 4000 [list puts "hello"]; after 5000 set x 1; vwait x
Did you write something else instead, such as this:
after 4000 [puts "hello"]
In this case, instead of delaying the execution of puts, you'd be calling it immediately and using its result (the empty string) as the argument to after (which is valid, but useless).
Think of [list …] as a special kind of quoting.
The other possibility when running interactively is that you take several seconds between running the after and starting the event loop (since the callbacks are only ever run by the event loop). You won't see that in wish of course — that's always running an event loop — but it's possible in tclsh as that doesn't start an event loop by default. But I'd put that as a far distant second in terms of probability to omitting the list word…
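To illustrate both points (the delay values here are arbitrary): [list] captures the substitution now, but the callback only runs once the event loop is entered:
set msg "hello"
after 4000 [list puts $msg]  ;# $msg is substituted now, executed later
after 5000 set done 1
vwait done                   ;# enter the event loop; "hello" appears after ~4s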

AutoHotKey: wait for process, breaking on user input

A script is waiting for a process to start; it shall do so indefinitely, but only as long as there is no human input. In other words: the script shall wait for either process start or human input, whichever comes first (also, the process may already be running when the script starts, in which case the script shall close immediately). I have thought of something like this, but there is likely a better way, since this loop doesn't break on input:
while (A_TimeIdlePhysical > 100) {
Process, Wait, SomeProcess.exe
}
Any ideas?
Tested with notepad:
#Persistent
SetTimer, DetectProcess, 50
return
DetectProcess:
If (ProcessExist("notepad.exe")) ; if the process is already running
ExitApp
; otherwise:
If (A_TimeIdlePhysical > 100) ; as long as there is no human input
{
If (ProcessExist("notepad.exe")) ; wait for either process start
ExitApp
}
else ; or human input
ExitApp
return
ProcessExist(ProcessName){
Process, Exist, %ProcessName%
return Errorlevel
}

Copy 3 newest files on a remote server using Expect, then close the session

For starters, I'm a complete novice with Expect scripts. I have written a few ssh scripts, but I can't seem to figure out how to get the latest 3 log files after running a set of tests for a new build. My main goal is to find the latest log files and copy them to my local machine. PLEASE DON'T tell me that it's bad practice to hard-code the login and password; I'm doing so because it's temporary, to make the script work. My code currently...
#!/usr/bin/expect -f
set timeout 15
set prompt {\]\$ ?#}
spawn ssh -o "StrictHostKeyChecking no" "root@remote_ip"
expect {
"RSA key fingerprint" {send "yes\r"; exp_continue}
"assword:" {send "password\r"; exp_continue}
}
sleep 15
send -- "export DISPLAY=<display_ip>\r"
sleep 5
send "cd /path/to/test/\r"
sleep 5
set timeout -1
send "bash run.sh acceptance.test\r"
#Everything above works. The tests have finished; about to cp the log files
send "cd Log\r"
sleep 5
send -- "pwd\r"
sleep 5
set newestFile [send "ls -t | head -3"]
#tried [eval exec `ls -t | head -3`]
#No matter what I try, my code always gets stuck here. Either it won't close the session
#or ls: invalid option -- '|' or just nothing and it closes the session.
#usually never makes it beyond here :(
expect $prompt
sleep 5
puts $newestFile
sleep 5
send -- "exit\r"
sleep 5
set timeout 120
spawn rsync -azP root@remote_ip:'ls -t /logs/path/ | head -3' /local/path/
expect {
"fingerprint" {send "yes\r"; exp_continue};
"assword:" {send "password\r"; exp_continue};
}
Thanks in advance
When writing an expect script, you need to follow the pattern of expecting the remote side to write some output (e.g., a prompt) and then sending something to it in reply. The overall pattern is spawn, expect, send, expect, send, …, close, wait. If you don't expect from time to time, there are some buffers that fill up, which is probably what's happening to you.
Let's fix the section with the problems (though you should be expecting the prompt before this too):
send "cd Log\r"
expect -ex $prompt
send -- "pwd\r"
expect -ex $prompt
send "ls -t | head -3\r"
# Initialise a variable to hold the list of files produced
set newestFiles {}
# SKIP OVER THE LINE "TYPED IN" JUST ABOVE
expect \n
expect {
-re {^([^\r\n]*)\r\n} {
lappend newestFiles $expect_out(1,string)
exp_continue
}
-ex $prompt
}
# Prove what we've found for demonstration purposes
send_user "Found these files: \[[join $newestFiles ,]\]\n"
I've also made a few other corrections. In particular, send has no useful result itself, so we need an expect with a regular expression (use the -re flag) to pick out the filenames. I like to use the other form of the expect command for this, as that lets me match against several things at once. (I'm using the -ex option for exact matching with the prompts because that works better in my testing; you might need it, or might not.)
Also, make sure you use \r at the end of a line sent with send, or the other side will still be waiting "for you to press Return", which is what the \r simulates. And don't forget to use:
exp_internal 1
when debugging your code, as that tells you exactly what expect is up to.
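One way to then copy the collected files to your local machine (a sketch only; the remote directory /path/to/test/Log and the hard-coded password are carried over from your script, so adjust both) is to spawn scp once per collected filename:
foreach f $newestFiles {
    spawn scp -o "StrictHostKeyChecking no" root@remote_ip:/path/to/test/Log/$f /local/path/
    expect {
        "RSA key fingerprint" {send "yes\r"; exp_continue}
        "assword:" {send "password\r"; exp_continue}
        eof
    }
}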

Expect script does not work under crontab

I have an Expect script which I need to run every 3 minutes on my management node to collect tx/rx values for each port attached to a DCX Brocade SAN Switch, using the command portperfshow.
Each time I try to use crontab to execute the script every 3 minutes, the script does not work!
My expect script starts with #!/usr/bin/expect -f and I am calling the script using the following syntax under cron:
*/3 * * * * /usr/bin/expect -f /root/portsperfDCX1/collect-all.exp sanswitchhostname
However, when I execute the script (not under cron) it works as expected:
root# ./collect-all.exp sanswitchhostname
works just fine.
Please, can someone help? Thanks.
The script collect-all.exp is:
#!/usr/bin/expect -f
#Time and Date
set day [timestamp -format %d%m%y]
set time [timestamp -format %H%M]
#logging
set LogDir1 "/FPerf/PortsLogs"
set timeout 5
set ipaddr [lindex $argv 0]
set passw "XXXXXXX"
if { $ipaddr == "" } {
puts "Usage: <script.exp> <ip address>\n"
exit 1
}
spawn ssh admin@$ipaddr
expect -re "password"
send "$passw\r"
expect -re "admin"
log_file "$LogDir1/$day-portsperfshow-$time"
send "portperfshow -tx -rx -t 10\r"
expect timeout "\n"
send \003
log_file
send -- "exit\r"
close
I had the same issue, except that my script was ending with
interact
Finally I got it working by replacing it with these two lines:
expect eof
exit
Changing interact to expect eof worked for me!
I needed to remove the exit part because I had more statements in the bash script after the expect line (I was calling expect inside a bash script).
There are two key differences between a program that is run normally from a shell and a program that is run from cron:
Cron does not populate (many) environment variables. Notably absent are TERM, SHELL and HOME, but that's just a small proportion of the long list that will not be defined.
Cron does not set up a current terminal, so /dev/tty doesn't resolve to anything. (Note, programs spawned by Expect will have a current terminal.)
With high probability, any difficulties will come from these, especially the first. To fix this, you need to save all your environment variables from an interactive session and use them in your Expect script to repopulate the environment. The easiest way is to use this little expect script:
unset -nocomplain ::env(SSH_AUTH_SOCK) ;# This one is session-bound anyway
puts [list array set ::env [array get ::env]]
That will write out a single very long line which you want to put near the top of your script (or at least before the first spawn). Then see if that works.
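The generated line will look something like this (values abbreviated and purely illustrative):
array set ::env {HOME /root PATH /usr/local/sbin:/usr/local/bin:/usr/bin LANG en_US.UTF-8 ...}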
Jobs run by cron are not considered login shells, and thus don't source your .bashrc, .bash_profile, etc.
If you want that behavior, you need to add it explicitly to the crontab entry like so:
$ crontab -l
0 13 * * * bash -c '. .bash_profile; etc ...'
$

cron script to act as a queue OR a queue for cron?

I'm betting that someone has already solved this and maybe I'm using the wrong search terms for Google to tell me the answer, but here is my situation.
I have a script that I want to run, but only when scheduled and only one at a time (the script can't run simultaneously with itself).
Now the sticky part: say I have a table called "myhappyschedule" which has the data I need and the scheduled times. This table can have multiple scheduled times, even at the same time, and each one should run this script. So essentially I need a queue of script runs, each waiting for the previous one to finish (sometimes the script takes just a minute to execute, sometimes many, many minutes).
What I'm thinking of doing is making a script that checks myhappyschedule every 5 minutes, gathers up the entries that are due, and puts them into a queue where another script executes each 'job' or occurrence in order. All of which sounds messy.
To make this longer: I should say that I'm allowing users to schedule things in myhappyschedule, not edit crontab.
What can be done about this? File locks and scripts calling scripts?
add a column exec_status to myhappyschedule (called myhappytable in the pseudocode below; maybe also add time_started and time_finished, see pseudocode)
run the following cron script every x minutes
pseudocode of cron script:
[create/check pid lock (optional, but see "A potential pitfall" below)]
get number of rows from myhappytable where (exec_status == executing_now)
if it is > 0, exit
begin loop
get one row from myhappytable
where (exec_status == not_yet_run) and (scheduled_time <= now)
order by scheduled_time asc
if no such row, exit
set row exec_status to executing_now (maybe set time_started to now)
execute whatever command the row contains
set row exec_status to completed
(maybe also store the command output/return as well, set time_finished to now)
end loop
[delete pid lock file (complementary to the starting pid lock check)]
This way, the script first checks if none of the commands is running, then runs first not-yet run command, until there are no more commands to be run at the given moment. Also, you can see what command is executing by querying the database.
A potential pitfall: if the cron script is killed, a scheduled task will remain in the "executing_now" state. That's what the pid lock at the beginning and end is for: to see whether the cron script terminated properly. Pseudocode of the create/check pidlock:
if exists pidlockfile then
check if process id given in file exists
if not exists then
update myhappytable set exec_status = error_cronscript_died_while_executing_this
where exec_status == executing_now
delete pidlockfile
else (previous instance still running)
exit
endif
endif
create pidlockfile containing cron script process id
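If it helps to see the main loop concretely, here is a rough Tcl sketch against a hypothetical SQLite version of the table (table and column names follow the pseudocode, the database path is made up, the pid-lock part is omitted, and the command column is assumed to hold a plain argv-style command line):
package require sqlite3
sqlite3 db /var/lib/myhappy.db  ;# hypothetical database file

# Exit if a command is already executing.
if {[db eval {SELECT count(*) FROM myhappytable
              WHERE exec_status = 'executing_now'}] > 0} {exit}

while {1} {
    # Oldest due row that has not yet run.
    set row [db eval {SELECT id, command FROM myhappytable
                      WHERE exec_status = 'not_yet_run'
                        AND scheduled_time <= datetime('now')
                      ORDER BY scheduled_time ASC LIMIT 1}]
    if {[llength $row] == 0} break
    lassign $row id command
    db eval {UPDATE myhappytable SET exec_status = 'executing_now',
             time_started = datetime('now') WHERE id = $id}
    catch {exec {*}$command}  ;# run the scheduled command; failures still mark completion
    db eval {UPDATE myhappytable SET exec_status = 'completed',
             time_finished = datetime('now') WHERE id = $id}
}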
You can use the at(1) command inside your script to schedule its next run. Before it exits, it can check myhappyschedule for the next run time. You don't need cron at all, really.
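For example, from a Tcl script (a sketch; it assumes a POSIX at(1) that accepts -t, and both the timestamp, in [[CC]YY]MMDDhhmm form, and the script path are placeholders):
catch {exec at -t 203001011330 << "/path/to/script\n"}  ;# at announces the queued job on stderr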
I came across this question while researching a solution to the queuing problem. For the benefit of anyone else searching, here is my solution.
Combine this with a cron that starts jobs as they are scheduled (even if they are scheduled to run at the same time) and that solves the problem you described as well.
Problem:
At most one instance of the script should be running.
We want to queue up requests and process them as fast as possible.
I.e., we need a pipeline to the script.
Solution:
Create a pipeline to any script. Done using a small bash script (further down).
The script can be called as
./pipeline "<any command and arguments go here>"
Example:
./pipeline sleep 10 &
./pipeline shabugabu &
./pipeline single_instance_script some arguments &
./pipeline single_instance_script some other_arguments &
./pipeline "single_instance_script some yet_other_arguments > output.txt" &
..etc
The script creates a new named pipe for each command. So the above will create named pipes sleep.pipe, shabugabu.pipe, and single_instance_script.pipe.
In this case the initial call will start a reader and run single_instance_script with some arguments as its arguments. Once the call completes, the reader will grab the next request off the pipe and execute it with some other_arguments, complete, grab the next, etc.
This script will block requesting processes, so call it as a background job (& at the end) or as a detached process with at (at now <<< "./pipeline some_script").
#!/bin/bash -Eue
# Using command name as the pipeline name
pipeline=$(basename $(expr "$1" : '\(^[^[:space:]]*\)')).pipe
is_reader=false
function _pipeline_cleanup {
if $is_reader; then
rm -f $pipeline
fi
rm -f $pipeline.lock
exit
}
trap _pipeline_cleanup INT TERM EXIT
# Dispatch/initialization section, critical
lockfile $pipeline.lock
if [[ -p $pipeline ]]
then
echo "$*" > $pipeline
exit
fi
is_reader=true
mkfifo $pipeline
echo "$*" > $pipeline &
rm -f $pipeline.lock
# Reader section
while read command < $pipeline
do
echo "$(date) - Executing $command"
($command) &> /dev/null
done