In a batch file containing the following:
cd .\10%%_AltApp\
call Process_UZ_Output.exe
sleep 2
cd ..
cd .\10%%_FalFld\
call Process_UZ_Output.exe
sleep 2
cd ..
...(repeated many more times)...
the batch process is temporarily held up by the fact that when "Process_UZ_Output.exe" completes its task, it waits for the user to hit Enter. I'm wondering if there is a way to modify the batch file so that it continues without the user having to hit Enter every time Process_UZ_Output.exe completes?
Try START instead of CALL:
START /B Process_UZ_Output.exe
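Applied to the batch file in the question, each block might look like this (a sketch; without /WAIT, START lets the batch file continue as soon as the program is launched, and /B keeps it in the same console window):
cd .\10%%_AltApp\
START /B Process_UZ_Output.exe
sleep 2
cd ..
Note that each instance will still sit in the background waiting for Enter. If the program needs only a single Enter keypress to finish, piping one in (echo. | Process_UZ_Output.exe) is a common alternative.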
I have a scheduled task with a .bat file that downloads some files from a web server every morning, then processes the data and UPDATES a database. It then triggers another .bat file to SELECT data and EXPORT it to a .xls file.
The second .bat file is like this:
set a=%date:/=-%
del /q F:\file_path\file1_%a%.xls
del /q F:\file_path\file2_%a%.xls
echo %time%_%date%
cd /D D:\oracle\product\10.2.0\db_1\BIN
sqlplus usrname/psswd@ORCL @F:\select_path\select1.sql
timeout /t 30 /nobreak > nul
ren F:\file_path\file1.xls file1_%a%.xls
sqlplus usrname/psswd@ORCL @F:\select_path\select2.sql
timeout /t 30 /nobreak > nul
ren F:\file_path\file2.xls file2_%a%.xls
cd /D F:\KMB-SP\TI\Scripts\script_select
::Command to send file1 and file2 via e-mail.
But when I arrive at the office and check the progress, only the first .xls is done. So I have to run the second .bat manually and it runs perfectly.
What could be causing this?
Notes:
I put the timeout between the two SELECTs because, in the past, the code was stopping after the INSERT and didn't trigger the second .bat. My colleague said it could be an execution exception; putting in a timeout would give the INSERT time to finish properly.
Before, it used to run both SELECTs and then rename both files. Done that way, it sometimes worked and sometimes didn't, so I changed the order: select1, rename1, select2, rename2.
As we download files every day, we concatenate the data into a single file called DT-date. The first code goes like this:
rem The data is downloaded and the files are organized into their folders
if exist F:\path\DT-date (
Data_consolidation.exe
timeout /t 300 /nobreak > nul
F:\path\second_bat.bat
) else (exit)
As @William Robertson said, I tried echo exit right after the first SELECT, but again, it only extracted the first file and not the second one.
As @WilliamRobertson suggested, writing echo exit | before the sqlplus commands solved the problem.
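Applied to the script above, each extract would look something like this (a sketch; piping exit into sqlplus makes it quit as soon as the SQL script finishes, so control returns to the batch file):
echo exit | sqlplus usrname/psswd@ORCL @F:\select_path\select1.sql
ren F:\file_path\file1.xls file1_%a%.xls
echo exit | sqlplus usrname/psswd@ORCL @F:\select_path\select2.sql
ren F:\file_path\file2.xls file2_%a%.xls
With sqlplus exiting on its own, the timeout delays may no longer be needed.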
What I would like to do is the following:
if process-x fails (to (re)start) then execute cmd-x
if it recovers then execute cmd-y
For alerting via e-mail, a notification is sent on recovery by default. For the exec method, however, I cannot find a way to make this work. If I try this in the monitrc:
check process proc_x with pidfile /var/run/proc_x.pid
  start program = "/bin/sh -c '/etc/init.d/Sxxproc_x start'"
  stop program = "/bin/sh -c '/etc/init.d/Sxxproc_x stop'"
  if 3 restarts within 5 cycles then exec "<some error cmd>"
  else if succeeded then exec "<some restore cmd>"
this results in a "syntax error 'else'". If I remove the else line, the error command is called as expected. Apparently, else cannot be used with the restarts test. But how can I execute a command when the program starts successfully or recovers?
I found a solution thanks to the answer to this topic:
get monit to alert first and restart later
The "if not exist for ..." with corresponding "else" did the trick for me to report the recover. The error report is separate. My monitrc code now:
check process proc_x with pidfile /var/run/proc_x.pid
  start program = "/bin/sh -c '/etc/init.d/Sxxproc_x start'"
  stop program = "/bin/sh -c '/etc/init.d/Sxxproc_x stop'"
  if 1 restart within 1 cycle then exec "<some error cmd>"
    repeat every 1 cycle
  if not exist for 3 cycles then restart
  else if succeeded 2 times within 2 cycles then exec "<some restore cmd>"
I have what I hope is a pretty simple question, but I'm not super familiar with Sun Grid Engine, so I've been having trouble finding the answer. I am currently submitting jobs to a grid using a bash submission script that generates a command and then executes it. I have read online that if a Sun Grid Engine job exits with a code of 99, it gets re-submitted to the grid. I have written my bash script to do this:
[code to generate command, stored in $command]
$command
STATUS=$?
if [[ $STATUS -ne 0 ]]; then
    exit 99
fi
exit 0
When I submit this job to the grid with a command that I know has a non-zero exit status, the job does indeed appear to be resubmitted; however, the scheduler never sends it to another host. Instead, it just remains stuck in the queue with the status "Rq":
job-ID prior name user state submit/start at queue slots ja-task-ID
-----------------------------------------------------------------------------------------------------------------
2150015 0.55500 GridJob.sh my_user Rq 04/08/2013 17:49:00 1
I have a feeling that this is something simple in the config options for the queue, but I haven't been able to find anything by googling. I've tried submitting the job with the qsub -r y option, but that doesn't seem to change anything.
Thanks!
Rescheduled jobs will only get run in queues that have their rerun attribute (FALSE by default) set to TRUE, so check your queue configuration (qconf -mq myqueue). Without this, your job remains in the rescheduled-pending state indefinitely because it has nowhere to go.
IIRC, submitting jobs with qsub -r yes only marks them for automatic rescheduling in the event of an exec node crash; exiting with status 99 should trigger a reschedule regardless.
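For example (a sketch; my_queue.q stands in for your actual queue name):
qconf -sq my_queue.q | grep rerun    # show the current setting, e.g. "rerun FALSE"
qconf -mq my_queue.q                 # opens the queue config in an editor; change rerun to TRUE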
I am writing a script that will copy Valgrind onto whatever shelf we enter on the command line. The syntax is as follows:
vgrindCopy [shelf number]
For some reason, the files copy over without any issue, but after the copy completes the following error is observed:
bad spawn_id (process died earlier?)
while executing
"expect "#""
Here is a copy of the relevant code:
function login_shelf {
    expect -c "
        set timeout 15
        spawn $1
        expect \"password:\"
        send \"$PW\r\"
        expect \"#\"
        sleep 1
        exit
    "
}
# login and make the valgrind directory at /sfs/software/shelf/current
set -- /opt/swe/tools/ext/gnu/valgrind-3.7.0/i686-linux2.6/lib/valgrind/*
login_shelf "/opt/corp/projects/shelftools/bin/app rsync -Lau $* $shelf:/shelf/valgrind"
After playing around with the code, I found that if I remove the line expect \"#\", then the program doesn't copy any of the files over anymore. What's odd as well is that I'm seeing the issue when I run the script, but a co-worker is not.
Has anyone had a similar issue and determined the cause? Any help would be greatly appreciated as always!
Your code spawns the rsync, and at expect \"#\" it waits for rsync to output a #, which it never does; rsync exits, and expect reports the error.
When you remove the expect \"#\", the expect script exits immediately, terminating the rsync.
Instead of expect \"#\" you should wait for rsync to exit:
expect eof
wait
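The function from the question, adjusted accordingly, might look like this (a sketch):
function login_shelf {
    expect -c "
        set timeout 15
        spawn $1
        expect \"password:\"
        send \"$PW\r\"
        expect eof
        wait
    "
}
expect eof blocks until the spawned rsync closes its output, and wait reaps the process, so the script only exits once the copy has fully finished.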
I'm betting that someone has already solved this and maybe I'm using the wrong search terms for Google to tell me the answer, but here is my situation.
I have a script that I want to run, but only when scheduled and only one at a time (the script can't run simultaneously).
Now the sticky part is that I have a table called "myhappyschedule" which holds the data I need and the scheduled times. This table can have multiple scheduled times, even at the same moment, and each one should run the script. So essentially I need a queue of each time the script fires, and each run has to wait for the previous one to finish. (Sometimes the script takes just a minute to execute, sometimes many, many minutes.)
What I'm thinking of doing is making a script that checks myhappyschedule every 5 minutes, gathers up the entries that are due, and puts them into a queue where another script can execute each 'job' or occurrence in order. All of which sounds messy.
To make this longer: I should say that I'm allowing users to schedule things in myhappyschedule, not edit the crontab.
What can be done about this? File locks and scripts calling scripts?
add a column exec_status to myhappytable (maybe also time_started and time_finished, see pseudocode)
run the following cron script every x minutes
pseudocode of cron script:
[create/check pid lock (optional, but see "A potential pitfall" below)]
get number of rows from myhappytable where (exec_status == executing_now)
if it is > 0, exit
begin loop
    get one row from myhappytable
        where (exec_status == not_yet_run) and (scheduled_time <= now)
        order by scheduled_time asc
    if no such row, exit
    set row exec_status to executing_now (maybe set time_started to now)
    execute whatever command the row contains
    set row exec_status to completed
        (maybe also store the command output/return as well, set time_finished to now)
end loop
[delete pid lock file (complementary to the starting pid lock check)]
This way, the script first checks that none of the commands are running, then runs the first not-yet-run command, and keeps going until there are no more commands to be run at the given moment. Also, you can see which command is executing by querying the database.
A potential pitfall: if the cron script is killed, a scheduled task will remain in the "executing_now" state. That's what the pid lock at the beginning and end is for: to see whether the cron script terminated properly. Pseudocode of the create/check pidlock:
if exists pidlockfile then
    check if process id given in file exists
    if not exists then
        update myhappytable set exec_status = error_cronscript_died_while_executing_this
            where exec_status == executing_now
        delete pidlockfile
    else (previous instance still running)
        exit
    endif
endif
create pidlockfile containing cron script process id
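A minimal bash sketch of the above, assuming a MySQL database called jobs and a myhappytable with hypothetical id and command columns alongside the ones named in the pseudocode:
#!/bin/bash
# Hypothetical schema: myhappytable(id, command, scheduled_time, exec_status, time_started, time_finished)
DB="mysql -N -B jobs -e"
LOCK=/var/run/myhappy.pid

# pid lock: detect a previous instance that is still running or died mid-job
if [ -f "$LOCK" ]; then
    if kill -0 "$(cat "$LOCK")" 2>/dev/null; then
        exit 0    # previous instance still running
    fi
    $DB "UPDATE myhappytable SET exec_status='error_cronscript_died_while_executing_this' WHERE exec_status='executing_now'"
    rm -f "$LOCK"
fi
echo $$ > "$LOCK"

# Run due commands one at a time, oldest first
while :; do
    row=$($DB "SELECT id, command FROM myhappytable WHERE exec_status='not_yet_run' AND scheduled_time <= NOW() ORDER BY scheduled_time ASC LIMIT 1")
    [ -z "$row" ] && break
    id=${row%%$'\t'*}
    cmd=${row#*$'\t'}
    $DB "UPDATE myhappytable SET exec_status='executing_now', time_started=NOW() WHERE id=$id"
    eval "$cmd"
    $DB "UPDATE myhappytable SET exec_status='completed', time_finished=NOW() WHERE id=$id"
done
rm -f "$LOCK"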
You can use the at(1) command inside your script to schedule its next run. Before it exits, it can check myhappyschedule for the next run time. You don't need cron at all, really.
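A sketch of that approach, reusing the hypothetical MySQL setup from above (at -t takes a timestamp in [[CC]YY]MMDDhhmm form):
#!/bin/sh
# Look up the next scheduled time and queue this script to run again then
next=$(mysql -N -B jobs -e "SELECT DATE_FORMAT(MIN(scheduled_time), '%Y%m%d%H%i') FROM myhappyschedule WHERE scheduled_time > NOW()")
if [ -n "$next" ] && [ "$next" != "NULL" ]; then
    echo "$0" | at -t "$next"
fi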
I came across this question while researching a solution to the queuing problem. For the benefit of anyone else searching, here is my solution.
Combine this with a cron that starts jobs as they are scheduled (even if they are scheduled to run at the same time) and that solves the problem you described as well.
Problem
At most one instance of the script should be running.
We want to queue up requests to process them as fast as possible.
i.e. we need a pipeline to the script.
Solution:
Create a pipeline to any script. Done using a small bash script (further down).
The script can be called as
./pipeline "<any command and arguments go here>"
Example:
./pipeline sleep 10 &
./pipeline shabugabu &
./pipeline single_instance_script some arguments &
./pipeline single_instance_script some other_arguments &
./pipeline "single_instance_script some yet_other_arguments > output.txt" &
...etc.
The script creates a new named pipe for each command, so the above will create the named pipes sleep.pipe, shabugabu.pipe, and single_instance_script.pipe.
In this case, the initial call will start a reader and run single_instance_script with some arguments as its arguments. Once that call completes, the reader will grab the next request off the pipe and execute it with some other_arguments, complete, grab the next, etc...
This script will block requesting processes, so call it as a background job (& at the end) or as a detached process with at (at now <<< "./pipeline some_script").
#!/bin/bash -Eue

# Using the command name as the pipeline name
pipeline=$(basename $(expr "$1" : '\(^[^[:space:]]*\)')).pipe
is_reader=false

function _pipeline_cleanup {
    if $is_reader; then
        rm -f $pipeline
    fi
    rm -f $pipeline.lock
    exit
}
trap _pipeline_cleanup INT TERM EXIT

# Dispatch/initialization section, critical
lockfile $pipeline.lock
if [[ -p $pipeline ]]
then
    # A reader already exists: hand this request over to it and exit
    echo "$*" > $pipeline
    exit
fi

# No reader yet: become the reader, create the pipe, and queue our own request
is_reader=true
mkfifo $pipeline
echo "$*" > $pipeline &
rm -f $pipeline.lock

# Reader section: execute queued requests one at a time, in order
while read command < $pipeline
do
    echo "$(date) - Executing $command"
    ($command) &> /dev/null
done