How to generate message boxes to display during and at the conclusion of a batch file process

I have created a batch file that generates a file listing, writes it to a text file, and then displays that text file in a browser. Since the listing takes a while to generate, I want a message box to display during processing (and ideally another at the end indicating that the process completed) before the listing opens in the browser.

Place a line at the beginning of your script and another at the end:
echo "Entry"
echo "Exit"
Have the output sent to a log file which updates as the script runs. How you do this will differ depending on your language, but it is fairly easy to do.
As a separate process, check the log for Entry and Exit. In Bash it would be:
grep_output=`grep Entry <file>`
if [ "$grep_output" != "" ]; then
    entryoutput=true
else
    entryoutput=false
fi
grep_output=`grep Exit <file>`
if [ "$grep_output" != "" ]; then
    exitoutput=true
else
    exitoutput=false
fi
# Keep waiting while the script has started but not yet finished.
while :; do
    if [ "$entryoutput" = true ] && [ "$exitoutput" = false ]; then
        sleep 10
        grep -q Exit <file> && exitoutput=true
    else
        break
    fi
done
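To tie it together, a minimal sketch, assuming the slow job lives in generate_listing.sh and the watcher loop above is saved as monitor.sh (both names are illustrative, not from the original post):
# run the slow job in the background, logging its Entry/Exit markers
./generate_listing.sh > process.log 2>&1 &
# the watcher loop above polls process.log and returns once Exit appears
./monitor.sh process.log
# stand-in for the completion message box
echo "Processing complete"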

Related

phpseclib: "CAT" command works randomly

I have a script that fetches data from a site.
Basically it is divided into two sections:
1. Execute commands on a remote machine and save the output in a file.
2. Read the contents of that file.
For some reason it only works from time to time. Section 1 always works (I checked the remote machine and found the files). The problem is related to cat.
I've added to my code the option to dump the results of the "cat" command to a file.
Sometimes it has info, sometimes it doesn't. However, the file is always created!
The nodes I'm querying are the same. Execution of Section 1 on the remote server takes 11-12 secs.
Thanks beforehand.
$ssh->exec("rm toolkit/mybatch/$newfileid");
$ssh->exec("mobatch $newsiteid 'lt all;ue print -admitted;' toolkit/mybatch");
$ssh->setTimeout(15);
echo $ssh->exec('cat ' . escapeshellarg("toolkit/mybatch/$newfileid") . '| grep -A 10 \'$ ue print \' > toolkit/mybatch/traffic.txt');
$traffic = $ssh->exec("cat toolkit/mybatch/traffic.txt");
$traffic = substr($traffic,21,-16);
$ssh->disconnect();
echo $traffic;
I've updated the code above; it worked several times, but after deleting the old files it only creates "traffic.txt", and sometimes that file has info in it, sometimes not.
Also, the file "traffic.txtescapeshellarg" is not created anymore. So I was forced to go back to my previous solution and read "traffic.txt".
So, the solution was very simple. All I needed to do was add a timeout.
The final code:
$ssh->exec("rm toolkit/traffic/$newfileid");
$ssh->setTimeout(0);
$ssh->exec("mobatch $newsiteid 'lt all;ue print -admitted;' toolkit/traffic");
sleep(8);
$ssh->exec('cat ' . escapeshellarg("toolkit/traffic/$newfileid") . '| grep -A 10 \'$ ue print \' > toolkit/traffic/traffic.txt');
$traffic = $ssh->exec("cat toolkit/traffic/traffic.txt");
$traffic = substr($traffic,21,-83);
echo $traffic;

Ghostscript for PS integrity test: terminate at EOF, return error unless stack is empty

To test the integrity of PostScript files, I'd like to run Ghostscript in the following way:
Return 1 (or other error code) on error
Return 0 (success) at EOF if stack is empty
Return 1 (or other error code) otherwise
I could run gs in the background, and use a timeout to force termination if gs hangs with items left on the stack. Is there an easier solution?
Ghostscript won't hang if you send files as input (unless you write a program which enters an infinite loop or otherwise fails to reach a halting state). Having items on any of the stacks won't cause it to hang.
On the other hand, it won't give you an error if a PostScript program leaves operands on the operand stack (or dictionaries on the dictionary stack, clips on the clip stack or gstates on the graphics state stack). This is because that's not an error, and since PostScript interpreters normally run in a job server loop, it's not a problem either. Terminating the job returns control to the job server loop, which does a save and restore around the whole job, thereby clearing up anything left behind.
I'd suggest that if you really want to do this, you need to adopt the same approach: write a PostScript program which executes the PostScript program you want to 'test', then checks the operand stack (and other stacks if required) to see if anything is left. Note that you will want to execute the test program in a stopped context, as an error in the course of the program could clearly leave stuff lying around.
Ghostscript returns 0 on a clean exit and a value less than 0 for errors, if I remember correctly. You would need to use signalerror in your test framework in order to raise an error if items are left at the end of a program.
[EDIT]
Anything supplied to Ghostscript on the command line by either -s or -d is defined in systemdict, so if we do -sInputFileName=/test.pdf then we will find in systemdict a key /InputFileName whose value is a string with the contents (/test.pdf). We can use that to pass the filename to our program.
The stopped operator takes an executable array as an argument, and returns either true or false depending on whether an error occurred while executing the array (3rd Edition PLRM, p 697).
So we need to run the program contained in the filename we've been given, and do it in a 'stopped' context. Something like this:
{InputFileName run} stopped
{
  (Error occurred\n) print flush
  %% Potentially check $error for more information.
}{
  (program terminated normally\n) print flush
  %% Here you could check the various stacks
} ifelse
The following, based 90% on KenS's answer, is 99% satisfactory:
Program checkIntegrity.ps:
{Script run} stopped
{
  (\n===> Integrity test failed: ) print Script print ( has error\n\n) print
  handleerror
  (ignore this error which only serves to force a return value of 1) /syntaxerror signalerror
}{
  % script passed, now check the stack
  count dup 0 eq {
    pop (\n===> Integrity test passed: ) print Script print ( terminated normally\n\n) print
  } {
    (\n===> Integrity test failed: ) print Script print ( left ) print
    3 string cvs print ( item(s) on stack\n\n) print
    Script /syntaxerror signalerror
  } ifelse
} ifelse
quit
Execute with
gs -q -sScript=CodeToBeChecked.ps checkIntegrity.ps ; echo $?
For the last 1% of satisfaction I would need a replacement for
(blabla) /syntaxerror signalerror
It forces an exit with return code 1, but it is very verbose and distracts from the actual error in the checked script that is reported by handleerror. A cleaner way to exit(1) would therefore be welcome.
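One possibility, though it depends on your Ghostscript build: Ghostscript has historically exposed a non-standard internal operator .quit that pops an exit code off the stack, so the whole line could in principle be replaced by 1 .quit to exit quietly with status 1. Treat that as an assumption to verify against your version; newer releases restrict access to internal operators (notably with -dSAFER), in which case the signalerror trick remains the portable choice.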

hide error messages in dcl script

I have a test script I'm running that generates some errors, shown below; I expect these errors. Is there any way I can prevent them from showing on the screen, however? I use
$ write sys$output
to display a message when an expected error occurs.
I tried to use
$ DEFINE SYS$ERROR ERROR.LOG
but this redirected my entire error output to that log. If this is the correct way to handle it, can I undo it at the end of my script somehow?
[error example]
%DCL-E-OPENIN, error opening TEST$DISK:[AAA]NOTTHERE.TXT; as input
-RMS-E-FNF, file not found
%DCL-E-OPENIN, error opening TEST$DISK:[AAA]NOTTHERE.TXT; as input
-RMS-E-FNF, file not found
%DCL-W-UNDFIL, file has not been opened by DCL - check logical name
DEFINE/USER creates a logical name that disappears when the next image exits.
So if you use it just before a command, just to protect that command, then fine.
Otherwise I would prefer SET MESSAGE to control the output.
And of course you want to grab $STATUS and verify it after the command, checking for success or for the expected error and reporting any unexpected error.
Better still... if you expect certain error conditions to occur,
then why not test for them?
For example:
$ file = F$SEARCH("TEST$DISK:[AAA]NOTTHERE.TXT")
$ IF file .NES. "" THEN TYPE 'file'
Cheers,
Hein
To suppress error messages inside a script, try this command:
$ DEFINE/USER SYS$ERROR NL:
NL: is a null device, so you don't see any error messages displayed on your terminal.
Good luck.
This works interactively and in batch.
$ SET MESSAGE /NOTEXT /NOSEV /NOFAC /NOID
$ <DCL_Command>
$ SET MESSAGE /TEXT /SEV /FAC /ID

Bad spawn_id while executing expect command

I am writing a script that will copy Valgrind onto whatever shelf that we enter on the command line. The syntax is as follows:
vgrindCopy [shelf number]
For some reason, the files will copy over without any issue, but after the copy completes the following error is observed:
bad spawn_id (process died earlier?)
while executing
"expect "#""
Here is a copy of the relevant code:
function login_shelf {
    expect -c "
        set timeout 15
        spawn $1
        expect \"password:\"
        send \"$PW\r\"
        expect \"#\"
        sleep 1
        exit
    "
}
# login and make the valgrind directory at /sfs/software/shelf/current
set -- /opt/swe/tools/ext/gnu/valgrind-3.7.0/i686-linux2.6/lib/valgrind/*
login_shelf "/opt/corp/projects/shelftools/bin/app rsync -Lau $* $shelf:/shelf/valgrind"
After playing around with the code, I found that if I remove the line "expect \"#\"", then the program doesn't copy any of the files over anymore. What's odd as well is that I see the issue when I run the script, but a co-worker does not.
Has anyone had a similar issue and determined the cause? Any help would be greatly appreciated as always!
Your code spawns the rsync and, at the expect \"#\", waits for rsync to output a #, which it never does; rsync eventually exits, and expect reports the error.
When you remove the expect \"#\" the expect script exits, terminating the rsync.
Instead of expect \"#\" you should wait for rsync to exit:
expect eof
wait
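Applied to the function above, a minimal sketch of that fix (keeping the original names; the relaxed timeout is my assumption, since a large copy can easily exceed 15 seconds):
function login_shelf {
    expect -c "
        set timeout -1
        spawn $1
        expect \"password:\"
        send \"$PW\r\"
        # wait for rsync itself to finish rather than for a prompt
        expect eof
        wait
    "
}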

cron script to act as a queue OR a queue for cron?

I'm betting that someone has already solved this, and maybe I'm using the wrong search terms for Google to tell me the answer, but here is my situation.
I have a script that I want to run, but only when scheduled and only one at a time. (The script can't run simultaneously with itself.)
Now the sticky part: I have a table called "myhappyschedule" which has the data I need and the scheduled time. This table can have multiple scheduled times, even at the same time, and each one would run this script. So essentially I need a queue of each time the script fires, and each run needs to wait for the previous one to finish. (Sometimes the script takes just a minute to execute; sometimes it takes many, many minutes.)
What I'm thinking of doing is making a script that checks myhappyschedule every 5 min, gathers up those that are scheduled, and puts them into a queue where another script can execute each 'job' or occurrence in the queue in order. All of which sounds messy.
To make this longer: I should say that I'm allowing users to schedule things in myhappyschedule, not edit crontab.
What can be done about this? File locks and scripts calling scripts?
add a column exec_status to myhappytable (maybe also time_started and time_finished, see pseudocode)
run the following cron script every x minutes
pseudocode of cron script:
[create/check pid lock (optional, but see "A potential pitfall" below)]
get number of rows from myhappytable where (exec_status == executing_now)
if it is > 0, exit
begin loop
get one row from myhappytable
where (exec_status == not_yet_run) and (scheduled_time <= now)
order by scheduled_time asc
if no such row, exit
set row exec_status to executing_now (maybe set time_started to now)
execute whatever command the row contains
set row exec_status to completed
(maybe also store the command output/return as well, set time_finished to now)
end loop
[delete pid lock file (complementary to the starting pid lock check)]
This way, the script first checks that no command is running, then runs the first not-yet-run command, and repeats until there are no more commands to be run at the given moment. Also, you can see which command is executing by querying the database.
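For concreteness, here is a minimal Bash rendering of that pseudocode. It assumes, purely for illustration, a sqlite3 database queue.db whose myhappytable has columns id, command, scheduled_time, exec_status, time_started and time_finished; none of these names come from the original post.
#!/bin/bash
DB=queue.db

# bail out if a command is already marked as running
running=$(sqlite3 "$DB" "SELECT COUNT(*) FROM myhappytable WHERE exec_status='executing_now';")
[ "$running" -gt 0 ] && exit 0

while :; do
    # oldest due row that has not yet run
    id=$(sqlite3 "$DB" "SELECT id FROM myhappytable WHERE exec_status='not_yet_run' AND scheduled_time <= datetime('now') ORDER BY scheduled_time LIMIT 1;")
    [ -z "$id" ] && exit 0
    cmd=$(sqlite3 "$DB" "SELECT command FROM myhappytable WHERE id=$id;")
    sqlite3 "$DB" "UPDATE myhappytable SET exec_status='executing_now', time_started=datetime('now') WHERE id=$id;"
    eval "$cmd"
    sqlite3 "$DB" "UPDATE myhappytable SET exec_status='completed', time_finished=datetime('now') WHERE id=$id;"
done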
A potential pitfall: if the cron script is killed, a scheduled task will remain in the "executing_now" state. That's what the pid lock at the beginning and end is for: to see whether the cron script terminated properly. Pseudocode of the create/check pidlock:
if exists pidlockfile then
    check if process id given in file exists
    if not exists then
        update myhappytable set exec_status = error_cronscript_died_while_executing_this
            where exec_status == executing_now
        delete pidlockfile
    else (previous instance still running)
        exit
    endif
endif
create pidlockfile containing cron script process id
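Continuing the sqlite3 sketch above, the same pid-lock check in Bash (the lock path is illustrative):
LOCKFILE=/tmp/myhappyscheduler.pid
if [ -f "$LOCKFILE" ]; then
    if kill -0 "$(cat "$LOCKFILE")" 2>/dev/null; then
        exit 0    # previous instance still running
    fi
    # previous run died mid-task: flag the orphaned row(s), clear the stale lock
    sqlite3 "$DB" "UPDATE myhappytable SET exec_status='error_cronscript_died_while_executing_this' WHERE exec_status='executing_now';"
    rm -f "$LOCKFILE"
fi
echo $$ > "$LOCKFILE"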
You can use the at(1) command inside your script to schedule its next run. Before it exits, it can check myhappyschedule for the next run time. You don't need cron at all, really.
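In its simplest form (the interval is illustrative; a real version would compute the next run time from myhappyschedule), the tail of the script could re-arm itself like this:
# reschedule this script via at(1) before exiting
echo "$0" | at now + 5 minutes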
I came across this question while researching a solution to the queuing problem. For the benefit of anyone else searching, here is my solution.
Combine this with a cron that starts jobs as they are scheduled (even if they are scheduled to run at the same time) and it solves the problem you described as well.
Problem:
At most one instance of the script should be running.
We want to queue up requests and process them as fast as possible.
In other words, we need a pipeline to the script.
Solution:
Create a pipeline to any script. Done using a small bash script (further down).
The script can be called as
./pipeline "<any command and arguments go here>"
Example:
./pipeline sleep 10 &
./pipeline shabugabu &
./pipeline single_instance_script some arguments &
./pipeline single_instance_script some other_arguments &
./pipeline "single_instance_script some yet_other_arguments > output.txt" &
...and so on.
The script creates a new named pipe for each command. So the above will create named pipes: sleep, shabugabu, and single_instance_script
In this case the initial call will start a reader and run single_instance_script with some arguments as arguments. Once the call completes, the reader will grab the next request off the pipe, execute it with some other_arguments, complete, grab the next, and so on...
This script will block requesting processes so call it as a background job (& at the end) or as a detached process with at (at now <<< "./pipeline some_script")
#!/bin/bash -Eue
# Using the command name as the pipeline name
pipeline=$(basename $(expr "$1" : '\(^[^[:space:]]*\)')).pipe
is_reader=false

function _pipeline_cleanup {
    # only the reader owns the pipe; everyone cleans up the lock
    if $is_reader; then
        rm -f $pipeline
    fi
    rm -f $pipeline.lock
    exit
}
trap _pipeline_cleanup INT TERM EXIT

# Dispatch/initialization section, critical
# (lockfile(1) ships with procmail; it blocks until the lock can be created)
lockfile $pipeline.lock
if [[ -p $pipeline ]]
then
    # a reader already exists: hand the request over and leave
    echo "$*" > $pipeline
    exit
fi

# no reader yet: become the reader and feed it our own request first
is_reader=true
mkfifo $pipeline
echo "$*" > $pipeline &
rm -f $pipeline.lock

# Reader section: pull requests off the pipe and run them one at a time
while read command < $pipeline
do
    echo "$(date) - Executing $command"
    ($command) &> /dev/null
done