Tandem (Guardian OS) scheduler

I have an assignment to run a command on Tandem periodically.
I've worked on Windows and Unix before and know that those OSes have their own scheduled tasks, but I cannot find one on Tandem.
I asked HPE support; they mentioned that I must buy a tool named "NetBatch" to create a schedule.
For now I've come up with a solution of creating a job that runs commands like this:
1. Run command
2. Wait
3. Run command
4. Wait
Does anyone here have experience with scheduled tasks on Tandem? Please advise.
Thanks

You can create a TACL script with a loop: add all the commands you want inside that loop, with #DELAY at the end of the loop so that the script waits until the next iteration.
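A rough, untested sketch of such a loop (the command and the interval are placeholders; #DELAY counts time in centiseconds, so 30000 is five minutes):
?TACL MACRO
[#LOOP |DO|
  RUN $vol.subvol.mycommand  == placeholder for your command
  #DELAY 30000               == 30000 centiseconds = 5 minutes
|UNTIL| 0]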
To add persistence to your script, you can configure it under Pathway as below:
RESET SERVER
SET SERVER MAXSERVERS 1
SET SERVER NUMSTATIC 1
SET SERVER PROGRAM $SYSTEM.SYS01.TACL
SET SERVER TMF OFF
SET SERVER IN $receive
SET SERVER ASSIGN TACLCSTM, $vol.subvol.taclin ==The script that you want to execute
SET SERVER OUT $vol.subvol.uroutfil ==Output printed from your script
ADD SERVER mytacl

You could switch from Guardian to OSS and use crontab on Tandem's OSS personality.
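If cron is available in your OSS environment, a typical crontab entry to run a script every day at 07:00 would look like this (paths are placeholders):
0 7 * * * /home/myuser/bin/myjob.sh >> /home/myuser/logs/myjob.log 2>&1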

Yes, you can insert a wait between two TACL (Tandem Advanced Command Language) commands by entering some other command, or by using something from history, like opening files.
dsply pr, prc310
cmprfile -28, today, range
cmprtime 00,23
dsply pr, prc310, diff
NOTE: The last command executes with a time difference of 3 seconds, which delays the simultaneous commands from being executed at the same time. I faced the same situation and used the hack above to get it rectified.

Write to Oracle concurrent request output / log from a SQLPlus program

I have an Oracle concurrent request that calls a SQL*Plus program. The program itself works correctly, but I would like to add some logging information to the concurrent request output/log in EBS.
I have tried a number of variations of:
set heading off
--set pagesize 0 embedded on
set pagesize 50000
set linesize 32767
set feedback off
set verify off
set term off
set echo off
set newpage none
set serveroutput on
dbms_output.enable(1000000);
--prepare data
EXECUTE program (&1,&2,&3,&4,&5);
--extract data
#"path/file.SQL";
fnd_file.put_line(FND_FILE.LOG,'do some logging here');
fnd_file.put_line(FND_FILE.OUTPUT,'do some logging here');
/
But everything I've tried so far results in either:
no logging added to the request output or log
no request output whatsoever
errors like:
SP2-0734: unknown command beginning "dbms_outpu..." - rest of line ignored.
and
PLS-00103: Encountered the symbol "ENABLE" when expecting one of the following: := . ( # % ;
Is it possible to write to the request output or log from a SQLPlus script that is called from concurrent manager?
First of all, your SQL*Plus script does not even run, quite apart from your attempts at logging:
dbms_output enable(...) is missing a dot ('.').
Your anonymous PL/SQL block has no end; statement.
@"path/file.SQL" is a SQL*Plus command; it cannot be embedded in an anonymous PL/SQL block.
Aside from those basic problems, FND_FILE.PUT_LINE is only for PL/SQL concurrent programs. That is, concurrent programs whose executable points to a PL/SQL package procedure and not a .sql file under $APPL_TOP.
For SQL*Plus concurrent programs, i.e., running a .sql file under $APPL_TOP, FND_FILE.PUT_LINE does not work. Instead, your SQL*Plus output is automatically written to the request output. There is no standard way to write to the request log.
If you really need to write to the request log, you could maybe call FND_FILE.PUT_NAMES to cause FND_FILE.PUT_LINE to write to temporary files that you name. Then, knowing the concurrent request ID and the logic Oracle EBS uses to locate output and log files, call FND_FILE.CLOSE and use a host command to move the custom-named files to the actual locations. That might work.
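A rough sketch of that idea (the file names and directory here are placeholders; the directory must be one the database can write to, e.g. listed in utl_file_dir):
BEGIN
   -- Route FND_FILE output to custom-named files first
   fnd_file.put_names('myreq.log', 'myreq.out', '/usr/tmp');
   fnd_file.put_line(fnd_file.log, 'do some logging here');
   fnd_file.close;
   -- A host command would then have to copy /usr/tmp/myreq.log onto the
   -- request's real log file, located via the concurrent request ID.
END;
/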
It'd be much better to redo your concurrent program as a PL/SQL package. Then FND_FILE works just fine. If you know how to call Java from the database, there is very little you can do in a .sql script that you cannot do in a PL/SQL package.
I have not written a .sql concurrent program in years, and I write concurrent programs all the time.
I have resolved this problem. The solution is incredibly simple - and now I'm bent out of shape because it took so long to realize.
Step 1 - SET ECHO ON
Step 2 - PROMPT whatever you want written to concurrent request output
The following sample writes 'Output is written to this folder' to the concurrent request output.
set heading off
--set pagesize 0 embedded on
set pagesize 50000
set linesize 32767
set feedback off
set verify off
set term off
set echo on
set newpage none
set serveroutput on
prompt Output is written to this folder
--prepare data
EXECUTE program (&1,&2,&3,&4,&5);
--extract data
#"path/file.SQL";
/
This is exactly what I was looking for. Maybe this will be useful to someone in another galaxy.
If this is for testing/debugging purposes, you can specify the location of the log and output files with the routine FND_FILE.PUT_NAMES; as soon as you have logged all the required information, close the files with FND_FILE.CLOSE.
As Matthew mentioned, logging in SQL*Plus executables doesn't work well. If you can't move your code to a PL/SQL stored procedure for some reason, a Host script might work for you instead. From there, you can execute SQL, e.g. sqlplus -s $FCP_LOGIN ..., and write log information as required.
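A minimal sketch of such a Host program (untested; $FCP_LOGIN is supplied by the concurrent manager, and the script path is a placeholder):
#!/bin/sh
# Hypothetical Host concurrent program wrapper.
echo "Report started: $(date)"    # stdout should surface in the request log
sqlplus -s "$FCP_LOGIN" @/path/to/report.sql
echo "Report finished: $(date)"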
If you just need to prepare data with PL/SQL and then spool it to CSV via SQL, you can use our company's Blitz Report instead, which does this more conveniently and is free for such use. It also uses a Host type executable and calls sqlplus from there.

Monit for "cron-like" tasks

I have some batch-type jobs that I would like to move from cron to Monit, but am struggling to get them to work properly. These scripts typically run once a day, but on occasion have to be re-run later in the day. The goal is to take advantage of the Monit & M/Monit front-ends to re-run them, as well as to be alerted on failure, in similar fashion to other things under Monit.
The below was my first attempt. I know the docs say to use a range/wildcard for the minute field, but I have my Monit daemon set to cycle every 20 seconds, so I thought I'd be able to get away with this.
check program test.sh
    with path "/usr/local/bin/test.sh"
    every "0 7 * * *"
    if status != 0 then alert
This does not seem to work: it picks up the exit status of the program on the NEXT run. So I have a zombie process sitting around until 7am the next day, at which time I see the status from the previous day's run.
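For reference, the range form the docs recommend would presumably look like this (untested; the 0-2 minute range just gives the 20-second cycle a window to land in):
check program test.sh
    with path "/usr/local/bin/test.sh"
    every "0-2 7 * * *"
    if status != 0 then alert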
It would be nice if this ran immediately, or if there were a way to schedule something as "batch" that would run only once when started (either from the command line or the web GUI). Example below.
check program test.sh
    with path "/usr/local/bin/test.sh"
    mode batch
    if status != 0 then alert
Is it possible to do what I want? Can a 'check program' be scheduled that will only run one time when started or using the 'every [cron]' type syntax supported by monit?
TIA for any suggestions.
The latest version of Monit (5.18) now picks up the exit status on the next daemon cycle, not on the next scheduled execution of the program as in the past (which might not be until the next day).

Running a Pentaho command from a scheduler (Tidal)

I am trying to execute a Pentaho job on Windows through Tidal, but Tidal does not execute the job at all. When I run it separately at the CMD prompt, it executes.
Below is the command used; it does not read the parameters assigned to it.
Kindly suggest what has to be done.
E:\apps\Pentaho\data-integration\kitchen.bat /rep:Merlin_Repository /user:admin /pass:admin /dir=wwclaims /job=J-CLAIMS /level:Basic
You're missing a slash in the /dir: option, and you must use ':', not '=', in your command.
For example, in a Windows batch script:
@echo off
:: DATETIME is assumed to be set earlier in the script
SET LOG_PATHFILE=C:\logs\KITCHEN_name_of_job_%DATETIME%.log
call Kitchen.bat /rep:"name_repository" /job:"name_of_job" /dir:/foo/sub_foo1 /user:dark /pass:vador /level:Detailed >> %LOG_PATHFILE%
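Applied to your original command, that would become (the /dir:/wwclaims path is a guess at your repository layout):
E:\apps\Pentaho\data-integration\kitchen.bat /rep:Merlin_Repository /user:admin /pass:admin /dir:/wwclaims /job:J-CLAIMS /level:Basic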

ASE isql output to file occasionally empty or blank

Given this Unix script, which is a scheduled batch run:
isql -U$USR -S$SRVR -P$PWD -w2000 < $SCRIPTS/sample_report.sql > $TEMP_DIR/sample_report.tmp_1
sed 's/-\{3,\}//g' $TEMP_DIR/sample_report.tmp_1 > $TEMP_DIR/sample_report.htm_1
uuencode $TEMP_DIR/sample_report.htm_1 sample_report.xls > $TEMP_DIR/sample_report.mail_1
mailx -s "Daily Sample Report" email@example.com < $TEMP_DIR/sample_report.mail_1
There are occasionally cases where the sample_report.xls attached in the mail, is empty, zero lines.
I have ruled out the following:
not a command-processing timeout: by adding -t30 to isql, I get the xls and it contains the error, not empty
not a SQL error: by forcing an error in the SQL, I get the xls and it contains the error, not empty
not sure about login timeout: by adding -l1 it does not time out, but I can't specify a number lower than 1 second, so I can't say
I cannot reproduce this, as I do not know the cause. Has anyone else experienced this, or does anyone have a way to address it? Any suggestions on how to find the cause? Is it Unix or Sybase isql?
I found the cause. This report is scheduled and takes a long time to generate, and I found that other scheduled scripts have this line of code:
rm -f $TEMP_DIR/*
If this long-running report overlaps with one of the scheduled scripts containing the line above, the .tmp_1 file can be deleted, and hence be blank, by the time it is mailed. I replicated this by manually deleting the .tmp_1 while the report was still writing SQL output into it.
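One hedged way to avoid that race is to give each run of the report its own scratch directory instead of sharing $TEMP_DIR. A sketch, assuming mktemp is available:
#!/bin/sh
# Per-run scratch dir, so a concurrent "rm -f $TEMP_DIR/*" in another
# script cannot delete this report's intermediate files.
WORK_DIR=$(mktemp -d /tmp/sample_report.XXXXXX) || exit 1
trap 'rm -rf "$WORK_DIR"' EXIT
isql -U$USR -S$SRVR -P$PWD -w2000 < $SCRIPTS/sample_report.sql > $WORK_DIR/sample_report.tmp_1
sed 's/-\{3,\}//g' $WORK_DIR/sample_report.tmp_1 > $WORK_DIR/sample_report.htm_1
uuencode $WORK_DIR/sample_report.htm_1 sample_report.xls > $WORK_DIR/sample_report.mail_1
mailx -s "Daily Sample Report" email@example.com < $WORK_DIR/sample_report.mail_1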

cron script to act as a queue OR a queue for cron?

I'm betting that someone has already solved this, and maybe I'm using the wrong search terms for Google to tell me the answer, but here is my situation.
I have a script that I want to run, but only when scheduled and only one at a time (the script can't run simultaneously with itself).
Now the sticky part: say I have a table called "myhappyschedule" which has the data I need and the scheduled time. This table can have multiple scheduled times, even at the same time, and each one would run this script. So essentially I need a queue of each time the script fires, and each run has to wait for the one before it to finish. (Sometimes the script takes just a minute to execute, sometimes many, many minutes.)
What I'm thinking about doing is making a script that checks myhappyschedule every 5 minutes and gathers up those that are scheduled, putting them into a queue where another script can execute each 'job' or occurrence in the queue in order. All of which sounds messy.
To make this longer: I should say that I'm allowing users to schedule things in myhappyschedule, not edit crontab.
What can be done about this? File locks and scripts calling scripts?
Add a column exec_status to myhappytable (maybe also time_started and time_finished; see the pseudocode below).
Run the following cron script every x minutes.
Pseudocode of the cron script:
[create/check pid lock (optional, but see "A potential pitfall" below)]
get number of rows from myhappytable where (exec_status == executing_now)
if it is > 0, exit
begin loop
    get one row from myhappytable
        where (exec_status == not_yet_run) and (scheduled_time <= now)
        order by scheduled_time asc
    if no such row, exit
    set row exec_status to executing_now (maybe set time_started to now)
    execute whatever command the row contains
    set row exec_status to completed
        (maybe also store the command output/return as well, set time_finished to now)
end loop
[delete pid lock file (complementary to the starting pid lock check)]
This way, the script first checks that none of the commands is running, then runs the first not-yet-run command, and continues until there are no more commands to be run at the given moment. Also, you can see which command is executing by querying the database.
A potential pitfall: if the cron script is killed, a scheduled task will remain in the "executing_now" state. That's what the pid lock at the beginning and end is for: to see whether the cron script terminated properly. Pseudocode of the create/check pidlock:
if exists pidlockfile then
    check if process id given in file exists
    if not exists then
        update myhappytable set exec_status = error_cronscript_died_while_executing_this
            where exec_status == executing_now
        delete pidlockfile
    else (previous instance still running)
        exit
    endif
endif
create pidlockfile containing cron script process id
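For concreteness, here is one way the pseudocode above might be rendered in bash. This is only a sketch: it assumes a MySQL backend, a database named sched, and a myhappytable with the columns described; those names are placeholders, not from the question.
#!/bin/bash
# Sketch of the cron script above. Assumes: mysql client, database "sched",
# and myhappytable(id, command, scheduled_time, exec_status, time_started, time_finished).
DB="mysql -N -B sched -e"
LOCK=/var/run/happycron.pid
# create/check pid lock
if [ -f "$LOCK" ]; then
    if kill -0 "$(cat "$LOCK")" 2>/dev/null; then
        exit 0                      # previous instance still running
    fi
    # previous instance died mid-task: flag orphaned rows
    $DB "UPDATE myhappytable SET exec_status='error_cron_died' WHERE exec_status='executing_now'"
    rm -f "$LOCK"
fi
echo $$ > "$LOCK"
trap 'rm -f "$LOCK"' EXIT
# if anything is still marked executing_now, leave it alone and exit
busy=$($DB "SELECT COUNT(*) FROM myhappytable WHERE exec_status='executing_now'")
[ "$busy" -gt 0 ] && exit 0
# main loop: run due commands one at a time, oldest first
while :; do
    row=$($DB "SELECT id, command FROM myhappytable WHERE exec_status='not_yet_run' AND scheduled_time <= NOW() ORDER BY scheduled_time ASC LIMIT 1")
    [ -z "$row" ] && break
    id=${row%%$'\t'*}
    cmd=${row#*$'\t'}
    $DB "UPDATE myhappytable SET exec_status='executing_now', time_started=NOW() WHERE id=$id"
    eval "$cmd"
    $DB "UPDATE myhappytable SET exec_status='completed', time_finished=NOW() WHERE id=$id"
done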
You can use the at(1) command inside your script to schedule its next run. Before it exits, it can check myhappyschedule for the next run time. You don't need cron at all, really.
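A sketch of that pattern (next_run_time is a hypothetical helper that queries myhappyschedule and prints a time spec at(1) understands, e.g. "now + 5 minutes" or "07:00 tomorrow"):
#!/bin/sh
# The script re-schedules itself with at(1) before exiting.
# ... do the real work here ...
NEXT=$(next_run_time)                 # hypothetical query against myhappyschedule
[ -n "$NEXT" ] && echo "$0" | at $NEXT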
I came across this question while researching a solution to the queuing problem. For the benefit of anyone else searching, here is my solution.
Combine this with a cron job that starts jobs as they are scheduled (even if they are scheduled to run at the same time), and it solves the problem you described as well.
Problem:
At most one instance of the script should be running.
We want to queue up requests and process them as fast as possible.
I.e., we need a pipeline to the script.
Solution:
Create a pipeline to any script. Done using a small bash script (further down).
The script can be called as
./pipeline "<any command and arguments go here>"
Example:
./pipeline sleep 10 &
./pipeline shabugabu &
./pipeline single_instance_script some arguments &
./pipeline single_instance_script some other_arguments &
./pipeline "single_instance_script some yet_other_arguments > output.txt" &
...etc.
The script creates a new named pipe for each command, so the above will create the named pipes sleep.pipe, shabugabu.pipe, and single_instance_script.pipe.
In this case, the initial call will start a reader and run single_instance_script with "some arguments" as arguments. Once the call completes, the reader will grab the next request off the pipe and execute it with "some other_arguments", complete, grab the next, and so on.
This script will block the requesting process, so call it as a background job (& at the end) or as a detached process with at (at now <<< "./pipeline some_script").
#!/bin/bash -Eue

# Use the command name (the first word of $1) as the pipeline name
pipeline=$(basename $(expr "$1" : '\(^[^[:space:]]*\)')).pipe
is_reader=false

function _pipeline_cleanup {
    if $is_reader; then
        rm -f $pipeline
    fi
    rm -f $pipeline.lock
    exit
}
trap _pipeline_cleanup INT TERM EXIT

# Dispatch/initialization section, critical
# (lockfile(1) ships with procmail)
lockfile $pipeline.lock
if [[ -p $pipeline ]]
then
    # A reader already exists: queue the request and leave
    echo "$*" > $pipeline
    exit
fi

# No reader yet: become the reader
is_reader=true
mkfifo $pipeline
echo "$*" > $pipeline &
rm -f $pipeline.lock

# Reader section: execute queued commands one at a time
while read command < $pipeline
do
    echo "$(date) - Executing $command"
    ($command) &> /dev/null
done