TWS Composer command to extract only active jobs - tivoli-work-scheduler

I have a couple of queries regarding the Tivoli Workload Scheduler composer.
The command below extracts all jobstreams from a workstation irrespective of their status, i.e. both active and draft/inactive ones. How could I extract only the active jobstreams?
create jobstreams.txt from sched=WORKSTATION##
Similarly, I need to extract only the jobs associated with the active jobstreams.
create jobs.txt from jobs=WORKSTATION##
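One possible workaround, since composer's create ... from sched= does not filter by status: extract everything as above, then post-process the output and drop the definitions that carry the DRAFT keyword. A rough sketch in Python, assuming draft jobstreams are marked with a DRAFT line in the composer syntax (file names are just examples):

# Keep only jobstream definitions that do not contain a DRAFT line.
# Assumes jobstreams.txt was produced by "create jobstreams.txt from sched=..."
from pathlib import Path

text = Path("jobstreams.txt").read_text()

active_blocks = []
block = []
for line in text.splitlines(keepends=True):
    block.append(line)
    if line.strip().upper() == "END":          # composer ends each jobstream with END
        if not any(l.strip().upper() == "DRAFT" for l in block):
            active_blocks.append("".join(block))
        block = []

Path("jobstreams_active.txt").write_text("".join(active_blocks))
print(f"Kept {len(active_blocks)} active jobstreams")

The job definitions could then be filtered the same way, keeping only the jobs referenced by the surviving jobstreams.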

How to automate deleting multiple GDG generations of different jobs on a daily basis?

I am working in an environment where batch jobs trigger on a daily basis, named A to Z, 26 jobs in total (just an example).
All these jobs create QUESTION.XXXXXXXX.HELPME.G0002V000 (where XXXXXXXX = the job name, A to Z) once the batch process is completed.
Case 1: QUESTION.XXXXXXXX.HELPME.G0002V000 is deleted automatically if the batch run completes successfully.
Case 2: On the other hand, if the job fails, we need to delete QUESTION.XXXXXXXX.HELPME.G0002V000 manually.
Since we have a huge number of batch jobs and plenty of them fail daily, it is really painful to delete all of them every day.
I want to create a system that takes the failed jobs' GDG generations listed in one PS file or .txt file and deletes them all in one shot, e.g. by submitting one JCL job or executing one REXX component.
Note: I can't code this in my JCL because of a few other constraints.
A couple of options:
Use the backup-delete option in DFDSS (or whatever it is called now). It has the advantage of creating a backup of whatever is deleted, and you can use generic dataset name patterns in DFDSS.
If you reference the GDG base QUESTION.XXXXXXXX.HELPME in JCL, it picks up every generation, and you can use DISP=(OLD,DELETE). (Note: if there is no generation, you get a JCL error.)
Use REXX to check and delete.
To do it in JCL:
//* ALLOCATE A NEW (+1) GENERATION (DISP/SPACE PARAMETERS OMITTED)
//STEP1   EXEC PGM=IEFBR14
//C        DD DSN=QUESTION.XXXXXXXX.HELPME(+1)....
//*
//* DELETE EVERY EXISTING GENERATION VIA THE GDG BASE
//STEP2   EXEC PGM=IEFBR14
//D        DD DSN=QUESTION.XXXXXXXX.HELPME,DISP=(OLD,DELETE)
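If the failed jobs' names are already collected in a .txt file (as described in the question), one way to apply the DISP=(OLD,DELETE) approach in one shot is to generate a single IEFBR14 deck from that list. A rough sketch in Python (run off-host, or with a z/OS Python if available); the input file name, job card and dataset pattern are assumptions for illustration:

# Turn a plain-text list of failed job names into one IEFBR14 job that
# deletes every generation under each job's GDG base via DISP=(OLD,DELETE).
from pathlib import Path

FAILED_JOBS_FILE = Path("failed_jobs.txt")   # one job name (A..Z) per line
JOB_CARD = "//DELGDG   JOB (ACCT),'DELETE FAILED GDGS',CLASS=A,MSGCLASS=X"

def build_delete_job(job_names):
    lines = [JOB_CARD, "//DELSTEP  EXEC PGM=IEFBR14"]
    for i, name in enumerate(job_names, start=1):
        # Referencing the GDG base picks up every generation; DISP=(OLD,DELETE)
        # removes them at step end (JCL error if no generation exists).
        lines.append(
            f"//DD{i:05d}  DD DSN=QUESTION.{name.upper()}.HELPME,DISP=(OLD,DELETE)"
        )
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    jobs = [j.strip() for j in FAILED_JOBS_FILE.read_text().splitlines() if j.strip()]
    Path("delete_gdgs.jcl").write_text(build_delete_job(jobs))
    print(f"Wrote delete_gdgs.jcl with {len(jobs)} DD statements")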

How to monitor Databricks jobs using CLI or Databricks API to get the information about all jobs

I want to monitor the status of the jobs to see whether a job is running over time or has failed. If you have a script or any reference, please help me with this. Thanks.
You can use the databricks runs list command to list all the job runs. This lists every run and its current status: RUNNING/FAILED/SUCCESS/TERMINATED.
If you want to see whether a job is running over, you then have to use the databricks runs get --run-id command to get the metadata for that run. This returns JSON from which you can parse out the start_time and end_time.
# Lists job runs.
databricks runs list
# Gets the metadata about a run in json form
databricks runs get --run-id 1234
Hope this helps get you on track!
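If you want to script the check rather than eyeball the CLI output, here is a minimal sketch against the Jobs REST API 2.0 (the same data the CLI wraps) that flags failed runs and runs exceeding a duration threshold. The host/token environment variables and the 60-minute threshold are assumptions for illustration:

# Flag failed runs and runs that have exceeded a duration threshold.
import os
import time

import requests

HOST = os.environ["DATABRICKS_HOST"]     # e.g. https://<workspace>.cloud.databricks.com
TOKEN = os.environ["DATABRICKS_TOKEN"]   # personal access token
THRESHOLD_MINUTES = 60                   # what "running overtime" means here

resp = requests.get(
    f"{HOST}/api/2.0/jobs/runs/list",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"active_only": "false", "limit": 25},
)
resp.raise_for_status()

now_ms = int(time.time() * 1000)
for run in resp.json().get("runs", []):
    state = run["state"]
    end_ms = run.get("end_time") or now_ms   # end_time is 0 while the run is active
    minutes = (end_ms - run["start_time"]) / 60000

    if state.get("result_state") == "FAILED":
        print(f"FAILED   run {run['run_id']} ({run.get('run_name', '?')})")
    elif minutes > THRESHOLD_MINUTES:
        print(f"OVERTIME run {run['run_id']}: {minutes:.0f} min "
              f"(state {state['life_cycle_state']})")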

Run Snowflake Task when a data share view is refreshed

Snowflake's documentation shows how to have a TASK run on a scheduled basis when inserts/updates/deletes or other DML operations have been run on a table, by creating a STREAM on that specific table.
Is there any way to have a TASK run if a view from an external Snowflake data share is refreshed, i.e. dropped and recreated?
As part of this proposed pipeline, we receive a one-time refresh of a view within a specific time window each day, and the goal is to start a downstream pipeline that runs at most once during that window, when the view is refreshed.
For example, with the following TASK schedule
'USING CRON 0,10,20,30,40,50 8-12 * * MON,WED,FRI America/New_York', the downstream pipeline should run only once every Monday, Wednesday, and Friday between 8 and 12.
Yes; I can point you to the documentation if you would like to check whether this works for the tables you might already have set up:
Is there any way to have a TASK run if a view from an external Snowflake data share is refreshed, i.e. dropped and recreated?
You might be able to create a stored procedure that monitors the existence of the table; I have not tried that before, though, so I will see if I can ask an expert.
Separately, is there any way to guarantee that the task runs at most once on a specific day or other time period?
Yes, you can use the optional CRON schedule parameter to restrict runs to specific days of the week or times. An example:
CREATE TASK delete_old_data
  WAREHOUSE = deletion_wh
  SCHEDULE = 'USING CRON 0 0 * * * UTC'
AS
  DELETE FROM old_data WHERE created_at < DATEADD(day, -30, CURRENT_TIMESTAMP());  -- placeholder body; CREATE TASK requires an AS clause
Reference: https://docs.snowflake.net/manuals/user-guide/tasks.html more specifically: https://docs.snowflake.net/manuals/sql-reference/sql/create-task.html#optional-parameters
A TASK can only be triggered by a calendar schedule, either directly or indirectly via a predecessor TASK being run by a schedule.
Since the tasks are only run on a schedule, they will not run more often than the schedule says.
A TASK can't be triggered by a data share change, so you have to monitor it on a calendar schedule.
This limitation is bound to be lifted at some point, but it is valid as of December 2019.
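Until tasks can react to data-share changes, one workaround is to poll from the client side on your own schedule (e.g. cron) and start the downstream pipeline at most once per refresh. A rough sketch using the snowflake-connector-python package, assuming LAST_ALTERED on the shared view's INFORMATION_SCHEMA entry reflects the provider's drop-and-recreate; the connection parameters, SHARED_DB.PUBLIC.MY_VIEW, the state file and run_downstream_pipeline() are all illustrative:

# Poll LAST_ALTERED on a shared view and trigger the downstream pipeline
# only when it changes compared to the previous poll.
from pathlib import Path

import snowflake.connector

STATE_FILE = Path("last_refresh.txt")   # remembers the previous LAST_ALTERED value

def run_downstream_pipeline():
    print("Shared view was refreshed -- starting downstream pipeline")

def main():
    conn = snowflake.connector.connect(
        account="my_account", user="my_user", password="***", warehouse="my_wh"
    )
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT LAST_ALTERED FROM SHARED_DB.INFORMATION_SCHEMA.VIEWS "
            "WHERE TABLE_SCHEMA = 'PUBLIC' AND TABLE_NAME = 'MY_VIEW'"
        )
        row = cur.fetchone()
    finally:
        conn.close()

    current = str(row[0]) if row else ""
    previous = STATE_FILE.read_text().strip() if STATE_FILE.exists() else ""

    if current and current != previous:
        run_downstream_pipeline()
        STATE_FILE.write_text(current)

if __name__ == "__main__":
    main()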

GitLab-CI: run job only when all conditions are met

In the GitLab CI documentation, I read the following:
In this example, job will run only for refs that are tagged, or if a build is explicitly requested via an API trigger or a Pipeline Schedule:
job:
  # use special keywords
  only:
    - tags
    - triggers
    - schedules
I noticed the documentation uses or instead of and, which means the job runs when any one condition is met. But what if I want to configure a job to run only when all conditions are met, for instance in a Pipeline Schedule and on the master branch?
If your specific question is "how do I run a pipeline on master only when it was scheduled", this should work:
job:
  only:
    - master
  except:
    - triggers
    - pushes
    - external
    - api
    - web
In this example you exclude every pipeline source except schedules, and you only run for the master branch.

Check scheduled job status in SAP BODS

I am new to BODS. At present I have configured a job to execute every 2 minutes to extract transactions from a MySQL server and load them into HANA tables.
But sometimes the data volume in MySQL is too large to transform and load into HANA within 2 minutes; while the job is still executing, the next iteration of the same job starts, which results in a BODS failure.
My question is: is there any option in BODS to check the execution status of the scheduled job between runs?
Please help me out with this.
You can create a control/audit table to keep a history of each run of the BODS job. The table should contain fields such as ExtractionStart, ExtractionEnd, and a run status. You then need to change the job so that it reads the status of the previous run from this table before starting the load-to-HANA data flow, as sketched below. If the previous run has not finished, the job can raise an exception.
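In BODS itself you would implement this with a script object and sql() calls against your control/audit table, but the guard logic is generic. Here is an illustrative sketch of its shape in Python with a local SQLite table; the table and column names are hypothetical:

# Shape of the "has the previous run finished?" guard around each scheduled run.
import sqlite3
from datetime import datetime

conn = sqlite3.connect("control.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS job_audit (
           run_id INTEGER PRIMARY KEY AUTOINCREMENT,
           job_name TEXT,
           extraction_start TEXT,
           extraction_end TEXT,
           status TEXT
       )"""
)

def previous_run_still_active(job_name):
    row = conn.execute(
        "SELECT status FROM job_audit WHERE job_name = ? ORDER BY run_id DESC LIMIT 1",
        (job_name,),
    ).fetchone()
    return row is not None and row[0] == "RUNNING"

def start_run(job_name):
    if previous_run_still_active(job_name):
        # In BODS, raise an exception here so the new iteration stops cleanly.
        raise RuntimeError(f"Previous run of {job_name} has not finished yet")
    cur = conn.execute(
        "INSERT INTO job_audit (job_name, extraction_start, status) VALUES (?, ?, 'RUNNING')",
        (job_name, datetime.now().isoformat()),
    )
    conn.commit()
    return cur.lastrowid

def finish_run(run_id):
    conn.execute(
        "UPDATE job_audit SET extraction_end = ?, status = 'DONE' WHERE run_id = ?",
        (datetime.now().isoformat(), run_id),
    )
    conn.commit()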
Let me know if this has been helpful or if you need more information.