Do restartable sequence jobs in DataStage also rerun job activities that were aborted for reasons other than a sequence failure?

I want to know whether restartable sequence jobs in DataStage also rerun job activities that were aborted, but not because of the sequence failing.

In the Job Properties of a sequence, on the 'General' tab, you can enable "Add checkpoints so sequence is restartable on failure."
If a sequence is made restartable, each of its jobs gets a checkpoint.
When the sequence runs for the first time, it records which jobs have finished successfully. If the sequence is restarted, no matter why it aborted before, it will simply skip the jobs that succeeded in the previous run.
If you need a job to execute every time, even if it already succeeded in the last run, you can disable checkpointing for that job on the "Job" tab of its Job Activity stage by checking the "Do not checkpoint run" checkbox.

I think Justus's answer is right, but with a slight deviation:
I found that if a parallel job has aborted anywhere in a sequence, the controller will attempt to rerun it (provided the Job Activity is set to "Reset if required, then run") when we resume the sequence from the Aborted/Restartable or Stopped/Restartable state.
For example, let's say the job flow is as below:
a -> b -> c -> d -> e
If the sequence is run and
job 'a' finishes,
job 'b' aborts,
job 'c' aborts, and the sequence stops due to user intervention,
then, provided checkpointing was enabled earlier, the restarted sequence will skip job 'a' (already checkpointed), rerun job 'b' and then job 'c', and the flow will resume from there.

Related

How to stop a SQL Agent job (running on an hourly schedule) from executing further once it has succeeded

I have a SQL Agent job that runs on the 7th of each month on an hourly basis. I want to stop it from running further once it has succeeded.
For example, if the job succeeds at 8:00 A.M., I don't want it to run any more until the 7th of the next month. Any scripts would be helpful for this.
I am trying to establish this rule using msdb's sysjobs, and one idea I have is to update the Enabled flag to 0 once the run is complete. When the 7th of the next month arrives, another SQL Agent job could go in and update the flag back to 1 so the job can run again.
I suggest you create a new first job step and, using the answer provided in the question sql-agent-job-last-run-status, check whether the last execution succeeded.
You can then cancel the rest of the run by calling exec msdb.dbo.sp_stop_job.
I use this method in an Availability Group cluster (to check whether SQL Agent is running on the primary, for jobs that require it) and abort the job if not. It works well.
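A minimal sketch of such a first step is below; the job name 'MonthlyLoad' is a placeholder, and the check relies on step_id = 0 / run_status = 1 in msdb.dbo.sysjobhistory marking a successful job outcome:

-- First job step: stop this run if the job already succeeded since the 7th of the current month
DECLARE @cutoff int = YEAR(GETDATE()) * 10000 + MONTH(GETDATE()) * 100 + 7;  -- run_date is stored as yyyymmdd
IF EXISTS (
    SELECT 1
    FROM msdb.dbo.sysjobhistory AS h
    JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
    WHERE j.name = 'MonthlyLoad'   -- placeholder job name
      AND h.step_id = 0            -- job outcome rows
      AND h.run_status = 1         -- 1 = succeeded
      AND h.run_date >= @cutoff
)
    EXEC msdb.dbo.sp_stop_job @job_name = 'MonthlyLoad';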
How about creating a table with a row for the last successful run, and populating it on each successful run?
Then, as the first thing in the job, check that table, and if the last successful run was on or after the 7th, terminate.
Otherwise run the rest of the job.
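A rough sketch of that check, assuming a hypothetical tracking table dbo.JobRunLog(job_name, last_success):

IF NOT EXISTS (
    SELECT 1
    FROM dbo.JobRunLog
    WHERE job_name = 'MonthlyLoad'   -- placeholder job name
      AND last_success >= DATEFROMPARTS(YEAR(GETDATE()), MONTH(GETDATE()), 7)
)
BEGIN
    -- ... run the rest of the job's work here ...
    UPDATE dbo.JobRunLog
    SET last_success = GETDATE()
    WHERE job_name = 'MonthlyLoad';
END;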
The solution that I finally deployed was to add a step to the job that disables it once all preceding steps have succeeded. I added another job to the Agent that re-enables this job on the 1st of each month so that it is ready to be executed. Just throwing this out so that someone else can benefit.
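The two pieces can be as simple as the following calls (the job name is again a placeholder); the first goes in the final step of the hourly job, the second in the monthly re-enabling job:

-- Final step of the hourly job: disable the job after a successful run
EXEC msdb.dbo.sp_update_job @job_name = 'MonthlyLoad', @enabled = 0;

-- Separate job scheduled for the 1st of each month: re-enable it
EXEC msdb.dbo.sp_update_job @job_name = 'MonthlyLoad', @enabled = 1;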

Task in the same Airflow DAG starts before the previous task is committed

We have a DAG whose first task aggregates a table (A) into a staging table (B).
After that, there is a task that reads from the staging table (B) and writes to another table (C).
However, the second task reads from the staging table (B) before it has been fully updated, which causes table C to contain old data, or sometimes to be empty. Airflow still logs everything as successful.
Updating table B is done as (pseudo):
delete from table_b;
insert into table_b
select xxxx from table_a;
Task concurrency: 10
Pool size: 5
max_overflow: 10
Executor: LocalExecutor
Redshift seems to have a commit queue. Could it be that Redshift tells Airflow it has committed while the commit is in fact still sitting in the queue, so the next task reads before the real commit takes place?
We have tried wrapping the update of table B in a transaction as (pseudo):
begin;
delete from table_b;
insert into table_b
select xxxx from table_a;
commit;
But even that does not work. For some reason Airflow manages to start the second task before the first task has fully committed.
UPDATE
It turned out there was a mistake in the dependencies: downstream tasks were waiting for the wrong task to finish.
For future reference, never be 100 % sure you have checked everything. Check and recheck the whole flow.
You can achieve this goal by setting wait_for_downstream to True.
From https://airflow.apache.org/docs/stable/_api/airflow/operators/index.html :
when set to true, an instance of task X will wait for tasks
immediately downstream of the previous instance of task X to finish
successfully before it runs.
You can set this parameter at the default_dag_args level or at the task (operator) level, for example:
default_dag_args = {
    # each task instance waits for the downstream tasks of the previous run's
    # instance of the same task to finish successfully before it starts
    'wait_for_downstream': True,
}

Can we check whether a table in BigQuery is locked or a DML operation is being performed on it?

There is a BigQuery table that has multiple data load/update/delete jobs scheduled against it. Since these are automated jobs, many of them fail due to concurrent update issues.
I need to know whether BigQuery provides a way to check if the table is already locked by a DML operation, and whether we can serialize the queries so that no job fails.
You could use the job ID generated by the client code to keep track of the job status and only begin the next query when that job is done. This and this describe that process.
Alternatively, you could retry the query with exponential backoff a certain number of times, so that a query is not failed outright just because the table is locked.
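If you prefer to inspect in-flight DML from SQL itself rather than from client code, one option (assuming the INFORMATION_SCHEMA.JOBS_BY_PROJECT view and the region qualifier that matches your dataset's location) is something like:

-- List DML jobs that are still pending or running in this project
SELECT job_id, state, statement_type, query
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE state IN ('PENDING', 'RUNNING')
  AND statement_type IN ('INSERT', 'UPDATE', 'DELETE', 'MERGE')
  AND creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 6 HOUR);

Your scheduler could hold back the next job while this returns rows that touch the same table.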

Locking database rows

I have a table in my database with defined jobs. One of the job attributes is status, which can be [waiting, in_progress, done]. To process the jobs I have defined a master-worker relation between two servers, which works in the following way:
The master looks for the first record in the database with status 'waiting' and triggers a worker server to process the job.
The triggered worker sets the job's status to 'in_progress' in one transaction and starts executing the job (in the meantime the master is already looking for the next job with status 'waiting' and triggers another worker).
When the job is done, the worker sets the job's status to 'done'.
This is the perfect-case scenario; however, it might happen that a worker dies during job execution. In that case the job needs to be restarted, but the master has no way of verifying whether a job was done other than checking its status in the database ('done'). So if a worker dies, the job keeps the status 'in_progress' in the database, and the master has no idea that the worker died and that the job needs to be restarted. (We cannot get any feedback from the worker, we cannot ping it, and we cannot find out which job it is currently working on.)
My idea for solving this problem would be:
After the worker has changed the job's status to 'in_progress' (transaction committed), it would open a new transaction holding a lock on that particular job row.
The master, while looking for jobs to start, would look for both 'waiting' and 'in_progress' jobs that are not locked.
If the worker dies, its transaction breaks, releasing the lock on the job record in the database, and the master would reprocess the job.
Now I'm looking for a way to verify that this would indeed work, possibly with a SQL script in SQL Developer (two instances). I would like it to work in the following way:
Instance 1:
open a transaction and create a row lock on row_1
sleep for 5 minutes
release the lock
Instance 2:
open a transaction and look for a row matching the criteria (row_1 and row_2 both match the criteria)
return the selected row
Then I would kill instance 1 to simulate a worker death and run instance 2 again.
Please advise whether my approach is correct.
P.S.
Could you also point me to a good explanation of how I can create the script for instance 1?
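A minimal sketch of the two test sessions, assuming an Oracle database (SQL Developer is mentioned) and a placeholder table JOBS(ID, STATUS); run each block in a separate session:

-- Instance 1 (simulated worker): mark the job and hold a row lock in an open transaction
UPDATE jobs SET status = 'in_progress' WHERE id = 1;
COMMIT;
SELECT * FROM jobs WHERE id = 1 FOR UPDATE;  -- row 1 is now locked by this session
-- leave this session idle; do not commit or roll back

-- Instance 2 (simulated master): pick up only candidates that are not locked
SELECT *
FROM jobs
WHERE status IN ('waiting', 'in_progress')
FOR UPDATE SKIP LOCKED;  -- skips row 1 while instance 1 holds its lock

Killing instance 1's session rolls back its open transaction and releases the lock, so rerunning the instance 2 query should then return row 1 as well, which is exactly the reprocessing behaviour described above.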

Does the Oracle database start a new job (from the Scheduler) before the previous run of the same job has finished?

What happens if the Oracle Scheduler is due to start a job before the previous run of the same job has finished? Does Oracle stack the runs up, or does it simply not start the new one?
Oracle is smart enough to know not to start a new job instance before the previous job is finished.
From the Oracle docs:
http://docs.oracle.com/cd/B19306_01/server.102/b14231/scheduse.htm
Setting the Repeat Interval
...
Immediately after a job is started, the repeat_interval is evaluated to determine the next scheduled execution time of the job. It is possible that the next scheduled execution time arrives while the job is still running. A new instance of the job, however, will not be started until the current one completes.
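As a small illustration, with a schedule like the sketch below (DEMO_JOB and LOAD_PROC are placeholder names), a run of LOAD_PROC that takes longer than a minute simply delays the next instance; the Scheduler does not run the instances in parallel or stack them up:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'DEMO_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'LOAD_PROC',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=1',  -- due every minute
    enabled         => TRUE);
END;
/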