Using an AWS CloudWatch alarm to monitor a daily batch job

I have a once-daily job that either succeeds or fails. I publish a 0 to a CloudWatch custom metric if it fails, 1 if it succeeds.
How can I create a CloudWatch alarm that monitors this? At the moment, I use "Average < 1 for 1 day", but I worry that if the job runs slightly late, it might trigger an "Insufficient Data" condition.
What are the best period, statistic, and other parameters to use for an alarm that monitors this job?

Related

How to stop a SQL Agent job (running on an hourly schedule) from executing once it is successful

I have a SQL Agent job that runs on the 7th of each month on an hourly basis. I want to stop it from running further once it is successful.
For example, if the job is successful at 8:00 A.M., I don't want it to run any more until the 7th of the next month. Any scripts would be helpful for this.
I am working on trying to establish this rule through the use of MSDB sys.jobs, and an idea that I have is to update the Enabled flag to 0 once the run is complete. Once the 7th of the next month hits, another job on the SQL Agent could go in and update the flag back to 1 so it can be run.
I suggest you create a new first job step and, using the answer provided in the question sql-agent-job-last-run-status, check if the last execution succeeded.
You can then cancel the job from executing by using exec msdb.dbo.sp_stop_job.
I use this method in an Availability Group cluster (to check if SQL Agent is running on the primary for jobs that require it) and abort the job if not. It works well.
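A minimal sketch of such a first step (T-SQL), assuming a hypothetical job named 'MonthlyHourlyJob' and using the job outcome rows (step_id = 0) in msdb.dbo.sysjobhistory:

-- First job step: if this job already succeeded since the start of the 7th of
-- the current month, stop the job so the remaining steps do not run.
-- run_status = 1 means 'succeeded'; run_date is an int in yyyymmdd form.
DECLARE @seventh int = YEAR(GETDATE()) * 10000 + MONTH(GETDATE()) * 100 + 7;

IF EXISTS (SELECT 1
           FROM msdb.dbo.sysjobhistory h
           JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
           WHERE j.name = N'MonthlyHourlyJob'
             AND h.step_id = 0
             AND h.run_status = 1
             AND h.run_date >= @seventh)
    EXEC msdb.dbo.sp_stop_job @job_name = N'MonthlyHourlyJob';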
How about creating a table with a row for the last successful run, and populating it on each successful run?
Then, first thing in the job, check that table, and if the last successful run was on or after the 7th, terminate.
Otherwise run the rest of the job.
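A rough sketch of that approach, with a hypothetical tracking table and job name (the job could terminate itself with sp_stop_job as above, or via the step's on-success action):

-- Hypothetical tracking table, one row per job
CREATE TABLE dbo.JobLastSuccess
(
    JobName     sysname  NOT NULL PRIMARY KEY,
    LastSuccess datetime NOT NULL
);

-- First job step: terminate if we already succeeded since the 7th of this month
IF EXISTS (SELECT 1 FROM dbo.JobLastSuccess
           WHERE JobName = N'MonthlyHourlyJob'
             AND LastSuccess >= DATEFROMPARTS(YEAR(GETDATE()), MONTH(GETDATE()), 7))
    EXEC msdb.dbo.sp_stop_job @job_name = N'MonthlyHourlyJob';

-- Final job step: record the successful run
UPDATE dbo.JobLastSuccess
SET    LastSuccess = GETDATE()
WHERE  JobName = N'MonthlyHourlyJob';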
The solution that I finally deployed was to add a step to the job that would disable it once all preceding steps succeeded. I added another job to the Agent that would enable this job on the 1st of each month so that it is ready to be executed. Just throwing this out so that someone else can benefit.
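In case it helps, the disable/enable calls for that setup might look like this (the job name is hypothetical):

-- Final step of the monthly job: disable the job once all preceding steps succeeded
EXEC msdb.dbo.sp_update_job @job_name = N'MonthlyHourlyJob', @enabled = 0;

-- A separate job, scheduled for the 1st of each month, re-enables it
EXEC msdb.dbo.sp_update_job @job_name = N'MonthlyHourlyJob', @enabled = 1;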

Do the restartable sequence jobs in DataStage also rerun the job activities that were aborted not because of sequence failure

I want to know whether restartable sequence jobs in DataStage also rerun job activities that were aborted, but not due to the sequence failure.
In the Job Properties of a Sequence, in the 'General' Tab, you can activate "Add checkpoints so sequence is restartable on failure."
If a sequence is made restartable, the subsequent jobs are each getting a checkpoint.
When running the sequence the first time, it records which jobs have successfully finished. If the sequence is restarted, no matter for what reason it aborted before, the sequence will just skip the jobs that were successful in the previous run.
If you need to have a job executed each time, even if it was already successful the last time, you can disable checkpointing for that job in the "Job" tab of its Job Activity stage by checking the checkbox "Do not checkpoint run."
I think Justus's answer is right, but with a slight deviation:
I found that if a parallel job has aborted anywhere in a sequence, then its rerun will be attempted by the controller (if the job activity is set to "Reset if required, then run") when we resume the sequence from the Aborted/Restartable or Stopped/Restartable state.
For example:
let's say the job flow is as below:
a -> b -> c -> d -> e
if the sequence is run and
job 'a' is finished,
job 'b' is aborted,
job 'c' is aborted, and the sequence stops due to user intervention,
then, if checkpointing was enabled earlier, the sequence, when restarted, will skip job 'a' (already checkpointed), rerun job 'b' and then job 'c', and the flow will resume.

Can we check if a table in BigQuery is locked or a DML operation is being performed

There is a BigQuery table which has multiple data load/update/delete jobs scheduled against it. Since these are automated jobs, many of them are failing due to concurrent update issues.
I need to know whether there is a provision in BigQuery to check if the table is already locked by a DML operation, and whether we can serialize the queries so that no job fails.
You could use the job ID generated by client code to keep track of the job status and only begin the next query when that job is done. This and this describe that process.
Alternatively, you could try exponential backoff to retry the query a certain number of times to prevent automatic failure of the query due to locked tables.
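If you also want to check for in-flight work from SQL itself, one option (a sketch, separate from the two approaches above) is to query the jobs metadata views before submitting the next statement. The region qualifier, dataset, and table names below are placeholders, and whether destination_table or referenced_tables is the right column to match on should be verified against the INFORMATION_SCHEMA.JOBS documentation:

-- List jobs that are still pending or running against the target table,
-- so the scheduler can wait or back off before submitting the next DML.
SELECT job_id, job_type, statement_type, state, creation_time
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE state != 'DONE'
  AND destination_table.dataset_id = 'my_dataset'   -- placeholder dataset
  AND destination_table.table_id   = 'my_table';    -- placeholder table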

Locking database rows

I have a table in my database with defined jobs. One of the jobs' attributes is status, which can be [waiting, in_progress, done]. To process jobs I have defined a master-worker relation between two servers; they work in the following way:
The master looks for the first record in the database with status 'waiting' and triggers a worker server to process the job.
The triggered worker sets the status on the job to 'in_progress' in one transaction and starts executing the job (in the meantime the master is looking for the next job with status 'waiting' and triggers another worker).
When the job is done, the worker sets the status on the job to 'done'.
This is the perfect-case scenario; however, it might happen that during job execution one of the workers dies. In this case the job needs to be restarted, but the master has no way of verifying whether the job was done other than checking its status in the database ('done'). Therefore, if a worker dies, the job still has status 'in_progress' in the database, and the master has no idea that the worker died and that the job needs to be restarted. (We can't get any feedback from the worker; we cannot ping it and we cannot find out what job it is currently working on.)
My idea to solve this problem would be:
After the worker changes the job's status to 'in_progress' (transaction committed), it would open a new transaction with a lock on that particular job.
The master, while looking for jobs to start, would look for both 'waiting' and 'in_progress' jobs which are not locked.
If the worker dies, the transaction would break, releasing the lock on the job record in the database, and the master would reprocess it.
Now I'm looking for a way to verify that this would indeed work, possibly with a SQL script in SQL Developer (two instances). I would like this to work in the following way:
Instance 1:
open transaction and create a row lock on row_1
sleep for 5 min
release lock
Instance 2:
open a transaction and look for rows matching the criteria (row_1 and row_2 match the criteria)
return the selected row(s)
Then I would kill instance 1 to simulate worker death and run instance 2 again.
Please advise if my approach is correct.
P.S.
Also, could you point me to a good explanation of how I can create the script for instance 1?
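A minimal sketch of such a test, assuming an Oracle database (since SQL Developer is mentioned) and a hypothetical JOBS table with ID and STATUS columns:

-- Instance 1: lock row_1 and keep the transaction open
SELECT * FROM jobs WHERE id = 1 FOR UPDATE;   -- acquires a row lock on row_1
EXEC DBMS_SESSION.SLEEP(300);                 -- hold the lock for 5 minutes
                                              -- (use DBMS_LOCK.SLEEP on older versions)
ROLLBACK;                                     -- releases the lock

-- Instance 2: select only matching rows that are not locked by another session
SELECT * FROM jobs
WHERE  status IN ('waiting', 'in_progress')
FOR UPDATE SKIP LOCKED;

Killing the instance 1 session before its ROLLBACK should behave the same way: Oracle rolls the open transaction back, the lock is released, and row_1 becomes visible to instance 2 again.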

Scheduling SQL jobs one after the other

I have two jobs scheduled at the same time, let's say a and b.
I need to run the jobs in a sequence:
first: a
second: b
Both a's and b's scheduling times need to be different, so I can't put them in a single job.
When I schedule them they run in parallel, but I need them to execute in sequence.
One job runs every 30 minutes doing Task A, starting at 00:15.
The other job runs every 30 minutes doing Task A and then Task B, starting at 00:00.
If the actual requirement is that two separate activities should not take place at the same time, but that they have completely different scheduling requirements, you may be able to achieve this using an application lock.
This would require that all activity for each job happens within a single stored procedure (or, in some other way, is forced to use a single database session).
At the start of each activity, the code would call sp_getapplock, something like:
EXEC sp_getapplock @Resource  = N'D1852F12-F213-4BD3-A87C-10FB56506EF8',
                   @LockMode  = N'Exclusive',
                   @LockOwner = N'Session';
(Ideally, the lock is released afterwards using sp_releaseapplock)
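A matching release call, using the same resource name and lock owner, might look like this (sp_getapplock returns 0 or 1 when the lock is granted and a negative value on timeout or error, which the procedure should check before proceeding):

EXEC sp_releaseapplock @Resource  = N'D1852F12-F213-4BD3-A87C-10FB56506EF8',
                       @LockOwner = N'Session';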