Is there a way to kill a Hive job as an end user (without admin rights)?
If not, is there an easy way to find a job run by a specific user? Or to add additional info to every query a user runs, so that it can be identified and killed by a script?
Regards,
Karol
hadoop job -kill [job_id]
You can find the job ID in the console output once the job gets triggered in Hive, or from the JobTracker.
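If you also need to identify which running job belongs to which user or query (as the question asks), one common trick, assuming the MapReduce execution engine and that you are allowed to change session settings, is to tag the query with a recognizable job name before running it; the property and table names below are placeholders:
-- mapred.job.name is the pre-YARN property; newer Hadoop versions use mapreduce.job.name
SET mapred.job.name=karol_daily_report;
-- my_table is just a placeholder query to illustrate the tagging
SELECT count(*) FROM my_table;
The job then shows up under that name in the JobTracker, where its ID can be read off and passed to hadoop job -kill.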
Related
I have a SQL Agent job that runs on an hourly basis on the 7th of each month. I want to stop it from running further once it has succeeded.
For example, if the job succeeds at 8:00 A.M., I don't want it to run any more until the 7th of the next month. Any scripts would be helpful for this.
I am trying to establish this rule using the msdb sysjobs table, and one idea I have is to update the Enabled flag to 0 once the run is complete. Once the 7th of the next month hits, another SQL Agent job could go in and update the flag back to 1 so it can run again.
I suggest you create a new first job step and, using the answer provided in the question sql-agent-job-last-run-status, check whether the last execution succeeded.
You can then stop the job from executing further by using exec msdb.dbo.sp_stop_job.
I use this method in an Availability Group cluster (to check whether SQL Agent is running on the primary for jobs that require it) and abort the job if not. It works well.
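A minimal sketch of such a first step (the job name and the exact success check are illustrative; here it simply aborts if the job already completed successfully earlier the same day):
-- First job step: stop the whole job if it has already succeeded today
DECLARE @job_name sysname = N'MonthlyLoad'; -- hypothetical job name
IF EXISTS (
    SELECT 1
    FROM msdb.dbo.sysjobhistory h
    JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
    WHERE j.name = @job_name
      AND h.step_id = 0        -- step 0 rows hold the overall job outcome
      AND h.run_status = 1     -- 1 = succeeded
      AND h.run_date = CONVERT(int, CONVERT(char(8), GETDATE(), 112))
)
    EXEC msdb.dbo.sp_stop_job @job_name = @job_name;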
How about creating a table with a row for the last successful run, and populating it on each successful run?
Then, as the first thing in the job, check that table, and if the last successful run is on or after the 7th of the current month, terminate.
Otherwise run the rest of the job.
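A rough sketch of that approach (the table and job names are illustrative):
-- One-time setup: a single-row tracking table
CREATE TABLE dbo.JobLastSuccess (job_name sysname PRIMARY KEY, last_success datetime NOT NULL);
INSERT INTO dbo.JobLastSuccess VALUES (N'MonthlyLoad', '19000101');
-- First job step: terminate if we already succeeded since the 7th of this month
IF EXISTS (
    SELECT 1 FROM dbo.JobLastSuccess
    WHERE job_name = N'MonthlyLoad'
      AND last_success >= DATEADD(day, 6, DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)) -- 7th of current month
)
    EXEC msdb.dbo.sp_stop_job @job_name = N'MonthlyLoad';
-- Final job step: record the successful run
UPDATE dbo.JobLastSuccess SET last_success = GETDATE() WHERE job_name = N'MonthlyLoad';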
The solution that I finally deployed was to add a step to the job that disables it once all preceding steps have succeeded. I added another job to the Agent that enables this job on the 1st of each month so that it is ready to be executed again. Just throwing this out so that someone else can benefit.
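In case it helps, the two pieces might look roughly like this (the job name is illustrative):
-- Final step of the monthly job: disable the job after everything else has succeeded
EXEC msdb.dbo.sp_update_job @job_name = N'MonthlyLoad', @enabled = 0;
-- Separate Agent job scheduled for the 1st of each month: re-enable it
EXEC msdb.dbo.sp_update_job @job_name = N'MonthlyLoad', @enabled = 1;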
There is a BigQuery table with multiple data load/update/delete jobs scheduled against it. Since these are automated jobs, many of them fail due to concurrent-update issues.
I need to know whether BigQuery provides a way to check if the table is already locked by a DML operation, and whether the queries can be serialized so that no job fails.
You could use the job ID generated by the client code to keep track of the job status, and only begin the next query when that job is done. This and this describe that process.
Alternatively, you could use exponential backoff to retry the query a certain number of times, to prevent automatic failure of the query due to locked tables.
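If the INFORMATION_SCHEMA job views are available in your project, one way to check whether a previously submitted job has finished before issuing the next one is a plain query (the region qualifier and job ID below are placeholders):
-- state is PENDING, RUNNING or DONE; when DONE, error_result is non-NULL if the job failed
SELECT job_id, state, error_result
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE job_id = 'my_load_job_id';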
I have a table in my database with defined jobs. One of the job attributes is a status, which can be [waiting, in_progress, done]. To process the jobs I have defined a master-worker relation between two servers, which works in the following way:
The master looks for the first record in the database with status 'waiting' and triggers a worker server to process the job.
The triggered worker sets the job's status to 'in_progress' in one transaction and starts executing the job (in the meantime the master looks for the next job with status 'waiting' and triggers another worker).
When the job is done, the worker sets the job's status to 'done'.
This is the perfect-case scenario; however, it might happen that a worker dies during job execution. In this case the job needs to be restarted, but the master has no way of verifying that the job was done other than checking its status in the database ('done'). So if a worker dies, the job still has status 'in_progress' in the database, and the master has no idea that the worker died and that the job needs to be restarted. (We cannot get any feedback from the worker, we cannot ping it, and we cannot find out which job it is currently working on.)
My idea to solve this problem would be:
After the worker has changed the job's status to 'in_progress' (transaction committed), it would open a new transaction holding a lock on that particular job row.
The master, while looking for jobs to start, would look for rows that are either 'waiting' or 'in_progress' and not locked.
If the worker dies, its transaction breaks, releasing the lock on the job record, and the master reprocesses the job.
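In Oracle, that "not locked" lookup can be expressed with FOR UPDATE SKIP LOCKED. A minimal sketch, assuming a jobs table with id and status columns (SKIP LOCKED and the column names are my assumptions, not from the original setup):
-- Worker, right after committing status = 'in_progress': hold a row lock for the duration of the job
SELECT id FROM jobs WHERE id = :job_id FOR UPDATE;
-- Master: candidate jobs are 'waiting', or 'in_progress' whose lock has been released;
-- SKIP LOCKED silently ignores rows still locked by a live worker
SELECT id
FROM jobs
WHERE status IN ('waiting', 'in_progress')
ORDER BY id
FOR UPDATE SKIP LOCKED;
-- take only the first row returned and hand it to a worker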
Now I'm looking for a way to verify that this would indeed work, possibly with SQL scripts in SQL Developer (two instances). I would like the test to work in the following way:
Instance 1:
Open a transaction and create a row lock on row_1
Sleep for 5 minutes
Release the lock
Instance 2:
Open a transaction and look for rows matching the criteria (row_1 and row_2 both match)
Return the selected row
Then I would kill instance 1 to simulate worker death and run instance 2 again.
Please advise whether my approach is correct.
P.S.
Also, could you point me to a good explanation of how I can create the script for instance 1?
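In case it is useful, here is roughly what the two SQL Developer sessions could look like (same hypothetical jobs table as sketched above; DBMS_SESSION.SLEEP needs 18c or later, older versions would use DBMS_LOCK.SLEEP instead):
-- Session 1 (simulated worker): lock row_1 and keep the transaction open
SELECT id FROM jobs WHERE id = 1 FOR UPDATE;
EXEC DBMS_SESSION.SLEEP(300); -- hold the lock ~5 minutes; killing this session releases it
-- Session 2 (simulated master): should return row_2 and skip the still-locked row_1
SELECT id, status
FROM jobs
WHERE status IN ('waiting', 'in_progress')
FOR UPDATE SKIP LOCKED;
After killing session 1, running session 2 again should return row_1 as well, which is the behaviour the master relies on.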
I have made a SQL Server Agent backup job with 'Verify Backup Integrity' checked. I want to call the job from within a stored procedure using sp_start_job. Then, if the backup integrity check fails, I want to do something (roll back / show an error message, something like that). How do I go about doing this? Will sp_start_job return an error, return 1, or what?
I figured out that sp_start_job does not return 1 on failure, because it returns as soon as the job starts, before the job has had time to complete and reach the failing step. I found this useful link on waiting for the job to finish, after which you can catch the error:
http://www.interworks.com/blogs/bbickell/2010/01/15/how-execute-and-monitor-agent-job-using-t-sql-sql-server-20052008
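The gist of that approach, as a rough sketch (the job name is illustrative, and the polling and outcome queries are simplified):
-- Start the job; sp_start_job returns as soon as the job is launched
EXEC msdb.dbo.sp_start_job @job_name = N'BackupJob';
-- Poll until the Agent reports the job has stopped running in the current Agent session
WHILE EXISTS (
    SELECT 1
    FROM msdb.dbo.sysjobactivity a
    JOIN msdb.dbo.sysjobs j ON j.job_id = a.job_id
    WHERE j.name = N'BackupJob'
      AND a.session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions)
      AND a.start_execution_date IS NOT NULL
      AND a.stop_execution_date IS NULL
)
    WAITFOR DELAY '00:00:05';
-- Check the latest overall outcome (step_id 0 is the job outcome row, run_status 1 = succeeded)
IF EXISTS (
    SELECT 1
    FROM msdb.dbo.sysjobhistory h
    JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
    WHERE j.name = N'BackupJob'
      AND h.step_id = 0
      AND h.instance_id = (SELECT MAX(h2.instance_id) FROM msdb.dbo.sysjobhistory h2
                           WHERE h2.job_id = h.job_id AND h2.step_id = 0)
      AND h.run_status <> 1
)
    RAISERROR('Backup job failed or did not pass verification.', 16, 1);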
I am using the following query to enable the Oracle job:
exec dbms_scheduler.disable('23');
where '23' is my job ID. But this does not seem to work with the ID. I read that a job name should be given in place of '23', but my job does not have a name; it only has an ID.
So how can I enable it using the job ID? Is there another command to execute?
The older DBMS_JOB jobs had numeric IDs; DBMS_SCHEDULER jobs do not. I.e. you may want this (if you are seeing your job in the DBA_JOBS view):
begin
  dbms_job.broken(23, true); -- marks job 23 as broken, so the job queue will not run it
  commit;
end;
/
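And since the question is about enabling the job, the reverse call (same assumption that this is an old-style DBMS_JOB job) would un-break it so the job queue runs it again:
begin
  dbms_job.broken(23, false); -- false = not broken, i.e. enabled
  commit;
end;
/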