Let's say I have the following scenario:
In SQL Server Agent, there is a scheduled job that runs every day at 6 AM and normally works fine. One day the server goes down from 5 AM until 8 AM, so by the time it is back up, it is already 2 hours past the time the job was scheduled to run.
How can I detect that the job should have run earlier and report some kind of "Missed Schedule" status?
Approach:
Run EXEC msdb.dbo.sp_get_composite_job_info @job_id and check whether the returned last_run_date & last_run_time exist in [msdb].[dbo].[sysjobhistory]; if they don't, it means the job was scheduled but did not run. Is this approach correct? (A rough sketch of what I mean is below.)
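Roughly, the check I have in mind looks like this sketch; the job name and the last-run values are placeholders standing in for what sp_get_composite_job_info returns (run_date as an int in YYYYMMDD form, run_time as an int in HHMMSS form):

DECLARE @job_id uniqueidentifier =
    (SELECT job_id FROM msdb.dbo.sysjobs WHERE name = N'YourJobName');  -- placeholder job name
DECLARE @last_run_date int = 20181112;  -- placeholder YYYYMMDD
DECLARE @last_run_time int = 60000;     -- placeholder for 06:00:00

SELECT COUNT(*) AS matching_history_rows  -- 0 would suggest the scheduled run never happened
FROM msdb.dbo.sysjobhistory
WHERE job_id   = @job_id
  AND step_id  = 0                        -- job outcome rows only
  AND run_date = @last_run_date
  AND run_time = @last_run_time;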
Any suggestions/comments?
Thanks in advance.
I was just trying to figure out this seemingly already-solved problem myself. It seems it isn't as common a need as one might think. Anyhow, I was about to embark on the journey of writing my own check when I found the script below. It seems to work, in that it caught the job I wanted it to find on my server.
The issue with your approach is that it breaks down if the job recurs fairly quickly, or if it runs every day but skipped Saturday and you don't check until Monday. It seemed to me that to do this right, you need to look at the schedule frequency, build a table of when the job ought to have run, and then check the history to see whether it actually ran. The script below is long and will take time to understand, but it does seem to do exactly that.
http://www.nuronconsulting.com/TemplateScripts_SQL_Agent_JobsJobScript_FindMissedJobs_sql.aspx
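For a much simpler illustration of that "expected vs. actual" idea, limited to plain once-a-day schedules, here is a hedged sketch against msdb's schedule metadata; it flags enabled jobs whose daily schedule time has already passed today but that have no outcome row in sysjobhistory at or after that time. Treat it as a starting point rather than a replacement for the full script behind the link.

SELECT
    j.name              AS job_name,
    s.name              AS schedule_name,
    s.active_start_time AS scheduled_time_hhmmss
FROM msdb.dbo.sysjobs         AS j
JOIN msdb.dbo.sysjobschedules AS js ON js.job_id = j.job_id
JOIN msdb.dbo.sysschedules    AS s  ON s.schedule_id = js.schedule_id
WHERE j.enabled = 1
  AND s.enabled = 1
  AND s.freq_type = 4   -- daily schedules only
  AND s.active_start_time <= CONVERT(int,
        REPLACE(CONVERT(char(8), GETDATE(), 108), ':', ''))   -- scheduled time has passed today
  AND NOT EXISTS (
        SELECT 1
        FROM msdb.dbo.sysjobhistory AS h
        WHERE h.job_id   = j.job_id
          AND h.step_id  = 0   -- job outcome rows only
          AND h.run_date = CONVERT(int, CONVERT(char(8), GETDATE(), 112))  -- today as YYYYMMDD
          AND h.run_time >= s.active_start_time
      );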
I am getting the above error when running the Web Job in a multi-threaded environment. I am calling a stored procedure to perform some action; the stored procedure inserts/updates/deletes records in fairly big temporal tables (3-4M records, not sure if that is relevant here). Every time the job runs it deals with around 40K-80K records (insert/update) based on a condition. When a single thread is running, everything goes fine, but as soon as the number of parallel jobs is set to 2 or more, I get the error. From my initial analysis, the issue seems to be with the auto-generated column values for SysStartTime and SysEndTime in the history table. I have tried one of the solutions from the internet, subtracting 1 second from the date saved in those columns, as below:
DEFAULT (dateadd(second,(-1),sysutcdatetime()))
But it's not working. I have read a few articles saying that temporal tables do not work properly in a multi-threaded environment. Now I am not sure why the issue is happening or how to resolve it in a multi-threaded environment.
Can someone here please help me understand the reason behind the error and how to fix it?
NOTE: I can't make my code run on a single thread. A minimum of three threads is required, so converting to a single thread is not a solution in this case.
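For context, the table involved looks roughly like the sketch below (real names omitted; SysStartTime / SysEndTime are the period columns, and the history table is where the auto-generated values the error complains about end up):

CREATE TABLE dbo.MyRecords
(
    Id           int           NOT NULL PRIMARY KEY,
    Payload      nvarchar(200) NULL,
    SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime   datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.MyRecordsHistory));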
I can successfully run BigQuery scheduled queries with the @run_time and @run_date parameters.
You can review Google's inadequate documentation at https://cloud.google.com/bigquery/docs/scheduling-queries
But when I try a manual run, it fails with "Error in starting transfer runs: Request contains an invalid argument. Dismiss" and no further detail :(
Example code (please note that I use @run_date):
Destination table: test_{run_time|"%Y%m%d"}
The run_time parameter in the destination table name serves to create a different table for each day.
For example
test_20181112
test_20181113 etc.
SELECT
  @run_date AS mydate,
  title,
  author,
  text
FROM `bigquery-public-data.hacker_news.stories`
LIMIT 10
I think the problem is caused by the @run_date parameter in the query during a manual run.
My actual project is a little more complicated; I've included this code so everyone can try it easily.
As I mentioned above, this scheduled query works correctly as initially set up, but when I try to run it manually, it gives an error.
Can you show me the way?
Thanks for your help.
I think there is a bug in the manual run.
You should carefully choose the start date and the end date (the same as a previous run) in order not to get this error.
I just wanted to add something here, since this is a "gotcha" in the BigQuery UI: using today's date as the end date in a scheduled query run will cause an issue, but setting it to one day ahead (i.e. tomorrow) should allow you to create a request with @run_date set to today.
I have a SQL query that needs to loop over the system views sys.dm_exec_requests and sys.dm_exec_sessions every 60 seconds to pull specific information and dump it into a separate table. After a specified time I would like the loop to stop. How would the loop be formatted?
This sounds like a SQL Agent job. If so, the short form of the answer is:
Create the job with one step that runs the query
Add a Schedule that runs it once a minute, starting whenever you want it to start
Set the schedule to stop running it when the cut-off time is reached
The long form, of course, is all the detail work behind creating a SQL Agent job. Best to read up on them in Books Online (here)
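If it helps, a hedged T-SQL sketch of that short form is below; all of the names and times are placeholders, and dbo.LogCurrentData stands in for whatever procedure does the actual snapshot:

USE msdb;
EXEC dbo.sp_add_job @job_name = N'Capture request snapshots';
EXEC dbo.sp_add_jobstep
     @job_name      = N'Capture request snapshots',
     @step_name     = N'Run snapshot query',
     @subsystem     = N'TSQL',
     @database_name = N'YourDatabase',
     @command       = N'EXEC dbo.LogCurrentData;';
EXEC dbo.sp_add_schedule
     @schedule_name        = N'Every minute until cut-off',
     @freq_type            = 4,       -- daily
     @freq_interval        = 1,
     @freq_subday_type     = 4,       -- units of minutes
     @freq_subday_interval = 1,       -- every 1 minute
     @active_start_time    = 90000,   -- 09:00:00, when to start
     @active_end_time      = 170000;  -- 17:00:00, the cut-off
EXEC dbo.sp_attach_schedule
     @job_name      = N'Capture request snapshots',
     @schedule_name = N'Every minute until cut-off';
EXEC dbo.sp_add_jobserver @job_name = N'Capture request snapshots';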
Don't do this in a loop. Do it with a job.
Write a sproc that does the query and save the results and then call it from a job.
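For example (a hedged sketch; the snapshot table, the columns kept, and the procedure name are all choices you'd adapt):

CREATE TABLE dbo.RequestSnapshots
(
    captured_at      datetime2     NOT NULL DEFAULT SYSDATETIME(),
    session_id       smallint      NOT NULL,
    login_name       nvarchar(128) NULL,
    [status]         nvarchar(30)  NULL,
    command          nvarchar(32)  NULL,
    wait_type        nvarchar(60)  NULL,
    cpu_time_ms      int           NULL,
    total_elapsed_ms int           NULL
);
GO
CREATE PROCEDURE dbo.LogCurrentData
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.RequestSnapshots
        (session_id, login_name, [status], command, wait_type, cpu_time_ms, total_elapsed_ms)
    SELECT
        r.session_id, s.login_name, r.[status], r.command,
        r.wait_type, r.cpu_time, r.total_elapsed_time
    FROM sys.dm_exec_requests AS r
    JOIN sys.dm_exec_sessions AS s ON s.session_id = r.session_id
    WHERE s.is_user_process = 1;   -- skip system sessions
END;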
I think you should use a job as well, but in some work environments that is not practical. In that case you could have something like:
DECLARE @StopTime datetime = DATEADD(HOUR, 1, GETDATE());  -- when to stop looping

WHILE GETDATE() < @StopTime
BEGIN
    EXEC dbo.LogCurrentData;    -- proc that snapshots sys.dm_exec_requests / sys.dm_exec_sessions
    WAITFOR DELAY '00:01:00';   -- wait 1 minute
END
I think the best way is to create a job.
There is a post that explains how to create a job step by step (with images) in SQL Server.
You can visit the post here
If you prefer a video tutorial, you can visit this link
While configuring the incoming mail server in OpenERP 7, I get the following error:
Error: Record cannot be modified right now. This cron task is currently being executed and may not be modified, please try again in a few minutes.
If the job keeps running, you won't get a chance to change the configuration of the cron job. I hit the same issue and found a way to solve it.
There is a DB lock on that row.
If you run the following SQL query to check the current processes:
select * from pg_stat_activity where query like '%ir_cron%';
You will see a query like this (in the query field of the result):
select * from ir_cron where id = 100 for update nowait;
Get the pid from the query result, and terminate it with PG_TERMINATE_BACKEND. It will come back soon, so it's better to do the terminating and updating in one query, such as:
update ir_cron set active = false where PG_TERMINATE_BACKEND(57078) and id = 100;
I understand the original asker may not be interested anymore, but for the sake of others:
I faced the same error while updating a module I was developing.
The cron job related to my module had to be manually deleted from the scheduler first.
Settings -> Scheduler -> Scheduler Actions
Delete the cron job you were trying to modify.
Then update the module again.
First, set the mail-fetching scheduler to inactive (its interval is 5 minutes, so it runs often). Then edit the incoming mail server.
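If the scheduler screen itself is blocked, the same thing can be done straight from PostgreSQL; this is only a hedged sketch and the name filter is a guess, so check which ir_cron rows match before updating (and if the row is locked by a running cron, see the pg_terminate_backend approach above):

select id, name, active from ir_cron where name ilike '%mail%';
update ir_cron set active = false where name ilike '%fetchmail%';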
I had a similar issue that kept me from upgrading a module. I solved it by stopping the Odoo server, restarting PostgreSQL, and then starting Odoo again. This gave me time to both mark the cron job as inactive and upgrade the module.
sudo service odoo-server stop
sudo service postgresql restart
sudo service odoo-server start
We want to set up a Hudson job that executes a query every 10 minutes, checking whether a date column equals the current date. If the condition is false, the job should simply run again 10 minutes later, and keep doing so until the condition is true, at which point we want the job to move on to a second step that executes another SQL statement to update a table. Is it possible to set this up in a single job? I have been searching but have not found an example of this scenario anywhere.
Of course you can. It can be pretty simple, actually.
Make a new job in Hudson that automatically runs every 10 minutes. As for the job content, write a shell script that performs the SQL check. If the check returns true, continue with the second step; if not, just end the job. It will be started again by Hudson 10 minutes later.
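The SQL behind that check can stay very small; here is a hedged sketch (SQL Server syntax, with a made-up table and column), returning 1 when the date condition is met so the shell wrapper can decide whether to go on to the second step:

SELECT CASE WHEN EXISTS (
           SELECT 1
           FROM dbo.LoadControl                       -- placeholder table
           WHERE load_date = CAST(GETDATE() AS date)  -- date column equals today
       )
       THEN 1 ELSE 0 END AS condition_met;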