I am facing an issue with SQL Agent: after changing the server date, a job that is scheduled to run daily at a fixed time does not start at its scheduled time. No logs are found for this. The issue also occurs when the system date is changed back to the real date. I have to restart SQL Agent before the job will fire at its scheduled time again.
SQL Agent scheduled jobs have a last run date/time and a next run date/time. Each time a job runs, these values are updated. Look in msdb and you will see these details; you can also check the job history.
When you manually reset the system clock to a date later than now, the stored next run date/time ends up in the past, so the job never fires. As you have mentioned, bouncing the Agent service should start the jobs again.
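For example, you can see what the Agent currently thinks the next run is with a query like this (a sketch against the standard msdb tables; next_run_date and next_run_time are stored as integers in YYYYMMDD and HHMMSS form):
-- Next scheduled run per enabled job, as cached by SQL Agent in msdb.
SELECT j.name AS job_name,
       s.next_run_date,
       s.next_run_time
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobschedules AS s ON s.job_id = j.job_id
WHERE j.enabled = 1
ORDER BY j.name;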
Raj
I have a query which runs fine if I simply run it from the console or from code.
When I created a Scheduled Query for it, it would not run. The Scheduled Query is created successfully, and the interval I set (every 2 hours) is applied correctly, but no jobs are created (I can see in the Scheduled Query that the next run time is incremented by 2 hours every time it is supposed to run).
These are the properties set when running the query from the Scheduled Query:
Overwrite table, Processing location: US, Allow large results, Batch priority
If I do a Schedule Backfill, it creates 12 jobs, which fail with error messages similar to the following:
Exceeded CPU limit 125%
Exceeded memory
If I cancel all the created jobs and leave just one to run, it runs successfully. The Scheduled Query itself still does not create any jobs.
I started the Scheduled Query at 12:00 and set it to repeat every 2 hours.
I assumed the jobs would run at the start time, but apparently that is not the case. The Scheduled Query ran perfectly as intended from 14:00 onwards, then 16:00 and so on.
The errors regarding maximum CPU/memory usage were because the query I wrote had an ORDER BY clause, which was causing the problem. Removing it cleared the errors.
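For context, the change looked roughly like this (table and column names are made up). As far as I understand, a top-level ORDER BY on a large result has to be sorted on a single worker, which is what triggers the resource errors, and since the result is written to a destination table anyway the sort added nothing:
-- Before (failed with the CPU/memory errors above; hypothetical names):
SELECT user_id, event_ts, payload
FROM `my_project.my_dataset.events`
ORDER BY event_ts;
-- After (same query without the global sort; the errors went away):
SELECT user_id, event_ts, payload
FROM `my_project.my_dataset.events`;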
Snowflake's documentation shows how to have a TASK run on a schedule when there are inserts/updates/deletes or other DML operations on a table, by creating a STREAM on that specific table.
Is there any way to have a TASK run if a view from an external Snowflake data share is refreshed, i.e. dropped and recreated?
As part of this proposed pipeline, we receive a one-time refresh of a view within a specific time window each day, and the goal is to start a downstream pipeline that runs at most once during that window, when the view is refreshed.
For example, for the following TASK schedule
'USING CRON 0,10,20,30,40,50 8-12 * * MON,WED,FRI America/New_York', the downstream pipeline should only run once every Monday, Wednesday, and Friday between 8:00 and 12:00.
Yes, I can point you to the documentation if you would like to see if this works for the tables you might already have set up:
Is there any way to have a TASK run if a view from an external Snowflake data share is refreshed, i.e. dropped and recreated?
You could create a stored procedure to monitor the existence of the table; I have not tried that before though, so I will see if I can ask an expert.
Separately, is there any way to guarantee that the task runs at most once on a specific day or other time period?
Yes, you can use the CRON form of the optional SCHEDULE parameter to specify particular days of the week or times. An example:
CREATE TASK delete_old_data
WAREHOUSE = deletion_wh
SCHEDULE = 'USING CRON 0 0 * * * UTC'
AS
DELETE FROM old_data WHERE created_at < DATEADD(day, -90, CURRENT_TIMESTAMP()); -- a task needs an AS <sql> body; this one is purely illustrative
Reference: https://docs.snowflake.net/manuals/user-guide/tasks.html more specifically: https://docs.snowflake.net/manuals/sql-reference/sql/create-task.html#optional-parameters
A TASK can only be triggered by a calendar schedule, either directly or indirectly via a predecessor TASK being run by a schedule.
Since the tasks are only run on a schedule, they will not run more often than the schedule says.
A TASK can't be triggered by a data share change, so you have to monitor it on a calendar schedule.
This limitation is bound to be lifted at some point, but it is valid as of December 2019.
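Until that changes, the closest approximation to what was asked is to poll on the question's CRON window and make the downstream work idempotent per day. A rough, untested sketch follows; the warehouse, log table, and view names are all made up, and I have not verified that the CREATED timestamp of a shared view is visible to the consumer, so treat the detection predicate as an assumption:
-- Made-up control table: one row per day the pipeline has run.
CREATE TABLE IF NOT EXISTS pipeline_run_log (run_date DATE);

CREATE OR REPLACE TASK poll_shared_view
WAREHOUSE = pipeline_wh
SCHEDULE = 'USING CRON 0,10,20,30,40,50 8-12 * * MON,WED,FRI America/New_York'
AS
INSERT INTO pipeline_run_log (run_date)  -- real pipeline logic would go in a stored procedure called here instead
SELECT CURRENT_DATE()
FROM shared_db.information_schema.views AS v
WHERE v.table_schema = 'PUBLIC'
  AND v.table_name = 'SHARED_VIEW'
  AND v.created >= DATEADD(minute, -10, CURRENT_TIMESTAMP())  -- the view was dropped and recreated since the last poll
  AND NOT EXISTS (SELECT 1 FROM pipeline_run_log WHERE run_date = CURRENT_DATE());  -- at most once per day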
I have a SQL Server job that has run for almost 2 years.
It connects to a flaky Oracle database that keeps disconnecting, and the job always fails because of that. When I run it again after 10 or 15 minutes, it works successfully. I'm getting tired of checking it every day...
Is there a way to make the job keep retrying the connection to that Oracle source until it succeeds, or to have another job watch this job's status and, if it failed, run it again until it succeeds?
A solution we are using is something like this:
Wrap your Oracle query in an SSIS package, and after the query, have the package update a SQL table that keeps either a history of executions, or just a single row that tracks the last time the job ran successfully. In short, if the Oracle query was successful, then put something in a table saying the query ran successfully today. If it was not successful, then don't put anything in the table for today.
Then at the beginning of the package, BEFORE the Oracle query, check to see if the query has been run successfully today. If it has already run successfully, then do nothing and exit the package. If it has not run successfully today, then go ahead and try to run it, following the post-query steps described above. If you have any other conditions about when the package should run (like "only after 10 am" or anything like that) you would include that logic here.
Finally, schedule the job to call the package, and set it to run every 15 minutes, or however often you like. It will try every 15 minutes until it runs successfully, and after that it will stop doing anything until the next day.
As a bonus, you can use this same package and job to initiate all tasks that you want handled the same way. You just need to keep meta data about all these tasks in your history/metadata table.
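In T-SQL terms, the history table and the pre-query check might look something like this (a sketch with made-up names; in the SSIS package the check runs first as an Execute SQL Task, and the insert runs only after the Oracle query succeeds):
-- One row per successful run of a given task (made-up schema).
CREATE TABLE dbo.JobRunHistory (
    TaskName    sysname      NOT NULL,
    SucceededOn datetime2(0) NOT NULL
);

-- First step in the package: read this flag into a variable and exit quietly if it is 1.
SELECT CASE WHEN EXISTS (SELECT 1
                         FROM dbo.JobRunHistory
                         WHERE TaskName = N'OracleExtract'
                           AND CAST(SucceededOn AS date) = CAST(GETDATE() AS date))
            THEN 1 ELSE 0 END AS AlreadyRanToday;

-- Last step, reached only after the Oracle query succeeded:
INSERT INTO dbo.JobRunHistory (TaskName, SucceededOn)
VALUES (N'OracleExtract', SYSDATETIME());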
An alternative is to create the job step and leave it unscheduled, then create an SSIS job that acts as the master of all your jobs. It runs every minute, checks a config table for all job steps that have yet to succeed today, and executes any it finds using sp_start_job.
If they do run successfully, log the stats to a log table; this prevents them from ever being launched again until the next day. It also avoids all your jobs needing to be scheduled every 15 minutes and so on: they launch as soon as possible, and you can add extra logic to handle dependencies, the number running in parallel, importance level, start time, latest start time, maximum number of retries, etc.
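A minimal sketch of that master loop (the config and log tables are made up; msdb.dbo.sp_start_job is the real system procedure):
-- Assumed tables: dbo.JobConfig(JobName) lists the managed jobs,
-- dbo.JobRunLog(JobName, SucceededOn) gets a row when a job finishes successfully.
DECLARE @JobName sysname;

DECLARE job_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT c.JobName
    FROM dbo.JobConfig AS c
    WHERE NOT EXISTS (SELECT 1
                      FROM dbo.JobRunLog AS l
                      WHERE l.JobName = c.JobName
                        AND CAST(l.SucceededOn AS date) = CAST(GETDATE() AS date));

OPEN job_cursor;
FETCH NEXT FROM job_cursor INTO @JobName;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC msdb.dbo.sp_start_job @job_name = @JobName;  -- starts the Agent job asynchronously
    FETCH NEXT FROM job_cursor INTO @JobName;
END;
CLOSE job_cursor;
DEALLOCATE job_cursor;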
I am new to BODS. At present I have a job configured to execute every 2 minutes to extract transactions from a MySQL server and load them into HANA tables.
But sometimes the data volume in MySQL is too large to transform and load into HANA within 2 minutes; while the job is still executing, the next iteration of the same job starts, which results in a BODS failure.
My question is: is there any option in BODS to check the execution status of the scheduled job between runs?
Please help me out with this.
You can create a control/audit table to keep a history of each run of the BODS job. The table should contain fields like ExtractionStart, ExtractionEnd, EndTime, etc. You then need to change the job so that it reads the status of the previous run from this table before starting the load-to-HANA data flow. If the previous run has not finished, the job can raise an exception.
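As a sketch, the control table and the pre-run check could look like this (names are made up; in BODS the check would typically sit in a script or conditional at the start of the job, before the data flow):
-- One row per job execution (made-up schema, kept in the staging database).
CREATE TABLE JOB_RUN_CONTROL (
    JOB_NAME         VARCHAR(128) NOT NULL,
    EXTRACTION_START TIMESTAMP    NOT NULL,
    EXTRACTION_END   TIMESTAMP                -- stays NULL while the run is still in progress
);

-- Check executed before the load-to-HANA data flow: if this returns a row,
-- the previous run has not finished yet and the job should raise an exception.
SELECT JOB_NAME
FROM JOB_RUN_CONTROL
WHERE JOB_NAME = 'MYSQL_TO_HANA'
  AND EXTRACTION_END IS NULL;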
Let me know if this has been helpful or if you need more information.
In my app I have a Budget model with a daily_avg column.
Also there are BudgetIncome and BudgetSpending models.
daily_avg = (BudgetIncome.sum(:debit) - BudgetSpending.sum(:credit))/Time.days_in_month(current_month)
I need to update this column on the 1st day of every month, because each month has a different number of days.
So, I'll write the script, but I don't know:
Where to put this script?
How to start it on the 1st day of every month?
I'm using PostgreSQL and deploying my app on Heroku.
Thanks for any help.
This is the first time I'm working with scripts for the DB, so I don't know what information I need to provide for you to help me with it.
Heroku has a scheduler you can use:
https://devcenter.heroku.com/articles/scheduler
"Scheduler is an add-on for running jobs on your app at scheduled time intervals, much like cron in a traditional server environment."
Install with: $ heroku addons:add scheduler:standard
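The scheduler just runs whatever command you give it (a rake task is typical), and the recalculation itself can be a single SQL statement. Note that Scheduler only offers 10-minute, hourly, and daily frequencies, so the usual workaround is a daily task that checks the date and only does the work on the 1st. A rough Postgres sketch of the formula above, with guessed table names (budgets, budget_incomes, budget_spendings):
-- Recalculate daily_avg for the new month; intended to run on the 1st.
-- Table and column names are guesses based on the Rails models above.
UPDATE budgets
SET daily_avg =
    ( (SELECT COALESCE(SUM(debit),  0) FROM budget_incomes)
    - (SELECT COALESCE(SUM(credit), 0) FROM budget_spendings) )
    / EXTRACT(DAY FROM (date_trunc('month', CURRENT_DATE) + INTERVAL '1 month' - INTERVAL '1 day'));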