Re-execute SQL Server job in case of failure - sql

Currently I am creating a SQL Server job. My requirement is whenever the job fails, it needs to run one more time. Is it possible in SQL Server?

I'm not sure if this is the best way to do it, but you can configure every single step of a job to re-run a specific number of times after a specific number of minutes (in case of network troubles, for example). Open the step configuration in SQL Server Management Studio and set the "Retry attempts" and "Retry interval (minutes)" according to your preferences.
Of course, this will not work if you want to re-run the whole job from the beginning and it will not retry infinitely.
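If you prefer to script it rather than click through SSMS, the same settings can be applied with msdb.dbo.sp_update_jobstep. A minimal sketch, assuming a job named MyNightlyJob and step 1 (both placeholders):

USE msdb;
GO
EXEC dbo.sp_update_jobstep
    @job_name       = N'MyNightlyJob',   -- placeholder job name
    @step_id        = 1,                 -- step to configure
    @retry_attempts = 3,                 -- re-run the step up to 3 times on failure
    @retry_interval = 5;                 -- wait 5 minutes between attempts
GO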

Related

Throttle a specific client (background job) in AzureSQL?

We have a background job that runs nightly (our timezone), but of course that is "middle of the day" somewhere else on the planet. This background job uses up all our available AzureSQL resources to run as fast as possible - and by doing so blocks our most important user-facing queries during that time.
Is there a way to throttle specific clients in AzureSQL? We have full control over the background job and can adjust its connection string or even the code if necessary. We want to run it only if there are no other queries at the moment. Ideally there would be some kind of prioritization value, where we put our user-facing services at level 1000 and the background job at 10 or something like that.
Note: we cannot move the background job to a second replica of the database, though; it has to run on the main database.
On SQL Server instances we have the option to use Resource Governor to limit resources (CPU, RAM) for specific workloads. Resource Governor is part of SQL Azure's protection mechanisms, but it is not available to us as a feature we can configure.
People are voting here for this feature to be made available to us on SQL Azure.
You can use the sys.dm_db_resource_stats dynamic management view to identify when your Azure SQL database is idle and only then start the background job. If you can divide the process into many parts that each take 2-3 minutes to execute, run them in sequence, and start each part only when the database is idle, this may be an option. You can run the same procedure each time: if the database is idle, it checks a status table for the last part/step that ran successfully and triggers execution of the next one.
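A minimal sketch of that idle check, assuming a 20% CPU threshold and a hypothetical procedure dbo.usp_RunNextBackgroundStep that reads the status table and runs the next part:

DECLARE @avg_cpu decimal(5,2);

SELECT @avg_cpu = AVG(avg_cpu_percent)
FROM sys.dm_db_resource_stats                        -- one row roughly every 15 seconds, kept for about an hour
WHERE end_time > DATEADD(MINUTE, -5, GETUTCDATE());  -- look at the last 5 minutes only

IF @avg_cpu < 20.0
    EXEC dbo.usp_RunNextBackgroundStep;              -- hypothetical: runs the next 2-3 minute part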

How to log the SQL Job failure to the Log Table?

SQL jobs are run from SQL Server Agent. At the moment, we don't have email notification configured for job failures.
I am considering fetching the errors from the log table and then having a stored procedure generate the error report.
select top 10 * from msdb.dbo.sysjobhistory
How do I log SQL job failures to the log table? And what happens when a job is scheduled to run every 5 minutes: will the error be updated in place, or will each error be inserted as a new record?
Follow these steps:
Open the job's properties
Go to Steps
Open the step by clicking the Edit button
Go to the Advanced page
Change the "On failure action" to "Quit the job reporting failure"
Then check "Log to table"
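On the second part of the question: the history is insert-only, so each failed run adds new rows to msdb.dbo.sysjobhistory rather than updating an existing record. A minimal sketch for pulling recent failures out of it:

SELECT TOP (50)
       j.name AS job_name,
       h.step_id,
       h.step_name,
       h.run_date,
       h.run_time,
       h.message
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs       AS j ON j.job_id = h.job_id
WHERE h.run_status = 0                 -- 0 = Failed
ORDER BY h.instance_id DESC;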
I ran into the limitations of the built-in logs, job history, etc.: they don't capture the case where a step fails but the job itself doesn't.
Based on this sqlshack article, I built and implemented something very similar that maintains a permanent history of job/step failures, even if you delete a job or step. The solution will even send a notification when this job fails.
Best of luck!

How to check if SQL Server crashed

I'm building a fully automated process for my company, which consists of two parts. One: a 3rd-party application kicks off a stored procedure at certain times per day. Two: the stored procedure then controls kicking off other processes. The procedure is driven by a table listing the jobs to be kicked off for the day. If a job item's status is set to Queue, the procedure starts running that item and sets the status to Running. My problem is: if SQL Server crashes for some reason, whether a power outage or something else odd, and the 3rd-party application kicks off that stored procedure on another day, there might be a job that still says Running which should have failed or been set back to Queue because the server crashed.
Is there a way in SQL to check whether the server crashed while a process was running?
You could put your script and the 3rd-party application in one SQL Agent job on the SQL Server; that may resolve the issue.
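Another option is to detect the restart directly in T-SQL: sys.dm_os_sys_info exposes sqlserver_start_time, so any item still marked Running that was started before the instance last came up must have been interrupted. A minimal sketch, assuming a hypothetical control table dbo.JobQueue(Status, StartedAt):

DECLARE @instance_start datetime;

SELECT @instance_start = sqlserver_start_time
FROM sys.dm_os_sys_info;

UPDATE dbo.JobQueue                    -- hypothetical control table driving the procedure
SET    Status = 'Queue'                -- or 'Failed', depending on how you want to handle it
WHERE  Status = 'Running'
  AND  StartedAt < @instance_start;    -- marked Running before the last server start => interrupted

Running something like this at the top of the stored procedure would reset anything left over from a crash before the day's jobs are started.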

How to start a job or process when a job on another server finished?

I want to send a mail or automatically start a job on my server as soon as a job on another server has finished successfully. I have access to the other server and can view the job status but I cannot change the job itself, which is running an SSIS package.
Basically, I want to start refreshing my database (by running an ETL job) as soon as the source has stopped refreshing itself. I would welcome suggestions besides this Windows service implementation.
I took the liberty of editing the question and title to make it more explicit. As I understand it from your description, you want to run a job (or start some other process) on server A when a job on server B has completed successfully. You cannot change the job definition on server B, but you can log on to it and view the job history.
If you can't change the job or anything else on server B, that means it cannot notify server A when the job is complete. Therefore, you need to query server B from server A, using a Windows service or possibly a simple script that runs every few minutes (or hours, or whatever is appropriate).
You can query the status of a job from .NET or PowerShell using the SMO Job class, or from TSQL using the sp_help_job procedure. Which of these is a better solution depends on how you want to implement your polling mechanism.
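A minimal sketch of the TSQL polling option, assuming a linked server named [ServerB] and a job named 'SourceRefresh' (both placeholders). It reads the job outcome rows in msdb.dbo.sysjobhistory rather than calling sp_help_job, since the result set is easier to consume from a script:

SELECT TOP (1)
       h.run_status,                          -- 1 = Succeeded, 0 = Failed
       h.run_date,
       h.run_time
FROM [ServerB].msdb.dbo.sysjobhistory AS h
JOIN [ServerB].msdb.dbo.sysjobs       AS j ON j.job_id = h.job_id
WHERE j.name    = N'SourceRefresh'
  AND h.step_id = 0                           -- step 0 is the overall job outcome row
ORDER BY h.instance_id DESC;

If the latest outcome row is newer than your last refresh and run_status = 1, start your own job (for example with msdb.dbo.sp_start_job).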

SSIS - Connection Management Within a Loop

I have the following SSIS package:
(Screenshot of the SSIS package: http://www.freeimagehosting.net/uploads/5161bb571d.jpg)
The problem is that within the Foreach loop a connection is opened and closed for each iteration.
On running SQL Profiler I see a series of:
Audit Login
RPC:Completed
Audit Logout
The duration for the login and the RPC that actually does the work is minimal. However, the duration for the logout is significant, running into several seconds each. This causes the job to run very slowly, taking many hours. I get the same problem when running on either a test server or a stand-alone laptop.
Could anyone please suggest how I may change the package to improve performance?
Also, I have noticed that when running the package from Visual Studio, it looks as though it continues to run, with the component blocks going amber then green, even though all the processing has actually completed and SQL Profiler has gone silent?
Thanks,
Rob.
Have you tried running your data flow task in parallel instead of serially? You can most likely break up your for-loops so that each 'set' runs in parallel; while logging in/out might still be expensive, you would be doing it N times simultaneously.
SQL Server is most performant when running a batch of operations in a single query. Is it possible to redesign your package so that it batches updates in a single call, rather than having a procedural workflow with for-loops, as you have it here?
If the design of your application and the RPC permits (or can be refactored to permit it), this might be the best solution for performance.
For example, instead of something like:
for each Facility
for each Stock
update Qty
See if you can create a structure (using SQL, or a bulk update RPC with a single connection) like:
update Qty
from Qty join Stock join Facility
...
If you control the implementation of the RPC, it could keep the same API (if needed) by delegating to another procedure that performs the batch operation with a single-record restriction (WHERE record = someRecord).
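A minimal concrete sketch of that set-based shape; the table, key, and column names below are placeholders:

UPDATE q
SET    q.Quantity = s.NewQty
FROM   dbo.Qty      AS q
JOIN   dbo.Stock    AS s ON s.StockId    = q.StockId
JOIN   dbo.Facility AS f ON f.FacilityId = s.FacilityId;
-- one round trip (one login/logout) instead of one per Facility/Stock pair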
Have you tried doing the following?
In your connection managers for the connection that is used within the loop, right click and choose properties. In the properties for the connection, find "RetainSameConnection" and change it to True from the default of False. This will let your package maintain the connection throughout your package run. Your profiler would then probably look like:
Audit Login
RPC:Completed
RPC:Completed
RPC:Completed
RPC:Completed
RPC:Completed
RPC:Completed
...
Audit Logout
With the final Audit Logout happening at the end of package execution.