I have the following scenario:
I need to fire updating procedures at fixed time intervals, but some of the procedures might take longer than the interval. I want to avoid stacking calls to a procedure (in case one wants to start before its previous execution has finished).
My colleague advised me to create an additional table in the database with two columns:
one with the procedure name, and an IsActive column (bit). So, before executing any procedure, I check the corresponding value of IsActive. If it's 1, I abort the execution.
Now, the problem:
when I get to the execution, I need to set the value of IsActive to 1 for the procedure, which I try to do like this:
UPDATE ProcActivity SET IsActive = 1 WHERE ProcedureName = 'proc_name'
EXEC proc_name
UPDATE ProcActivity SET IsActive = 0 WHERE ProcedureName = 'proc_name'
But SQL Server executes this as one batch, so the value of 1 isn't visible (the UPDATE isn't committed) until the procedure has finished.
So, how do I commit this UPDATE? I tried COMMIT, but it didn't work... I can't use GO, because the code is wrapped in an IF statement...
Don't use transactions this way because of visibility, unless you want to use extra locking hints, which I would not do personally.
If you want "only one stored proc execution current" then I would consider sp_getapplock and sp_releaseapplock in the stored procedure.
This will enforce "single threaded" execution.
There are other questions here that show how to use it.
And, to abort other calls, set @LockTimeout = 0; if the return code is less than zero, you know that you need to abort the current call (see the sketch below).
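A minimal sketch of that pattern (the proc name is hypothetical; @LockOwner = 'Session' avoids needing an open transaction):
CREATE PROCEDURE dbo.proc_name   -- hypothetical name
AS
BEGIN
    DECLARE @rc int
    EXEC @rc = sp_getapplock
        @Resource    = 'proc_name',
        @LockMode    = 'Exclusive',
        @LockOwner   = 'Session',
        @LockTimeout = 0            -- don't wait: abort if already running
    IF @rc < 0
        RETURN                      -- a previous execution still holds the lock

    -- ... the actual updating work ...

    EXEC sp_releaseapplock @Resource = 'proc_name', @LockOwner = 'Session'
END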
We have the following situation:
A stored procedure is invoked by a middleware and is given an XML file as a parameter. The procedure then parses the XML file and inserts values into temporary tables inside a loop. After looping, the values in the temporary tables are inserted into physical tables.
The problem is that the stored procedure has a relatively long run time (about 5 minutes). In this period, it is likely to be invoked a second time, which would cause both processes to be suspended.
Now my question:
How can we avoid a second execution of a Stored Procedure if it is already running?
Best regards
I would recommend designing your application layer to prevent multiple instances of this process being run at once. For example, you could move the logic into a queue that is processed 1 message at a time. Another option would be locking at the application level to prevent the database call from being executed.
SQL Server does have a locking mechanism to ensure a block of code is not run multiple times: an "app lock". This is similar in concept to the lock statement in C# or the semaphores you might see in other languages.
To acquire an application lock, call sp_getapplock. For example:
begin tran
exec sp_getapplock @Resource = 'MyExpensiveProcess', @LockMode = 'Exclusive', @LockOwner = 'Transaction'
This call will block if another process has acquired the lock. If a second RPC call tries to run this process, and you would rather have the process return a helpful error message, you can pass in a @LockTimeout of 0 and check the return code.
For example, the code below raises an error if it could not acquire the lock. Your code could return something else that the application interprets as "process is already running, try again later":
begin tran
declare @result int
exec @result = sp_getapplock @Resource = 'MyExpensiveProcess', @LockMode = 'Exclusive', @LockOwner = 'Transaction', @LockTimeout = 0
if @result < 0
begin
    rollback
    raiserror (N'Could not acquire application lock', 16, 1)
end
To release the lock, commit the transaction (a lock owned by the transaction is released automatically at commit or rollback), or call sp_releaseapplock with the matching lock owner:
exec sp_releaseapplock @Resource = 'MyExpensiveProcess', @LockOwner = 'Transaction'
Stored procedures are meant to be run multiple times and in parallel as well. The idea is to reuse the code.
If you want to avoid multiple runs for the same input, you need to take care of it manually, by implementing a condition check on the input or by using some locking mechanism.
If you don't want your procedure to run in parallel at all (regardless of input), the best strategy is to acquire a lock using an entry in a DB table, or using global variables, depending on the DBMS you are using. A sketch of the table-based approach follows.
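For example, a single UPDATE can claim the flag atomically, which also sidesteps the visibility problem from the first question (table and column names taken from that example; this assumes no enclosing transaction, so each UPDATE autocommits):
-- claim the flag; only one session's UPDATE will find IsActive = 0
UPDATE ProcActivity SET IsActive = 1
WHERE ProcedureName = 'proc_name' AND IsActive = 0

IF @@ROWCOUNT = 0
    RETURN  -- someone else is already running; abort

EXEC proc_name

-- release the flag
UPDATE ProcActivity SET IsActive = 0 WHERE ProcedureName = 'proc_name'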
You can check if the stored procedure is already running using exec sp_who2. This may be an approach to consider: in your SP, check this first and simply exit if it is already running; it will run again the next time the job executes.
You would need to filter out the current thread: make sure the count of that SP is 1 (1 will be the current process; 2 means it is already running), or have a helper SP that is called first.
Here are other ideas: Check if stored procedure is running
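A minimal sketch of that check using the DMVs instead of parsing sp_who2 output (the procedure name is hypothetical; this requires VIEW SERVER STATE permission):
-- exit if another session is already running this procedure
IF EXISTS (SELECT 1
           FROM sys.dm_exec_requests r
           CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
           WHERE t.objectid = OBJECT_ID('dbo.MyLongProc')
             AND r.session_id <> @@SPID)   -- ignore the current call
    RETURN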
In order to retrieve an ID, I first do a SELECT and then an UPDATE, in two consecutive queries.
The problem is that I am having problems with locked rows. I've read that putting both statements, the SELECT and the UPDATE, in one stored procedure helps with the locks. Is this true?
The queries I run are:
select counter
from dba.counter_list
where table_name = :TableName
update dba.counter_list
set counter = :NewCounter
where table_name = :TableName
The problem is that multiple users can select the same row at the same time, and it's also possible that they update the same row.
Assumptions:
you're using Sybase ASE
your select returns a single value for counter
you may want the old counter value for some purpose other than performing the update
Consider the following update statement which should eliminate any race conditions that may occur with multiple users running your select/update logic concurrently:
declare @counter int -- change to the appropriate datatype
update dba.counter_list
set @counter = counter, -- grab current value
    counter = :NewCounter -- set to new value
where table_name = :TableName
select @counter -- send previous counter value to client
the update obtains an exclusive lock on the desired row (or page/table depending on table design and locking scheme)
with an exclusive lock in place you're able to retrieve the current value and set the new value with a single statement
Whether you submit the above via a SQL batch or a stored proc call is up to you and your DBA to decide ...
if statement cache is disabled, a SQL batch will need to be compiled each time it's submitted to the dataserver
if statement cache is enabled, and you submit this SQL batch on a regular basis then there's a chance the previous query plan is still in statement/procedure cache thus eliminating the (costly) compilation step
if a copy of the previous stored proc (query) plan is not in procedure cache then you'll incur the (costly) compilation step when loading a (proc) query plan into procedure cache
a stored proc is typically easier to replace in the event of a syntax/logic/performance issue (as opposed to editing, and possibly compiling, a front-end application)
... add your (least) favorite argument for SQL batch vs stored proc (vs prepared statement?) vs ??? ...
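For reference, a minimal sketch of the same logic wrapped in a stored proc (Sybase ASE syntax; the proc name and parameter datatypes are assumptions):
create procedure dba.swap_counter
    @TableName  varchar(30),
    @NewCounter int
as
begin
    declare @counter int
    -- grab the old value and set the new one under a single exclusive lock
    update dba.counter_list
       set @counter = counter,
           counter  = @NewCounter
     where table_name = @TableName
    -- send the previous counter value to the client
    select @counter
end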
Is the table counter_list accessed by multiple clients concurrently?
The best practice for OLTP is to call a stored procedure that performs the update logic in one transaction.
Check that the table dba.counter_list has an index on the column table_name.
Check also that it uses row-level locking.
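A sketch of both checks in Sybase ASE syntax (the index name is an assumption):
-- switch the table to row-level (datarows) locking
alter table dba.counter_list lock datarows

-- index the lookup column so the update touches a single row
create unique index counter_list_name_idx on dba.counter_list (table_name)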
So I'm tracking down a potential bug in a sync process I'm in charge of (written by someone else). When viewing one of the stored procedures being called, I noticed something peculiar. Based on my understanding of RETURN, anything after the return will not be executed. However, I am not positive this is the case in SQL. Based on the chunk of SQL below, will the DELETE statement ever run? Or does the SP return information to signify whether rows were deleted (such as how many rows, whether it was successful, etc.)? I am assuming this is a bug in the SP, but want to confirm before taking action. Thanks in advance.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[DeleteSalesforce_Contacts]
AS
Return
Delete From dbo.Contacts Where IsDeleted = 1
GO
The documentation is pretty clear on this:
"Exits unconditionally from a query or procedure. RETURN is immediate
and complete and can be used at any point to exit from a procedure,
batch, or statement block. Statements that follow RETURN are not
executed."
The delete statement won't be executed.
The RETURN statement takes an optional integer parameter; to use a query as the value, you would need to wrap the SELECT in parentheses. Example:
return (select top 1 id from SomeTable)
The delete would never happen when the proc is executed.
The only time a statement after a RETURN is executed when a proc runs is if a GOTO sent control past the RETURN to a label further down. Code like this was sometimes written to handle errors before TRY...CATCH blocks were allowed in SQL Server.
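A minimal sketch of that old pattern (the proc name is hypothetical, in the style used before TRY...CATCH):
CREATE PROCEDURE dbo.GotoDemo   -- hypothetical name
AS
BEGIN
    UPDATE dbo.Contacts SET IsDeleted = 0 WHERE 1 = 0
    IF @@ERROR <> 0 GOTO ErrorHandler   -- jump past the RETURN on error
    RETURN                              -- normal exit

ErrorHandler:
    RAISERROR (N'Update failed', 16, 1) -- reachable only via the GOTO
END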
We have introduced a new data access framework for calling SQL Stored procedures. When calling a stored procedure that returns a recordset, we've run into problems where that stored procedure also performs an update (insert/update/delete) of some sort:
"Cannot change the ActiveConnection property of a Recordset object which has a Command object as its source."
The solution to this is to add 'SET NOCOUNT ON' to the top of the stored procedure. This works just fine, and, of course, it also has a touted performance enhancement.
We are recommending to developers that when they want to write code to call an existing stored procedure, they must also refactor the stored procedure itself to include SET NOCOUNT ON.
But this got me wondering: what would be the potential consequences/risks of performing a blanket update of all stored procedures to include SET NOCOUNT ON? Under what scenarios would this break an SP's functionality? (given that the @@ROWCOUNT function is updated even when SET NOCOUNT is ON)
Help, as always, much appreciated.
I think the main danger would be if any of your existing processes look for and/or assume that the rowcount will be returned without explicitly querying the value of @@ROWCOUNT.
It's possible that somewhere in your code is a stored proc that gets executed, and the application waits for the return row value to know that it completed, in which case the app would hang indefinitely.
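To illustrate the distinction (a hypothetical sketch, reusing the dbo.Contacts table from the earlier question): without NOCOUNT, each DML statement sends a "rows affected" message that client libraries can surface (e.g. as extra empty recordsets in ADO); with NOCOUNT ON those messages disappear, but @@ROWCOUNT is still populated:
CREATE PROCEDURE dbo.NoCountDemo   -- hypothetical name
AS
BEGIN
    SET NOCOUNT ON                     -- no DONE_IN_PROC messages sent to the client
    UPDATE dbo.Contacts SET IsDeleted = 0 WHERE 1 = 0
    SELECT @@ROWCOUNT AS rows_updated  -- still works: returns 0 here
END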
I am currently working on a legacy application and have inherited some shady SQL with it. The project has never been put into production, but is now on its way. During initial testing I found a bug. The application calls a stored procedure that calls many other stored procedures, creates cursors, loops through cursors, and many other things. FML.
Currently, the way the app is designed, it calls the stored procedure, then reloads the UI with a fresh set of data. Of course, the data we want to display is still being processed on the SQL Server side, so the UI results are not complete when displayed. To fix this, I just made a thread sleep for 30 seconds before loading the UI. This is a terrible hack, and I would like to fix this properly on the SQL side of things.
My question is: is it worthwhile to convert the branching stored procedures to functions? Would this make the main-line stored procedure wait for a return value before processing on?
Here is the stored procedure:
ALTER PROCEDURE [dbo].[ALLOCATE_BUDGET]
@budget_scenario_id uniqueidentifier
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
DECLARE @constraint_type varchar(25)
-- get project cache id and constraint type
SELECT @constraint_type = CONSTRAINT_TYPE
FROM BUDGET_SCENARIO WHERE BUDGET_SCENARIO_ID = @budget_scenario_id
-- constraint type is Region by Region
IF (@constraint_type = 'Region by Region')
EXEC BUDGET_ALLOCATE_SCENARIO_REGIONBYREGION @budget_scenario_id
-- constraint type is City Wide
IF (@constraint_type = 'City Wide')
EXEC BUDGET_ALLOCATE_SCENARIO_CITYWIDE @budget_scenario_id
-- constraint type is Do Nothing
IF (@constraint_type = 'Do Nothing')
EXEC BUDGET_ALLOCATE_SCENARIO_DONOTHING @budget_scenario_id
-- constraint type is Unconstrained
IF (@constraint_type = 'Unconstrained')
EXEC BUDGET_ALLOCATE_SCENARIO_UNCONSTRAINED @budget_scenario_id
--set budget scenario status to "Allocated", so reporting tabs in the application are populated
EXEC BUDGET_UPDATE_SCENARIO_STATUS @budget_scenario_id, 'Allocated'
END
To avoid displaying an incomplete resultset in the calling .NET application UI before the cursors in the branching calls are completed, is it worthwhile to convert these stored procedures into functions with return values? Would this force SQL to wait before completing the main call to the [ALLOCATE_BUDGET] stored procedure?
The last SQL statement in the stored procedure sets a status to "Allocated". This is happening before the cursors in the previous calls have finished processing. Does making these calls into function calls affect how the stored procedure returns focus to the application?
Any feedback is greatly appreciated. I have a feeling I am correct in going towards SQL functions but not 100% sure.
Additional information:
Executing code uses [async=true] in the connection string
Executing code uses the [SqlCommand].[ExecuteNonQuery] method
How are you calling the procedure? I'm going to guess that you are using ExecuteNonQuery() to call the procedure. Try calling the procedure using ExecuteScalar() and modify the procedure like the following:
ALTER PROCEDURE [dbo].[ALLOCATE_BUDGET]
@budget_scenario_id uniqueidentifier
AS
BEGIN
...
RETURN 1   -- RETURN takes an integer status code; True is not valid T-SQL
END
This should cause your data execution code in .NET to wait for the procedure to complete before continuing. If you don't want your UI to "hang" during the procedure execution, use a BackgroundWorker or something similar to run the query on a separate thread and look for the completed callback to update the UI with the results.
You could also try using the RETURN statement in your child stored procedures, which can be used to return a result code back to the parent procedure. You can call a child procedure with something along the lines of "exec @myresultcode = BUDGET_ALLOCATE_SCENARIO_REGIONBYREGION @budget_scenario_id". I think this should force the parent procedure to wait for the child procedure to finish.
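A sketch of what that parent-side check might look like inside ALLOCATE_BUDGET, where @budget_scenario_id is the proc's parameter (this assumes the child procs return 0 on success, which is the T-SQL default):
DECLARE @myresultcode int

-- the parent blocks here until the child procedure returns
EXEC @myresultcode = BUDGET_ALLOCATE_SCENARIO_REGIONBYREGION @budget_scenario_id

IF @myresultcode <> 0
BEGIN
    RAISERROR (N'Allocation step failed; status not updated', 16, 1)
    RETURN
END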
I have never heard that it's possible for a stored procedure to return to the caller while still executing in the background.
In fact, I'll go as far as to say I don't believe that's happening. If you're seeing a difference between the UI and what you believe the SP should have done, then I believe it has a different cause.
Does the connection string have async=true in it? Is the SP being executed by using BeginExecuteReader or Begin-anything else?
At the risk of sounding too simple, I suggest you create a table which can store the status of the stored proc: a flag that can indicate that the entire process and its sub-processes have finished executing.
You could query this from UI to see if things are done by polling this status code.
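A minimal sketch of that idea (the table and column names are assumptions):
-- hypothetical status table polled by the UI
CREATE TABLE dbo.ProcStatus
(
    proc_name   sysname NOT NULL PRIMARY KEY,
    is_complete bit     NOT NULL DEFAULT 0
)

-- first statement of the long-running procedure
UPDATE dbo.ProcStatus SET is_complete = 0 WHERE proc_name = 'ALLOCATE_BUDGET'

-- last statement of the long-running procedure
UPDATE dbo.ProcStatus SET is_complete = 1 WHERE proc_name = 'ALLOCATE_BUDGET'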
Does making these calls into function calls affect how the stored procedure returns focus to the application?
No.
The stored procedure has no idea that its caller is a UI application. There is nothing in the stored procedure that can influence the behavior of the UI application.
Most likely the UI application is calling the stored procedure on one connection, and then refreshing its data on another connection. There's a plethora of ways of getting the UI to delay refreshing, but the one I'll push is that there should be a single database connection.
Personally, I would be far more concerned about replacing those cursors than converting this to functions.
And I would not run the last proc until checking for a valid return code from the previous procs (this thing is in real trouble if one of the preceding procs dies!)
Also consider if this should all be in a transaction (are these procs changing data in a table?)
(Am I the only one who finds it funny you have a proc to run the process for Do Nothing?)