I have a stored procedure in Oracle (which schedules a one-time job to run another procedure, if that is relevant). The job calls another stored procedure which runs for a few minutes, performs inserts, updates, and deletes, and also uses loops. Now, while the long procedure is running, if there is another call for it to run, is it possible to prevent them from executing simultaneously? And even better, to make the second one execute once the previous one has finished, i.e. queue them?
To prevent two stored procedures from running at the same time, you could use DBMS_LOCK to get an exclusive lock (or just try to update the same row in a given table).
The procedure DBMS_LOCK.ALLOCATE_UNIQUE was designed for exactly this purpose.
Assign some unique lockname string and call the procedure at the beginning of the critical sequence in your procedure. You will get a lockhandle as an output.
Then call DBMS_LOCK.REQUEST to start the serialized processing:
DBMS_LOCK.ALLOCATE_UNIQUE( v_lockname, v_lockhandle);
v_res := DBMS_LOCK.REQUEST( lockhandle=>v_lockhandle, release_on_commit => TRUE);
At the end you must release the handle so that the next run can proceed:
v_res := DBMS_LOCK.RELEASE (v_lockhandle);
A good practice is to release the lock in the EXCEPTION section as well, so that a failure does not leave it held.
Please check the possible options in the documentation (such as release_on_commit) and adjust them to your needs.
Some care should also be taken with the return values of the REQUEST and RELEASE functions.
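A minimal sketch of the whole pattern, with a made-up lock name and procedure body; note that with REQUEST's default timeout, a second caller simply waits until the first releases the lock, which gives you the queuing behavior you asked about:

CREATE OR REPLACE PROCEDURE long_running_job IS
  v_lockname   VARCHAR2(128) := 'LONG_RUNNING_JOB_LOCK';  -- illustrative name
  v_lockhandle VARCHAR2(128);
  v_res        INTEGER;
BEGIN
  -- Map the lock name to a handle (safe to call on every run).
  DBMS_LOCK.ALLOCATE_UNIQUE(v_lockname, v_lockhandle);
  -- Request an exclusive lock; the default timeout blocks until
  -- the previous holder releases, so concurrent calls queue up.
  v_res := DBMS_LOCK.REQUEST(lockhandle => v_lockhandle,
                             release_on_commit => FALSE);
  IF v_res NOT IN (0, 4) THEN  -- 0 = success, 4 = we already own the lock
    RAISE_APPLICATION_ERROR(-20001, 'Could not acquire lock, status ' || v_res);
  END IF;
  -- ... the inserts, updates, deletes and loops go here ...
  v_res := DBMS_LOCK.RELEASE(v_lockhandle);
EXCEPTION
  WHEN OTHERS THEN
    v_res := DBMS_LOCK.RELEASE(v_lockhandle);  -- never leave the lock held
    RAISE;
END;
/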
Using MS SQL Server: a trigger calls a stored procedure which internally makes a SELECT. Will the returned values be the new or the old ones?
I know that inside the trigger I can access them via FROM INSERTED i INNER JOIN DELETED, but in this case I want to reuse an existing stored procedure (which I cannot change) that internally makes a SELECT on the triggered table and processes some logic with the results. I just want to know whether I can be sure the existing logic will work or not (by accessing the NEW values).
I could simply try to simulate it with one update... but maybe there are other cases (for example, involving transactions or something else) that I may not be aware of and would never test, which could produce a different result.
So I decided to ask someone who might know better. Thank you.
AFTER triggers (the default) fire after the DML action. When the proc is called within the trigger, the tables will reflect changes made by the statement that fired the trigger, as well as changes made within the trigger before calling the proc.
Note that the changes are uncommitted until the trigger completes (or until the enclosing explicit transaction is later committed).
Since the procedure is running in the same transaction as the (presumably AFTER) trigger, it will see the uncommitted data.
I hope you see the implications of that: the trigger executes as part of the transaction started by the DML statement that caused it to fire, so the stored procedure is part of the same transaction. A "complicated" stored procedure means that transaction stays open longer, holds locks longer, makes responses back to users slower, and so on.
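A minimal sketch to illustrate, with made-up table and procedure names (the procedure returns a result set purely for demonstration; as the documentation quote further down notes, don't do that in real triggers):

CREATE TABLE dbo.Orders (Id INT PRIMARY KEY, Qty INT);
GO
CREATE PROCEDURE dbo.ReadOrders
AS
    -- Runs inside the firing statement's transaction,
    -- so it sees the still-uncommitted NEW values.
    SELECT Id, Qty FROM dbo.Orders;
GO
CREATE TRIGGER trg_Orders_Upd ON dbo.Orders
AFTER UPDATE
AS
    EXEC dbo.ReadOrders;
GO
INSERT INTO dbo.Orders VALUES (1, 5);
UPDATE dbo.Orders SET Qty = 10 WHERE Id = 1;  -- the proc's SELECT returns Qty = 10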
Also, you said:
internally makes a select on the triggered table and processes some logic with them.
If you just mean that the procedure is selecting the data in order to do some complex processing and then write it somewhere else inside the database, OK: that's not great (for the reasons given above), but it will "work".
But just in case you mean you are doing some work on the data in the procedure and then returning it to the client application, don't do that:
The ability to return results from triggers will be removed in a future version of SQL Server. Triggers that return result sets may cause unexpected behavior in applications that aren't designed to work with them. Avoid returning result sets from triggers in new development work, and plan to modify applications that currently do. To prevent triggers from returning result sets, set the disallow results from triggers option to 1.
I've written a stored procedure called FooUpsert that inserts and updates data in various tables. It takes a number of numeric and string parameters that provide the data. This procedure is in good shape and I don't want to modify it.
Next, I'm writing another stored procedure that serves as a sort of bulk insert/update.
It is of paramount importance that the procedure do its work as an atomic transaction. It would be unacceptable for some data to be inserted/updated and some not.
It seemed to me that the appropriate way of doing this would be to write a stored procedure with a table-valued parameter, say FooUpsertBulk. I began to write it with a table parameter that holds data similar to what is passed to FooUpsert, the idea being that I can read it one row at a time and invoke FooUpsert with the values in each row. I realize that this may not be best practice, but once again, FooUpsert is already written, plus FooUpsertBulk will be run at most a few times a day.
The problem is that in FooUpsertBulk, I don't know how to iterate over the rows and pass the values in each row as parameters to FooUpsert. I do realize that I could change FooUpsert to accept a table-valued parameter as well, but I don't want to rewrite it.
Can one of you SQL ninjas out there please show me how to do this?
My SQL Server is MS SQL 2008.
Wrapping the various queries into an explicit transaction (i.e. BEGIN TRAN ... COMMIT or ROLLBACK) makes all of it one atomic operation. You can:
1) Start the transaction from the app code (assuming that FooUpsert is called by app code) and hence deal with the commit and rollback there as well. This still leaves lots of small operations, but it is a single transaction and no code changes are needed.
2) Start the transaction in a proc, calling FooUpsert in a loop that is contained in a TRY / CATCH so that you can handle the ROLLBACK if any call to FooUpsert fails (see the sketch after this list).
3) Copy the code from FooUpsert into a new FooUpsertBulk that accepts a TVP from the app code and handles everything as set-based operations. Adapt each of the queries in FooUpsertBulk from handling various input params to getting fields from the TVP table variable once the TVP is joined into the query. Keep FooUpsert in place until FooUpsertBulk is working.
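A minimal sketch of option 2, assuming hypothetical FooUpsert parameters @Id and @Name (substitute the real parameter list); the CATCH block uses RAISERROR because SQL Server 2008 predates THROW:

-- Hypothetical TVP type mirroring FooUpsert's parameters.
CREATE TYPE dbo.FooRow AS TABLE (Id INT, Name NVARCHAR(100));
GO
CREATE PROCEDURE dbo.FooUpsertBulk
    @Rows dbo.FooRow READONLY
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @Id INT, @Name NVARCHAR(100);
    BEGIN TRY
        BEGIN TRANSACTION;
        DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
            SELECT Id, Name FROM @Rows;
        OPEN cur;
        FETCH NEXT FROM cur INTO @Id, @Name;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            EXEC dbo.FooUpsert @Id = @Id, @Name = @Name;  -- one row at a time
            FETCH NEXT FROM cur INTO @Id, @Name;
        END;
        CLOSE cur;
        DEALLOCATE cur;
        COMMIT TRANSACTION;  -- every row succeeded
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;  -- undo all rows
        DECLARE @msg NVARCHAR(2048) = ERROR_MESSAGE();
        RAISERROR(@msg, 16, 1);  -- re-raise for the caller
    END CATCH;
END;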
I have a stored procedure which is supposed to execute at a regular interval to do some heavy background processing on the backend. The amount of data the stored procedure has to deal with is variable.
I intend to set up the stored procedure as a scheduled job.
Because the processing must be done sequentially, I need to ensure only one instance of the stored procedure runs at any time.
Given how heavy the data is, it is possible that the scheduled job may activate another instance before the first one has had time to complete.
My question is: how does one check for other instances of the stored procedure and abort if one exists already?
Create a semaphore table to set flags, and check those flags in your procedure.
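A minimal sketch of that idea with made-up names (on SQL Server, sp_getapplock is a common alternative that avoids a stale flag if the procedure dies mid-run):

CREATE TABLE dbo.JobSemaphore (JobName SYSNAME PRIMARY KEY, IsRunning BIT NOT NULL);
INSERT INTO dbo.JobSemaphore VALUES (N'HeavyJob', 0);
GO
CREATE PROCEDURE dbo.HeavyJob
AS
BEGIN
    SET NOCOUNT ON;
    -- A single UPDATE claims the flag atomically, so two instances
    -- cannot both see IsRunning = 0 and proceed.
    UPDATE dbo.JobSemaphore
       SET IsRunning = 1
     WHERE JobName = N'HeavyJob' AND IsRunning = 0;
    IF @@ROWCOUNT = 0
        RETURN;  -- another instance is already running: abort
    BEGIN TRY
        -- ... the heavy sequential processing goes here ...
        PRINT 'working';
    END TRY
    BEGIN CATCH
        DECLARE @err NVARCHAR(2048) = ERROR_MESSAGE();  -- optionally log this
    END CATCH;
    -- Always clear the flag so the next scheduled run can start.
    UPDATE dbo.JobSemaphore SET IsRunning = 0 WHERE JobName = N'HeavyJob';
END;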
I have a stored procedure that causes blocking on my SQL Server database. Whenever it blocks for more than X seconds, we get notified of the query being run, and it looks similar to the code below.
CREATE PROC [dbo].[sp_problemprocedure] (
    @orderid INT
)
AS
-- procedure code
How can I tell what the value is for @orderid? I'd like to know the value because this procedure runs 100+ times a day but only causes blocking a handful of times; if we can find some sort of pattern among the order IDs, maybe I'd be able to track down the problem.
The procedure is being called from a .NET application if that helps.
Have you tried printing it from inside the procedure?
http://msdn.microsoft.com/en-us/library/ms176047.aspx
If it's being called from a .NET application, you could easily log the parameter being passed from the .NET app. If you don't have access to that code, you can also use SQL Server Profiler. Filters can be set on the command type (i.e. procedure calls only) as well as on the database being hit; otherwise you will be overwhelmed by all the information a trace can produce.
Link: Using SQL Server Profiler
1) Rename the procedure.
2) Create a logging table.
3) Create a new procedure (same signature/params) which first logs the params and a starting timestamp, calls the original, and then logs the ending timestamp after the call finishes.
4) Create a synonym for this new proc with the name of the original.
Now you have a log for all calls made by whatever app...
You can disable/enable the logging at any time by simply redefining the synonym to point to the logging wrapper or to the original.
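A minimal sketch of the wrapper, with made-up names; assume the original proc has been renamed to sp_problemprocedure_impl:

CREATE TABLE dbo.ProcCallLog (
    LogId     INT IDENTITY PRIMARY KEY,
    OrderId   INT,
    StartedAt DATETIME,
    EndedAt   DATETIME
);
GO
CREATE PROCEDURE dbo.sp_problemprocedure_logged
    @orderid INT
AS
BEGIN
    DECLARE @logid INT;
    -- Log the parameter and the start time before the real work.
    INSERT INTO dbo.ProcCallLog (OrderId, StartedAt)
    VALUES (@orderid, GETDATE());
    SET @logid = SCOPE_IDENTITY();
    EXEC dbo.sp_problemprocedure_impl @orderid;  -- the renamed original
    -- Record the end time so slow or blocked calls stand out.
    UPDATE dbo.ProcCallLog SET EndedAt = GETDATE() WHERE LogId = @logid;
END;
GO
-- The synonym lets existing callers keep using the original name.
CREATE SYNONYM dbo.sp_problemprocedure FOR dbo.sp_problemprocedure_logged;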
The easiest way would be to run a profiler trace. You'll want to capture calls to the stored procedure.
Really though, that will only tell you part of the story. Personally, I would start with the code. Try to batch big updates into smaller batches, and try to avoid long-running explicit transactions if they're not necessary. Look at your triggers (if any) and cascading foreign keys and make sure those are efficient.
The easiest way is to do the following:
1) In .NET, grab the date-time just before running the procedure.
2) In .NET, grab the date-time after the procedure completes.
3) In .NET, do some date-time math, and if the call was "slow", write those start and end date-times, user info, all the parameters, etc. to a log file.
I am currently working on a legacy application and have inherited some shady SQL with it. The project has never been put into production, but it is now on its way. During initial testing I found a bug. The application calls a stored procedure that calls many other stored procedures, creates cursors, loops through cursors, and many other things. FML.
Currently, the way the app is designed, it calls the stored procedure and then reloads the UI with a fresh set of data. Of course, the data we want to display is still being processed on the SQL Server side, so the UI results are not complete when displayed. To fix this, I just made the thread sleep for 30 seconds before loading the UI. This is a terrible hack, and I would like to fix it properly on the SQL side of things.
My question is: is it worthwhile to convert the branching stored procedures to functions? Would this make the main-line stored procedure wait for a return value before continuing?
Here is the stored procedure:
ALTER PROCEDURE [dbo].[ALLOCATE_BUDGET]
    @budget_scenario_id uniqueidentifier
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    DECLARE @constraint_type varchar(25)

    -- get project cache id and constraint type
    SELECT @constraint_type = CONSTRAINT_TYPE
    FROM BUDGET_SCENARIO WHERE BUDGET_SCENARIO_ID = @budget_scenario_id

    -- constraint type is Region by Region
    IF (@constraint_type = 'Region by Region')
        EXEC BUDGET_ALLOCATE_SCENARIO_REGIONBYREGION @budget_scenario_id

    -- constraint type is City Wide
    IF (@constraint_type = 'City Wide')
        EXEC BUDGET_ALLOCATE_SCENARIO_CITYWIDE @budget_scenario_id

    -- constraint type is Do Nothing
    IF (@constraint_type = 'Do Nothing')
        EXEC BUDGET_ALLOCATE_SCENARIO_DONOTHING @budget_scenario_id

    -- constraint type is Unconstrained
    IF (@constraint_type = 'Unconstrained')
        EXEC BUDGET_ALLOCATE_SCENARIO_UNCONSTRAINED @budget_scenario_id

    -- set budget scenario status to "Allocated", so reporting tabs in the application are populated
    EXEC BUDGET_UPDATE_SCENARIO_STATUS @budget_scenario_id, 'Allocated'
END
To avoid displaying an incomplete result set in the calling .NET application UI before the cursors in the branching calls have completed, is it worthwhile to convert these stored procedures into functions with return values? Would this force SQL to wait before completing the main call to the [ALLOCATE_BUDGET] stored procedure?
The last SQL statement in the stored procedure sets the status to "Allocated". This is happening before the cursors in the previous calls have finished processing. Does making these calls into function calls affect how the stored procedure returns focus to the application?
Any feedback is greatly appreciated. I have a feeling I am correct in going towards SQL functions but not 100% sure.
Additional information:
Executing code uses async=true in the connection string.
Executing code uses the SqlCommand.ExecuteNonQuery method.
How are you calling the procedure? I'm going to guess that you are using ExecuteNonQuery(). Try calling it with ExecuteScalar() instead, and modify the procedure like the following:
ALTER PROCEDURE [dbo].[ALLOCATE_BUDGET]
    @budget_scenario_id uniqueidentifier
AS
BEGIN
    ...
    SELECT 1;  -- gives ExecuteScalar() a value to read once everything above has finished
END
This should cause your data-access code in .NET to wait for the procedure to complete before continuing. If you don't want your UI to "hang" during the procedure execution, use a BackgroundWorker or something similar to run the query on a separate thread, and use the completed callback to update the UI with the results.
You could also try using the RETURN statement in your child stored procedures, which can be used to pass a result code back to the parent procedure. You can call a child procedure with something along the lines of EXEC @myresultcode = BUDGET_ALLOCATE_SCENARIO_REGIONBYREGION @budget_scenario_id. I think this should force the parent procedure to wait for the child procedure to finish.
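A minimal sketch of that pattern as it would sit inside [ALLOCATE_BUDGET], assuming each child proc RETURNs 0 on success (anything else is treated as failure here):

DECLARE @myresultcode INT;
EXEC @myresultcode = dbo.BUDGET_ALLOCATE_SCENARIO_REGIONBYREGION @budget_scenario_id;
IF @myresultcode <> 0
BEGIN
    -- Stop before marking the scenario "Allocated".
    RAISERROR('Region-by-region allocation failed with code %d.', 16, 1, @myresultcode);
    RETURN;
END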
I have never heard that it's possible for a stored procedure to return to the caller while still executing in the background.
In fact, I'll go as far as to say I don't believe that's happening. If you're seeing a difference between the UI and what you believe the SP should have done, then I believe it has a different cause.
Does the connection string have async=true in it? Is the SP being executed by using BeginExecuteReader or Begin-anything else?
At the risk of sounding too simple, I suggest you create a table which stores the status of the stored proc: some flag that can indicate that the entire process and its sub-processes have finished executing.
You could query this from the UI to see whether things are done, by polling that status code.
Does making these calls into function calls affect how the stored procedure returns focus to the application?
No.
The stored procedure has no idea that its caller is a UI application. There is nothing in the stored procedure that can influence the behavior of the UI application.
Most likely the UI application is calling the stored procedure on one connection, and then refreshing its data on another connection. There's a plethora of ways of getting the UI to delay refreshing, but the one I'll push is that there should be a single database connection.
Personally, I would be far more concerned about replacing those cursors than converting this to functions.
And I would not run the last proc until checking for a valid return code from the previous procs (this thing is in real trouble if one of the preceding procs dies!)
Also consider if this should all be in a transaction (are these procs changing data in a table?)
(Am I the only one who finds it funny you have a proc to run the process for Do Nothing?)