I am currently working on a legacy application and have inherited some shady SQL with it. The project has never been put into production, but now it is on its way. During initial testing I found a bug. The application calls a stored procedure that calls many other stored procedures, creates cursors, loops through cursors, and many other things. FML.
Currently, the way the app is designed, it calls the stored procedure and then reloads the UI with a fresh set of data. Of course, the data we want to display is still being processed on the SQL Server side, so the UI results are not complete when displayed. To fix this, I just made the thread sleep for 30 seconds before loading the UI. This is a terrible hack and I would like to fix this properly on the SQL side of things.
My question is: is it worthwhile to convert the branching stored procedures to functions? Would this make the main stored procedure wait for a return value before continuing?
Here is the stored procedure:
ALTER PROCEDURE [dbo].[ALLOCATE_BUDGET]
    @budget_scenario_id uniqueidentifier
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    DECLARE @constraint_type varchar(25)

    -- get project cache id and constraint type
    SELECT @constraint_type = CONSTRAINT_TYPE
    FROM BUDGET_SCENARIO WHERE BUDGET_SCENARIO_ID = @budget_scenario_id

    -- constraint type is Region by Region
    IF (@constraint_type = 'Region by Region')
        EXEC BUDGET_ALLOCATE_SCENARIO_REGIONBYREGION @budget_scenario_id

    -- constraint type is City Wide
    IF (@constraint_type = 'City Wide')
        EXEC BUDGET_ALLOCATE_SCENARIO_CITYWIDE @budget_scenario_id

    -- constraint type is Do Nothing
    IF (@constraint_type = 'Do Nothing')
        EXEC BUDGET_ALLOCATE_SCENARIO_DONOTHING @budget_scenario_id

    -- constraint type is Unconstrained
    IF (@constraint_type = 'Unconstrained')
        EXEC BUDGET_ALLOCATE_SCENARIO_UNCONSTRAINED @budget_scenario_id

    -- set budget scenario status to "Allocated", so reporting tabs in the application are populated
    EXEC BUDGET_UPDATE_SCENARIO_STATUS @budget_scenario_id, 'Allocated'
END
To avoid displaying an incomplete result set in the calling .NET application UI before the cursors in the branching calls have completed, is it worthwhile to convert these stored procedures into functions with return values? Would this force SQL to wait before completing the main call to the [ALLOCATE_BUDGET] stored procedure?
The last SQL statement call in the stored procedure sets a status to "Allocated". This is happening before the cursors in the previous calls are finished processing. Does making these calls into function calls affect how the stored procedure returns focus to the application?
Any feedback is greatly appreciated. I have a feeling I am correct in going towards SQL functions but not 100% sure.
Additional information:
Executing code uses [async=true] in the connection string
Executing code uses the [SqlCommand].[ExecuteNonQuery] method
How are you calling the procedure? I'm going to guess that you are using ExecuteNonQuery() to call the procedure. Try calling the procedure using ExecuteScalar() and modify the procedure like the following:
ALTER PROCEDURE [dbo].[ALLOCATE_BUDGET]
    @budget_scenario_id uniqueidentifier
AS
BEGIN
    ...
    -- return a scalar the caller can read with ExecuteScalar();
    -- T-SQL has no boolean literal, so select 1 for success
    SELECT 1 AS Result
END
This should cause your data execution code in .NET to wait for the procedure to complete before continuing. If you don't want your UI to "hang" during the procedure execution, use a BackgroundWorker or something similar to run the query on a separate thread and look for the completed callback to update the UI with the results.
You could also try using the RETURN statement in your child stored procedures, which can be used to return a result code back to the parent procedure. You can call the child procedure with something along the lines of "exec @myresultcode = BUDGET_ALLOCATE_SCENARIO_REGIONBYREGION @budget_scenario_id". I think this should force the parent procedure to wait for the child procedure to finish.
I have never heard that it's possible for a stored procedure to return to the caller while still executing in the background.
In fact, I'll go as far as to say I don't believe that's happening. If you're seeing a difference between the UI and what you believe the SP should have done, then I believe it has a different cause.
Does the connection string have async=true in it? Is the SP being executed using BeginExecuteReader or any of the other Begin* methods?
At the risk of sounding too simple, I suggest you create a table which can store the status of the stored proc: a flag that indicates whether the entire process and its sub-processes have finished executing.
You could poll this status from the UI to see when everything is done.
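A minimal sketch of that idea (the table and column names here are made up, not part of the existing schema):

CREATE TABLE dbo.BUDGET_ALLOCATION_STATUS (
    budget_scenario_id uniqueidentifier PRIMARY KEY,
    status             varchar(20) NOT NULL,   -- e.g. 'Running' / 'Allocated' / 'Failed'
    updated_at         datetime NOT NULL DEFAULT (GETDATE())
)

-- at the start of ALLOCATE_BUDGET
UPDATE dbo.BUDGET_ALLOCATION_STATUS
SET status = 'Running', updated_at = GETDATE()
WHERE budget_scenario_id = @budget_scenario_id

-- as the very last statement, after all child procedures have finished
UPDATE dbo.BUDGET_ALLOCATION_STATUS
SET status = 'Allocated', updated_at = GETDATE()
WHERE budget_scenario_id = @budget_scenario_id

-- the UI polls something like this until it sees 'Allocated'
SELECT status
FROM dbo.BUDGET_ALLOCATION_STATUS
WHERE budget_scenario_id = @budget_scenario_id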
Does making these calls into function calls affect how the stored procedure returns focus to the application?
No.
The stored procedure has no idea that its caller is a UI application. There is nothing in the stored procedure that can influence the behavior of the UI application.
Most likely the UI application is calling the stored procedure on one connection, and then refreshing its data on another connection. There's a plethora of ways of getting the UI to delay refreshing, but the one I'll push is that there should be a single database connection.
Personally, I would be far more concerned about replacing those cursors than converting this to functions.
And I would not run the last proc until checking for a valid return code from the previous procs (this thing is in real trouble if one of the preceding procs dies!)
Also consider if this should all be in a transaction (are these procs changing data in a table?)
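For example, a hedged sketch of the parent procedure with a transaction and return-code checks (this assumes the child procedures are changed to RETURN a non-zero code on failure, which is not how they currently behave, and SQL Server 2005+ for TRY/CATCH):

ALTER PROCEDURE [dbo].[ALLOCATE_BUDGET]
    @budget_scenario_id uniqueidentifier
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @constraint_type varchar(25), @rc int
    SET @rc = 0

    SELECT @constraint_type = CONSTRAINT_TYPE
    FROM BUDGET_SCENARIO WHERE BUDGET_SCENARIO_ID = @budget_scenario_id

    BEGIN TRANSACTION
    BEGIN TRY
        IF (@constraint_type = 'Region by Region')
            EXEC @rc = BUDGET_ALLOCATE_SCENARIO_REGIONBYREGION @budget_scenario_id
        IF (@constraint_type = 'City Wide')
            EXEC @rc = BUDGET_ALLOCATE_SCENARIO_CITYWIDE @budget_scenario_id
        -- ... remaining branches ...

        IF (@rc <> 0)
            RAISERROR('Child allocation procedure reported failure', 16, 1)

        -- only mark the scenario as Allocated once everything above succeeded
        EXEC BUDGET_UPDATE_SCENARIO_STATUS @budget_scenario_id, 'Allocated'
        COMMIT TRANSACTION
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        DECLARE @err nvarchar(2048);
        SET @err = ERROR_MESSAGE();
        RAISERROR(@err, 16, 1);   -- re-raise so the application sees the failure
    END CATCH
END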
(Am I the only one who finds it funny you have a proc to run the process for Do Nothing?)
Related
Long-time reader, first-time poster.
We're observing something strange in our production instance of SQL Server 2012 after we migrated our application's databases from dev, to test, to production for our recent go-live.
We have a number of stored procedures that are called from a WCF web service to do various operations; some return result sets, others do not.
Some of these procedures call other (sub) procedures within them. These sub procedures have defined OUTPUT parameters. When the parent procedures are called in dev and test, they execute as expected and the returned result set is the final select statement.
But in our production environment, the parent procedures ARE running to completion when called, but instead of returning the expected result set like before, they return the sub procedure's OUTPUT parameter.
Below is an example excerpt from one of our procedures that demonstrates the issue:
CREATE PROCEDURE [dbo].[CHANGE_USER_DEPTID]
    @USERID VARCHAR(10),
    @DEPTID VARCHAR(10)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @ALTERNATEUSERID VARCHAR(10)
    DECLARE @EMPLID INT

    EXEC GET_ALTERNATEUSERID_SP @USERID, @ALTERNATEUSERID OUTPUT

    SELECT @EMPLID = EMPLID FROM ACTIVEDIRECTORY_VW WHERE ALTID = @ALTERNATEUSERID

    -- ...DO SOME VARIOUS PROCESSING...

    UPDATE DEPARTMENT_TABLE
    SET EMPLID = ...
    WHERE ...
END
OK, so this procedure's final statement is an update, and in our dev and test environments, when this CHANGE_USER_DEPTID procedure is called, it simply returns "Command(s) completed successfully." But in our production environment, the procedure returns a result set: the @ALTERNATEUSERID, which is the output parameter of the sub procedure. How is that possible? The main procedure never even selects that variable, and the main procedure doesn't even HAVE an output parameter defined.
This isn't a big deal on some of the calls in the WCF service, because the .NET method uses them in a cmd.ExecuteNonQuery() statement. But it IS causing problems for some others where we're expecting the final select statement to return a certain result set at the end of the procedure (an integer, for example), but instead it returns the output parameter of the sub procedure (say, a string), which is causing various things to blow up further down the line.
Has anyone ever experienced, or even heard of this issue?
To wrap this up:
It turned out that one of the child stored procedures inside the parent procedure (about three levels deep) had an extra SELECT statement at the end which someone had uncommented for testing purposes. That extra select statement was being returned all the way up the stack in the topmost stored procedure.
I'm running a stored procedure on server1 from my application. The stored procedure does a bunch of stuff and populates a table on server2 with the results from the procedure.
I'm using linked server to accomplish this.
When the stored procedure is done running the application continues and tries to do some manipulation of the result from the stored procedure.
My problem is that the results from the stored procedure have not been completely inserted into the tables yet, so the manipulation of the tables fails.
So my question is: is it possible to ensure the insert into the linked server is done synchronously? I would like the stored procedure not to return until the tables on the linked server are actually done.
You can use an output parameter on the first procedure. When the table on the second server has been populated, the output parameter value is returned to your application and indicates the operation is complete.
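A minimal sketch of that approach (the procedure, table, and parameter names here are invented for illustration):

ALTER PROCEDURE dbo.POPULATE_REMOTE_TABLE   -- hypothetical name
    @done bit OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    SET @done = 0;

    -- the INSERT over the linked server runs as part of this statement;
    -- control does not return here until it has completed
    INSERT INTO Server2.TargetDb.dbo.ResultTable (Col1, Col2)
    SELECT Col1, Col2 FROM #work;   -- hypothetical source data

    SET @done = 1;   -- the application reads this output parameter when the call returns
END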
If that turns out to be difficult, you can try setting a different isolation level in your stored procedure:
http://msdn.microsoft.com/en-us/library/ms173763.aspx
I found the reason for this strange behavior. There was a line of code in my stored procedure, added during debugging, that did a select on a temporary table before the data in the same table was written to the linked server.
When that select statement produced its result set, control was given back to my application while the stored procedure continued running. So the stored procedure had been executing synchronously from the start; the early result set just made the application resume too soon.
I have a stored procedure that causes blocking on my SQL Server database. Whenever it blocks for more than X seconds we get notified with what query is being run, and it looks similar to the below.
CREATE PROC [dbo].[sp_problemprocedure] (
    @orderid INT
--procedure code
How can I tell what the value is for @orderid? I'd like to know the value because this procedure will run 100+ times a day but only cause blocking a handful of times, and if we can find some sort of pattern between the order IDs maybe I'd be able to track down the problem.
The procedure is being called from a .NET application if that helps.
Have you tried printing it from inside the procedure?
http://msdn.microsoft.com/en-us/library/ms176047.aspx
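For instance, something along these lines inside the procedure (a sketch only; RAISERROR with severity 0 behaves like PRINT but WITH NOWAIT flushes the message immediately instead of buffering it):

ALTER PROC [dbo].[sp_problemprocedure] (
    @orderid INT
)
AS
BEGIN
    -- log the incoming parameter so it shows up in the Messages stream / client InfoMessage event
    DECLARE @msg varchar(200);
    SET @msg = 'sp_problemprocedure called with @orderid = ' + CAST(@orderid AS varchar(20));
    RAISERROR(@msg, 0, 1) WITH NOWAIT;

    --procedure code
END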
If it's being called from a .NET application, you could easily log the parameter being passed from the .NET app. If you don't have access to that code, you can also use SQL Server Profiler. Filters can be set on the event type (e.g. stored procedure events only) as well as the database being hit; otherwise you will be overwhelmed with all the information a trace can produce.
Link: Using Sql server profiler
rename the procedure
create a logging table
create a new one (same signature/params) which first logs the params and a start timestamp, calls the original, and then logs the end timestamp after the call finishes
create a synonym for this new proc with the name of the original
Now you have a log for all calls made by whatever app...
You can disable/enable the logging at any time by simply redefining the synonym to point to the logging wrapper or to the original...
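A rough sketch of that setup, using the problem procedure from this thread (the log table and the "_orig"/"_logged" names are invented here):

-- 1) rename the original
EXEC sp_rename 'dbo.sp_problemprocedure', 'sp_problemprocedure_orig';

-- 2) a simple log table
CREATE TABLE dbo.ProcCallLog (
    log_id     int IDENTITY PRIMARY KEY,
    orderid    int,
    started_at datetime,
    ended_at   datetime
);

-- 3) the wrapper with the same signature
CREATE PROC dbo.sp_problemprocedure_logged (
    @orderid INT
)
AS
BEGIN
    DECLARE @log_id int;
    INSERT INTO dbo.ProcCallLog (orderid, started_at) VALUES (@orderid, GETDATE());
    SET @log_id = SCOPE_IDENTITY();

    EXEC dbo.sp_problemprocedure_orig @orderid;

    UPDATE dbo.ProcCallLog SET ended_at = GETDATE() WHERE log_id = @log_id;
END;

-- 4) a synonym with the original name, so existing callers hit the wrapper
CREATE SYNONYM dbo.sp_problemprocedure FOR dbo.sp_problemprocedure_logged;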
The easiest way would be to run a profiler trace. You'll want to capture calls to the stored procedure.
Really though, that is only going to tell you part of the story. Personally I would start with the code. Try to batch big updates into smaller batches (see the sketch below). Try to avoid long-running explicit transactions if they're not necessary. Look at your triggers (if any) and cascading foreign keys and make sure those are efficient.
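For example, a batched-update pattern along these lines (table and column names are placeholders):

-- update in chunks so each statement holds locks for a short time only
WHILE 1 = 1
BEGIN
    UPDATE TOP (5000) dbo.BigTable      -- hypothetical table
    SET    Processed = 1
    WHERE  Processed = 0;

    IF @@ROWCOUNT = 0
        BREAK;   -- nothing left to update
END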
The easiest way is to do the following:
1) in .NET, grab the date-time just before running the procedure
2) in .Net, after the procedure is complete grab the date-time
3) in .NET, do some date-time math, and if it is "slow", write to a file (log) those start and end date-times, user info, all the parameters, etc.
We have introduced a new data access framework for calling SQL Stored procedures. When calling a stored procedure that returns a recordset, we've run into problems where that stored procedure also performs an update (insert/update/delete) of some sort:
Cannot change the ActiveConnection property of a Recordset object which has a Command object as its source.
The solution to this is to add 'SET NOCOUNT ON' to the top of the stored procedure. This works just fine, and, of course, it also has a touted performance enhancement.
We are recommending to developers that when they want to write code to call an existing stored procedure, they must also refactor the stored procedure itself to include SET NOCOUNT ON.
But this got me wondering: what would be the potential consequences/risks of performing a blanket update of all stored procedures to include SET NOCOUNT ON? Under what scenarios would this break an SP's functionality? (Given that the @@ROWCOUNT function is updated even when SET NOCOUNT is ON.)
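For instance, a quick throwaway check that @@ROWCOUNT still works with NOCOUNT on (the table name is a placeholder):

SET NOCOUNT ON;

UPDATE dbo.SomeTable          -- hypothetical table
SET SomeColumn = SomeColumn
WHERE SomeKey <= 10;

-- no "(n row(s) affected)" message is sent to the client,
-- but @@ROWCOUNT still reports how many rows the UPDATE touched
SELECT @@ROWCOUNT AS RowsAffected;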
Help, as always, much appreciated.
I think the main danger would be if any of your existing processes look for and/or assume that the row count will be returned without explicitly querying the value of @@ROWCOUNT.
It's possible that somewhere in your code there is a stored proc that gets executed where the application waits for the returned row count to know that it completed, in which case the app would hang indefinitely.
I'm trying to make a long stored procedure a little more manageable. Is it wrong to have stored procedures that call other stored procedures? For example, I want to have a sproc that inserts data into a table and, depending on the type, inserts additional information into a table for that type, something like:
BEGIN TRANSACTION

INSERT INTO dbo.ITSUsage (
    Customer_ID,
    [Type],
    Source
) VALUES (
    @Customer_ID,
    @Type,
    @Source
)

SET @ID = SCOPE_IDENTITY()

IF @Type = 1
BEGIN
    exec usp_Type1_INS @ID, @UsageInfo
END
IF @Type = 2
BEGIN
    exec usp_Type2_INS @ID, @UsageInfo
END

IF (@@ERROR <> 0)
    ROLLBACK TRANSACTION
ELSE
    COMMIT TRANSACTION
Or is this something I should be handling in my application?
We call procs from other procs all the time. It's hard/impossible to segment a database-intensive (or database-only) application otherwise.
Calling a procedure from inside another procedure is perfectly acceptable.
However, in Transact-SQL relying on @@ERROR is prone to failure. Case in point, your code. It will fail to detect an insert failure, as well as any error produced inside the called procedures. This is because @@ERROR is reset with each statement executed and only retains the result of the very last statement. I have a blog entry that shows a correct template of error handling in Transact-SQL and transaction nesting. Also, Erland Sommarskog has an article that has long been the reference read on error handling in Transact-SQL.
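As a rough illustration of the point, here is a sketch of the same flow using TRY/CATCH (SQL Server 2005+) instead of a single @@ERROR check; this is only a sketch, not the template from the linked articles:

BEGIN TRY
    BEGIN TRANSACTION;

    INSERT INTO dbo.ITSUsage (Customer_ID, [Type], Source)
    VALUES (@Customer_ID, @Type, @Source);

    SET @ID = SCOPE_IDENTITY();

    IF @Type = 1
        EXEC usp_Type1_INS @ID, @UsageInfo;
    IF @Type = 2
        EXEC usp_Type2_INS @ID, @UsageInfo;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- any error in the insert or in the called procedures lands here,
    -- unlike a single @@ERROR check after the fact
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;

    -- re-raise so the caller knows it failed (RAISERROR shown for pre-2012 compatibility)
    DECLARE @msg nvarchar(2048);
    SET @msg = ERROR_MESSAGE();
    RAISERROR(@msg, 16, 1);
END CATCH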
No, it is perfectly acceptable.
Definitely, no.
I've seen ginormous stored procedures doing 20 different things that would have really benefited from being refactored into smaller, single purposed ones.
As long as it is within the same DB schema it is perfectly acceptable in my opinion. It is reuse which is always favorable to duplication. It's like calling methods within some application layer.
Not at all. I would even say it's recommended, for the same reasons that you create methods in your code.
One stored procedure calling another stored procedure is fine. Just note that there is a limit on how deeply the calls can be nested (32 levels).
In SQL Server the current nesting level is returned by the @@NESTLEVEL function.
Please check the Stored Procedure Nesting section here http://msdn.microsoft.com/en-us/library/aa258259(SQL.80).aspx
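For example, you can inspect the current depth from inside a procedure (a trivial, made-up helper):

CREATE PROC dbo.usp_ShowNesting   -- hypothetical helper
AS
BEGIN
    -- 0 at ad-hoc batch level, 1 inside a proc called directly,
    -- 2 inside a proc called by that proc, and so on (max 32)
    PRINT 'Current nesting level: ' + CAST(@@NESTLEVEL AS varchar(2));
END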
cheers
No. It promotes reuse and allows for functionality to be componentized.
As others have pointed out, this is perfectly acceptable and necessary to avoid duplicating functionality.
However, in Transact-SQL watch out for transactions in nested stored procedure calls: you need to check @@TRANCOUNT before issuing ROLLBACK TRANSACTION because it rolls back all nested transactions. Check this article for an in-depth explanation.
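One common pattern (sketched from memory here, not quoted from the linked article) is for a child procedure to only start and roll back a transaction when it is the outermost caller:

CREATE PROC dbo.usp_ChildProc   -- hypothetical child procedure
AS
BEGIN
    DECLARE @startedTran bit;
    SET @startedTran = 0;

    IF @@TRANCOUNT = 0
    BEGIN
        BEGIN TRANSACTION;   -- we are outermost, so we own the transaction
        SET @startedTran = 1;
    END

    BEGIN TRY
        -- ... the child procedure's real work goes here ...

        IF @startedTran = 1
            COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        -- only roll back if we started the transaction; otherwise let the caller decide,
        -- because ROLLBACK here would undo the caller's work as well
        IF @startedTran = 1 AND @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        RAISERROR('usp_ChildProc failed', 16, 1);
    END CATCH
END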
Yes, it is bad. While SQL Server does support and allow one stored procedure to call another, I would generally try to avoid this design if possible. My reason?
single responsibility principle
In our IT area we use stored procedures to consolidate common code for both stored procedures and triggers (where applicable). It's also virtually mandatory for avoiding SQL source duplication.
The general answer to this question is, of course, No - it's normal and even preferred way of coding SQL stored procedures.
But it could be that in your specific case it is not such a good idea.
If you maintain a set of stored procedures that support the data access tier (DAO) in your application (Java, .NET, etc.), then keeping the database tier (let's call the stored procedures that) streamlined and relatively thin benefits your overall design. An extensive graph of stored procedure calls may indeed be bad for maintaining and supporting the overall data access logic in such an application.
I would lean toward a more uniform distribution of logic between the DAO and database tiers, so that each stored procedure's code fits inside a single functional call.
Adding to the correct comments of other posters, there is nothing wrong in principle, but you need to watch out for the total execution time in case the procedure is called, for instance, by an external application that enforces a specific timeout.
A typical example is calling the stored procedure from a web application: if your chain of executions takes longer than the default command timeout, you get a failure in the web application even though the stored procedure commits correctly.
The same happens if you call it from an external service.
This can lead to inconsistent behaviour in your application, trigger error-handling routines in external services, etc.
If you are in a situation like this, what I do is break the chain of calls by redirecting the long-running child calls to separate processes using Service Broker.
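A very rough sketch of that Service Broker hand-off (all object names are invented, and the receiving/activation side is omitted):

-- one-time setup (assumes the database already has ENABLE_BROKER)
CREATE MESSAGE TYPE LongTaskRequest VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT LongTaskContract (LongTaskRequest SENT BY INITIATOR);
CREATE QUEUE LongTaskQueue;
CREATE SERVICE LongTaskService ON QUEUE LongTaskQueue (LongTaskContract);

-- inside the parent procedure: enqueue the slow work and return immediately
DECLARE @dialog uniqueidentifier;
BEGIN DIALOG CONVERSATION @dialog
    FROM SERVICE LongTaskService
    TO SERVICE 'LongTaskService'
    ON CONTRACT LongTaskContract
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @dialog
    MESSAGE TYPE LongTaskRequest
    (N'<request><id>123</id></request>');
-- an activation procedure bound to LongTaskQueue picks the message up
-- asynchronously and runs the long child procedure there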