SQL Server runs very slowly when called from a .NET application

I have a call to a stored procedure in my ASP.NET application using NHibernate:
GetNamedQuery("MyProc")
.SetString("param1", value1)
.SetString("param2", value2)
...
We're on SQL Server 2005. It runs well in our testing environment, where this call takes about 2 seconds to complete. But when we moved it to a new server it started taking a very long time, and I get a timeout exception in my application.
However, I captured the calls in SQL Server Profiler and found that this one runs for 30 seconds. But when I copied the same query and just ran it on the server, it completed in 2 seconds.
So the question is what can affect working queries from .NET application?

Hands down, the most complete solution to this type of problem is found here; it's one of the best-written pieces on the subject. If you're passing parameters into a stored procedure from an external application, one quick hack that works 80% of the time is to localize the parameters in the procedure:
CREATE PROCEDURE sp_Test
    @VarOne INT, @VarTwo INT
AS
BEGIN
    DECLARE @VOne INT, @VTwo INT
    SET @VOne = @VarOne
    SET @VTwo = @VarTwo
    /* Rest of code only uses @VOne and @VTwo for parameters */
END
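The call from the application doesn't change at all; the masking is entirely internal to the procedure. A usage sketch (the values are hypothetical):
EXEC dbo.sp_Test @VarOne = 1, @VarTwo = 2;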
This assumes, though, that you have parameters in your application that the stored procedure needs (which it looks like you do, from the brief snippet of code you've posted). Otherwise, the provided link also delineates some other oversights, and I highly recommend it to anyone troubleshooting performance from an external application.

Related

How to effectively version stored procedures?

I am part of a database development team working for a big e-shop. We are using MS SQL 2016 and ASP.NET. SQL Server is used by clients from 10+ IIS servers using connection pooling (we have approx. 7-10k batches/sec) in the production environment, and we have 18 DEV/TESTING IIS servers (but only one DEV database, because of its multi-TB size).
We are developing new functionality that forces us to change existing stored procedures quite often.
When we deploy a change to the production environment, it includes both a modification of the application on IIS and a change to the database procedures. During a deployment, we always update 5 IIS servers first, then 5 more, and so on. In the meantime, both old and new versions of the application exist on the IIS servers. These versions must coexist for some time while using the procedures in the database at the same time. At the database level, we handle this by keeping several versions of each procedure. The old app version calls EXEC dbo.GetProduct, the new app version uses dbo.GetProduct_v2. After we deploy the new version of the application to all IIS servers, everyone is using dbo.GetProduct_v2. During the next deployment, the situation is reversed and dbo.GetProduct will contain the new version. The development and testing environments face a similar situation.
I fully realize that this solution is not ideal and I would like to be inspired.
We are considering separating the data part from the logic part: one database would hold the data tables, and other databases would contain only procedures and other program objects. When deploying a new version, we would simply deploy a new version of the entire logic database and would not need to create versioned procedures. Procedures in the logic database would query the data database.
However, the disadvantage of this solution is that it rules out the natively compiled procedures we plan to use next year, because they do not support cross-database queries.
Another option is using one database and separating procedure versions into different schemas...
If you have any ideas, pros/cons, or you know tools that can help us manage/deploy/use multiple procedure versions, please comment.
Thank you so much
Edit: We are using TFS and Git, but this does not solve the versioning of procedures in the SQL database. My main question is how to deal with the need to run multiple versions of the IIS application against multiple versions of the procedures in the database.
Versioning is easy with SSDT or SQL Compare and source control. So are deployments.
Your problem is not versioning.
You need two different stored procedures with the same name, probably the same parameters, but different code and maybe different results. That's more achievable in, say, .NET code, because you can use overloading, up to a point.
Your problem is phased deployments using different code:
Two versions of the same proc must co-exist.
In your case, I would consider using synonyms to mask the actual stored procedure name.
So you have these stored procedures.
dbo.GetProduct_v20170926 (last release)
dbo.GetProduct_v20171012 (this release)
dbo.GetProduct_v20171025 (next release)
dbo.GetProduct_v20171113 (one after)
Then you have
CREATE SYNONYM dbo.GetProductBlue FOR dbo.GetProduct_v20171012;
CREATE SYNONYM dbo.GetProductGreen FOR dbo.GetProduct_v20170926;
Your phased IIS deployments refer to one of the synonyms.
Next release...
DROP SYNONYM dbo.GetProductBlue;
CREATE SYNONYM dbo.GetProductBlue FOR dbo.GetProduct_v20171025;
then
DROP SYNONYM dbo.GetProductGreen;
CREATE SYNONYM dbo.GetProductGreen FOR dbo.GetProduct_v20171113;
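The application code then only ever references the synonym name, so the phased deployments never change the name they call; for example (the parameter is hypothetical):
-- IIS servers on the "blue" code base call:
EXEC dbo.GetProductBlue @ProductID = 123;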
Using a different schema is the same result but you'd end up with
- Blue.GetProduct
- Green.GetProduct
Or code your release date into the schema name.
- Codev20171025.GetProduct
- Codev20171113.GetProduct
You'd have the same problem even if you had another set of IIS servers and kept one code base on each set of servers:
Based on the blue/green deployment model, a couple of assumptions:
- You have a version number in your IIS code somewhere (perhaps in an App.config or Web.config file), and that version number can be referenced in your .NET code
- Your goal is not to change the SP names in your IIS .NET code on every release, but to have it call the correct version of the SP in the DB
- All versions of the SP take the same parameters
- Different versions of the SP can return different results
Ultimately there is no way around having multiple versions of the stored procedure in the DB. The idea is to abstract that away, as much as possible, from IIS (I am assuming).
Based on the above, I am thinking you could add another parameter to your SP which accepts a version number (which you would likely get from Web.config in IIS).
Then your stored proc dbo.GetProduct becomes a "controller" or "routing" stored procedure whose sole purpose is to take the version number and pass the remaining parameters to the appropriate underlying SP.
So you would have 1 SP per version (use whatever naming convention you wish). And dbo.GetProduct would call the appropriate one based on the version number passed in. An example is below.
create proc dbo.GetProduct_v1 @Param1 int, @Param2 int
as
begin
    --do whatever is needed for v1
    select 1
end
go
create proc dbo.GetProduct_v2 @Param1 int, @Param2 int
as
begin
    --do whatever is needed for v2
    select 2
end
go
create proc dbo.GetProduct @VersionNumber int, @Param1 int, @Param2 int
as
begin
    if @VersionNumber = 1
    begin
        exec dbo.GetProduct_v1 @Param1, @Param2
    end
    if @VersionNumber = 2
    begin
        exec dbo.GetProduct_v2 @Param1, @Param2
    end
end
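A call from the application then pins the version explicitly; a usage sketch (the values are hypothetical):
exec dbo.GetProduct @VersionNumber = 2, @Param1 = 10, @Param2 = 20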
Another thought would be to dynamically build your SP name in IIS (based on the version number in Web.config) instead of hard coding the SP name.

How can I tell what the parameter values are for a problem stored procedure?

I have a stored procedure that causes blocking on my SQL Server database. Whenever it blocks for more than X seconds, we get notified with the query being run, which looks similar to the one below.
CREATE PROC [dbo].[sp_problemprocedure] (
    @orderid INT
--procedure code
How can I tell what the value is for @orderid? I'd like to know the value because this procedure runs 100+ times a day but only causes blocking a handful of times; if we can find some sort of pattern among the order IDs, maybe I'd be able to track down the problem.
The procedure is being called from a .NET application if that helps.
Have you tried printing it from inside the procedure?
http://msdn.microsoft.com/en-us/library/ms176047.aspx
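A minimal sketch of that idea, using the procedure from the question (RAISERROR at severity 0 behaves like PRINT, and WITH NOWAIT flushes the message immediately instead of waiting for the output buffer):
ALTER PROC [dbo].[sp_problemprocedure] (
    @orderid INT
)
AS
BEGIN
    -- emit the incoming parameter value; visible in SSMS Messages or in a trace
    RAISERROR('sp_problemprocedure called with @orderid = %d', 0, 1, @orderid) WITH NOWAIT;
    --procedure code
END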
If it's being called from a .NET application, you could easily log the parameter being passed from the .NET app; if you don't have access to the app, you can also use SQL Server Profiler. Filters can be set on the command type (i.e., procs only) as well as on the database being hit; otherwise you will be overwhelmed by all the information a trace can produce.
Link: Using SQL Server Profiler
1) rename the procedure
2) create a logging table
3) create a new proc (same signature/params) which calls the original, but first logs the params and a starting timestamp, and logs the end timestamp after the call finishes
4) create a synonym for this new proc with the name of the original
Now you have a log of all calls made by whatever app...
You can disable/enable the logging at any time by simply redefining the synonym to point to the logging wrapper or to the original. A sketch of the whole setup is below.
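A sketch of those steps, reusing the procedure name from the question (the logging table layout is an assumption):
-- 1) rename the original procedure
EXEC sp_rename 'dbo.sp_problemprocedure', 'sp_problemprocedure_orig';
GO
-- 2) logging table
CREATE TABLE dbo.ProcCallLog (
    log_id     INT IDENTITY(1,1) PRIMARY KEY,
    orderid    INT,
    started_at DATETIME NOT NULL,
    ended_at   DATETIME NULL
);
GO
-- 3) wrapper with the same signature that logs around the original call
CREATE PROC dbo.sp_problemprocedure_logged (@orderid INT)
AS
BEGIN
    DECLARE @log_id INT;
    INSERT dbo.ProcCallLog (orderid, started_at) VALUES (@orderid, GETDATE());
    SET @log_id = SCOPE_IDENTITY();

    EXEC dbo.sp_problemprocedure_orig @orderid;

    UPDATE dbo.ProcCallLog SET ended_at = GETDATE() WHERE log_id = @log_id;
END
GO
-- 4) callers keep using the original name via the synonym
CREATE SYNONYM dbo.sp_problemprocedure FOR dbo.sp_problemprocedure_logged;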
The easiest way would be to run a profiler trace. You'll want to capture calls to the stored procedure.
Really, though, that will only tell you part of the story. Personally, I would start with the code. Try to batch big updates into smaller batches (see the sketch below). Try to avoid long-running explicit transactions if they're not necessary. Look at your triggers (if any) and cascading foreign keys, and make sure those are efficient.
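For the batching point, a common pattern looks something like this (a sketch; the table, column, and cutoff date are hypothetical). Each iteration is its own small transaction, so locks are held only briefly:
DECLARE @rows INT;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    -- delete in small chunks instead of one huge statement
    DELETE TOP (5000) FROM dbo.BigLogTable
    WHERE logged_at < '20090101';
    SET @rows = @@ROWCOUNT;
END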
The easiest way is to do the following:
1) in .NET, grab the date-time just before running the procedure
2) in .NET, grab the date-time after the procedure is complete
3) in .NET, do some date-time math, and if it is "slow", write to a file (log) those start and end date-times, user info, all the parameters, etc.

Why are there performance differences when a SQL function is called from a .NET app vs. when the same call is made in Management Studio

We are having a problem in our test and dev environments with a function that at times runs quite slowly when called from a .NET application. When we call this function directly from Management Studio, it works fine.
Here are the differences when they are profiled:
From the Application:
CPU: 906
Reads: 61853
Writes: 0
Duration: 926
From SSMS:
CPU: 15
Reads: 11243
Writes: 0
Duration: 31
Now, we have determined that when we recompile the function, performance returns to what we expect, and the performance profile when run from the application matches what we get when we run it from SSMS. It will start slowing down again at what appear to be random intervals.
We have not seen this in prod, but that may be in part because everything is recompiled there on a weekly basis.
So what might cause this sort of behavior?
Edit -
We finally were able to tackle this, and restructuring the variables to deal with parameter sniffing appears to have done the trick. A snippet of what we did is below. Thanks for your help.
-- create a set of local variables for the input parameters - this is to help
-- performance - vis-a-vis "parameter sniffing"
declare @dtDate_Local datetime
    ,@vcPriceType_Local varchar(10)
    ,@iTradingStrategyID_Local int
    ,@iAccountID_Local int
    ,@vcSymbol_Local varchar(10)
    ,@vcTradeSymbol_Local varchar(10)
    ,@iDerivativeSymbolID_Local int
    ,@bExcludeZeroPriceTrades_Local bit
declare @dtMaxAggregatedDate smalldatetime
    ,@iSymbolID int
    ,@iDerivativePriceTypeID int
select @dtDate_Local = @dtDate
    ,@vcPriceType_Local = @vcPriceType
    ,@iTradingStrategyID_Local = @iTradingStrategyID
    ,@iAccountID_Local = @iAccountID
    ,@vcSymbol_Local = @vcSymbol
    ,@vcTradeSymbol_Local = @vcTradeSymbol
    ,@iDerivativeSymbolID_Local = @iDerivativeSymbolID
    ,@bExcludeZeroPriceTrades_Local = @bExcludeZeroPriceTrades
I had a similar problem with stored procedures, and for me it turned out to be 'parameter sniffing'. Google that and see if it solves your problem; for me it was a dramatic speed-up once I fixed it.
In my case, I fixed it by declaring a local variable for each parameter that was passed in, assigning each parameter's value to its local variable, and using the local variables in the rest of the proc for processing. For whatever reason, this defeated the parameter sniffing.
This is usually because you are getting a different execution plan in your SSMS connection. It's often related to parameter sniffing issues, where the plan is generated with a specific value that is suboptimal for other values of the parameters. This also explains why recompiling resolves the issue. This thread seems to have a good explanation: Parameter Sniffing (or Spoofing) in SQL Server
A likely cause is out-of-date statistics and/or parameter sniffing causing a suboptimal cached query plan to be re-used.
SSMS also emits SET statements that you don't usually see (notably SET ARITHABORT ON, where ADO.NET connections default to OFF). Because SET options are part of the plan-cache key, SSMS does not reuse the application's cached plan; it compiles its own, using your current parameter values, which eliminates the possibility of using an incorrect cached plan.
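If that is what's happening, you can usually reproduce the application's slow behavior in SSMS by matching its options first; a diagnostic sketch (the function call itself is hypothetical):
-- match the typical ADO.NET connection setting, then run the same call:
SET ARITHABORT OFF;
SELECT dbo.fn_MySlowFunction(42);  -- hypothetical call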
This will update all statistics, refresh views, and mark stored procs for recompilation (but be careful about running it on a production machine). Note that sp_refreshview takes a view name, so it has to be run once per view:
EXEC sp_updatestats
EXEC sp_refreshview 'dbo.YourView'   -- repeat for each view (view name hypothetical)
EXEC sp_msForEachTable 'EXEC sp_recompile ''?'''

SQL user-defined functions vs. stored procedure branching

I am currently working on a legacy application and have inherited some shady SQL with it. The project has never been put into production, but it is now on its way. During initial testing I found a bug. The application calls a stored procedure that calls many other stored procedures, creates cursors, loops through cursors, and many other things. FML.
Currently, the way the app is designed, it calls the stored procedure and then reloads the UI with a fresh set of data. Of course, the data we want to display is still being processed on the SQL Server side, so the UI results are not complete when displayed. To work around this, I just made the thread sleep for 30 seconds before loading the UI. This is a terrible hack, and I would like to fix it properly on the SQL side of things.
My question is: is it worthwhile to convert the branching stored procedures to functions? Would this make the main-line stored procedure wait for a return value before processing on?
Here is the stored procedure:
ALTER PROCEDURE [dbo].[ALLOCATE_BUDGET]
    @budget_scenario_id uniqueidentifier
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    DECLARE @constraint_type varchar(25)

    -- get project cache id and constraint type
    SELECT @constraint_type = CONSTRAINT_TYPE
    FROM BUDGET_SCENARIO WHERE BUDGET_SCENARIO_ID = @budget_scenario_id

    -- constraint type is Region by Region
    IF (@constraint_type = 'Region by Region')
        EXEC BUDGET_ALLOCATE_SCENARIO_REGIONBYREGION @budget_scenario_id

    -- constraint type is City Wide
    IF (@constraint_type = 'City Wide')
        EXEC BUDGET_ALLOCATE_SCENARIO_CITYWIDE @budget_scenario_id

    -- constraint type is Do Nothing
    IF (@constraint_type = 'Do Nothing')
        EXEC BUDGET_ALLOCATE_SCENARIO_DONOTHING @budget_scenario_id

    -- constraint type is Unconstrained
    IF (@constraint_type = 'Unconstrained')
        EXEC BUDGET_ALLOCATE_SCENARIO_UNCONSTRAINED @budget_scenario_id

    -- set budget scenario status to "Allocated", so reporting tabs in the application are populated
    EXEC BUDGET_UPDATE_SCENARIO_STATUS @budget_scenario_id, 'Allocated'
END
To avoid displaying an incomplete result set in the calling .NET application's UI before the cursors in the branching calls have completed, is it worthwhile to convert these stored procedures into functions with return values? Would this force SQL to wait before completing the main call to the [ALLOCATE_BUDGET] stored procedure?
The last SQL statement in the stored procedure sets a status to "Allocated". This happens before the cursors in the previous calls have finished processing. Does making these calls into function calls affect how the stored procedure returns focus to the application?
Any feedback is greatly appreciated. I have a feeling I am correct in going towards SQL functions, but am not 100% sure.
** Additional information:
- Executing code uses [async=true] in the connection string
- Executing code uses the [SqlCommand].[ExecuteNonQuery] method
How are you calling the procedure? I'm going to guess that you are using ExecuteNonQuery() to call the procedure. Try calling the procedure using ExecuteScalar() and modify the procedure like the following:
ALTER PROCEDURE [dbo].[ALLOCATE_BUDGET]
    @budget_scenario_id uniqueidentifier
AS
BEGIN
    ...
    -- T-SQL has no boolean literal, and ExecuteScalar() reads the first column of
    -- the first row of a result set (not a RETURN value), so emit a scalar row:
    SELECT 1 AS Completed
END
This should cause your data-execution code in .NET to wait for the procedure to complete before continuing. If you don't want your UI to "hang" during the procedure execution, use a BackgroundWorker or something similar to run the query on a separate thread and look for the completed callback to update the UI with the results.
You could also try using the RETURN statement in your child stored procedures, which can be used to return a result code back to the parent procedure. You can call a child procedure with something along the lines of exec @myresultcode = BUDGET_ALLOCATE_SCENARIO_REGIONBYREGION @budget_scenario_id. I think this should force the parent procedure to wait for the child procedure to finish. A sketch is below.
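A sketch of what that could look like inside the parent procedure (assuming, and this is an assumption, that each child procedure uses RETURN 0 for success):
DECLARE @rc INT

EXEC @rc = BUDGET_ALLOCATE_SCENARIO_REGIONBYREGION @budget_scenario_id

IF @rc <> 0
BEGIN
    -- don't mark the scenario as allocated if the child failed
    RAISERROR('Allocation failed with return code %d', 16, 1, @rc)
    RETURN @rc
END

EXEC BUDGET_UPDATE_SCENARIO_STATUS @budget_scenario_id, 'Allocated'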
I have never heard that it's possible for a stored procedure to return to the caller while still executing in the background.
In fact, I'll go as far as to say I don't believe that's happening. If you're seeing a difference between the UI and what you believe the SP should have done, then I believe it has a different cause.
Does the connection string have async=true in it? Is the SP being executed by using BeginExecuteReader or Begin-anything else?
At the risk of sounding too simple, I suggest you create a table which can store the status of the stored proc: a flag that can indicate that the entire process and all sub-processes have finished executing.
You could query this from the UI to see if things are done by polling this status code. A sketch is below.
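For example (a sketch; the table and column names are hypothetical):
CREATE TABLE dbo.BUDGET_SCENARIO_RUN_STATUS (
    budget_scenario_id uniqueidentifier PRIMARY KEY,
    run_status         varchar(20) NOT NULL,   -- e.g. 'Running', 'Allocated', 'Failed'
    updated_at         datetime NOT NULL DEFAULT GETDATE()
)

-- the proc updates this row as its last step; the UI polls it:
DECLARE @budget_scenario_id uniqueidentifier
SET @budget_scenario_id = '00000000-0000-0000-0000-000000000000'  -- hypothetical id

SELECT run_status
FROM dbo.BUDGET_SCENARIO_RUN_STATUS
WHERE budget_scenario_id = @budget_scenario_id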
Does making these calls into function calls affect how the stored procedure returns focus to the application?
No.
The stored procedure has no idea that its caller is a UI application. There is nothing in the stored procedure that can influence the behavior of the UI application.
Most likely the UI application is calling the stored procedure on one connection, and then refreshing its data on another connection. There's a plethora of ways of getting the UI to delay refreshing, but the one I'll push is that there should be a single database connection.
Personally, I would be far more concerned about replacing those cursors than converting this to functions.
And I would not run the last proc until checking for a valid return code from the previous procs (this thing is in real trouble if one of the preceding procs dies!)
Also consider if this should all be in a transaction (are these procs changing data in a table?)
(Am I the only one who finds it funny you have a proc to run the process for Do Nothing?)

Why does the SqlServer optimizer get so confused with parameters?

I know this has something to do with parameter sniffing, but I'm just perplexed at how something like the following example is even possible with a piece of technology that does so many complex things well.
Many of us have run into stored procedures that intermittently run several orders of magnitude slower than usual, and then if you copy the SQL out of the procedure and run it with the same parameter values in a separate query window, it runs as fast as usual.
I just fixed a procedure like that by converting this:
alter procedure p_MyProc
(
    @param1 int
) as
-- do a complex query with @param1
to this:
alter procedure p_MyProc
(
    @param1 int
)
as
declare @param1Copy int;
set @param1Copy = @param1;
-- Do the query using @param1Copy
It went from running in over a minute back down to under one second, like it usually does. This behavior seems totally random. For 9 out of 10 @param1 inputs, the query is fast, regardless of how much data it ends up needing to crunch or how big the result set is. But for that 1 out of 10, it just gets lost. And the fix is to replace an int with the same int in the query?
It makes no sense.
[Edit]
@gbn linked to this question, which details a similar problem:
Known issue?: SQL Server 2005 stored procedure fails to complete with a parameter
I hesitate to cry "Bug!" because that's so often a cop-out, but this really does seem like a bug to me. When I run the two versions of my stored procedure with the same input, I see identical query plans. The only difference is that the original takes more than a minute to run, and the version with the goofy parameter copying runs instantly.
The 1 in 10 gives the wrong plan, and that plan is cached.
RECOMPILE adds an overhead; masking allows each parameter to be evaluated on its own merits (put very simply).
By "wrong plan": what if the 1 in 10 generates a scan on index 1 but the other 9 produce a seek on index 2? E.g., what if the 1 in 10 matches, say, 50% of the rows?
Edit: other questions
Known issue?: SQL Server 2005 stored procedure fails to complete with a parameter
Stored Procedure failing on a specific user
Edit 2:
Recompile does not work because the parameters are sniffed at compile time.
From other links (pasted in):
This article explains...
...parameter values are sniffed during compilation or recompilation...
Finally (edit 3):
Parameter sniffing was probably a good idea at the time and probably works well most of the time. We use masking across the board for any parameter that will end up in a WHERE clause.
We don't strictly need to, because we know that only a few procs (more complex ones, e.g. reports, or ones with many parameters) could cause issues, but we use it for consistency.
Otherwise it will come back and bite us when the users complain and we realize we should have used masking...
It's probably caused by the fact that SQL Server compiles stored procedures and caches execution plans for them, and the cached execution plan is probably unsuitable for this new set of parameters. You can try the WITH RECOMPILE option to see if that's the cause.
EXECUTE MyProcedure [parameters] WITH RECOMPILE
WITH RECOMPILE option will force SQL Server to ignore the cached plan.
I have had this problem repeatedly when moving my code from a test server to production, on two different builds of SQL Server 2005. I think there are some big problems with parameter sniffing in some builds of SQL Server 2005. I never had this problem on the dev server, or on two local Developer Edition boxes. I've never seen it be such a big problem on SQL Server 2000 or any version going back to 6.5 either.
In the cases where I found it, the only workaround was to use parameter masking, and I'm still hoping the DBAs will patch the production server up to SP3 so it will maybe go away. Things which did not work:
using the WITH RECOMPILE hint on EXEC or in the SP itself.
dropping and recreating the SP
using sp_recompile
Note that in the case I was working on, the data had not changed since an earlier invocation; I had simply scripted the code onto the production box, which already had data loaded. All the invocations came with no changes to the data since before the SPs existed.
Oh, and if SQL Server can't handle this without masking, they need to add a parameter modifier like NOSNIFF or something. What happens if you mask all your parameters, so you have @Something_parm and @Something_var, and someone changes the code to use the wrong one, and all of a sudden you have a sniffing problem again? Plus you are polluting the namespace within the SP. All these SPs I am "fixing" drive me nuts, because I know they are going to be a maintenance nightmare for the less experienced staff I will be handing this project off to one day.
Could you check in SQL Profiler how many reads there are and how long execution takes when it is quick and when it is slow? It could be related to the number of rows fetched, depending on the parameter value. It doesn't sound like a cached-plan issue.
I know this is a 2 year old thread, but it might help someone down the line.
Once you analyze the query execution plans and know the difference between the two plans (the query by itself vs. the query executing in the stored procedure with a flawed plan), you can modify the query within the stored procedure with a query hint to resolve the issue. This works in a scenario where the query uses the incorrect index when executed in the stored procedure. You would add the following after the table in the appropriate location of your procedure:
SELECT col1, col2, col3
FROM YourTableHere WITH (INDEX (PK_YourIndexHere))
This will force the query plan to use the correct index which should resolve the issue. This does not answer why it happens but it does provide a means to resolve the issue without worrying about copying the parameters to avoid parameter sniffing.
As indicated, it may be a compilation issue. Does this issue still occur if you revert the procedure? One thing you can try, if this occurs again, to force a recompilation is to use:
sp_recompile [ @objname = ] 'object'
Right from BOL, in regards to the @objname parameter:
Is the qualified or unqualified name of a stored procedure, trigger, table, or view in the current database. object is nvarchar(776), with no default. If object is the name of a stored procedure or trigger, the stored procedure or trigger will be recompiled the next time that it is run. If object is the name of a table or view, all the stored procedures that reference the table or view will be recompiled the next time they are run.
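For example, using the procedure from the question:
EXEC sp_recompile N'dbo.p_MyProc'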
If you drop and recreate the procedure, you could cause clients to fail if they try to execute it. You will also need to reapply security settings.
Is there any chance that the parameter value being provided is sometimes not int?
Is every query reference to the parameter comparing it with int values, without functions and without casting?
Can you increase the specificity of any expressions using the parameter to make the use of multifield indexes more likely?
It is a problem with plan caching, and it isn't always related to parameters, as it was in your scenario.
(Parameter Sniffing problems occur when a proc is called with unusual parameters the FIRST time it runs, and so the cached plan works great for those odd values, but lousy for most other times the proc is called.)
We had a similar situation when the app team deleted all old records from a highly-used log table on a production server. Removing records improves performance, right? Nope, performance immediately tanked.
Turns out that a frequently-used stored proc was recompiled right when the table was nearly empty, and it cached an extremely poor execution plan ("hey, there are only 50 records here, might as well do a table scan!"). It would have happened no matter what the initial parameters were.
Our fix was to force a recompile with sp_recompile.