My function GetProductDesc (when called) returns a different result after commenting out USE DatabaseName GO. I don't even know where to start debugging this. The pictures tell the story. I had to blur out a lot, but you can see that the results are clearly different. Keep in mind that the pictures are not the function code; they show calls to the function GetProductDesc.
So strange. Any suggestions? I have an expert helping me later today but I had to share.
EDIT:
The function uses another lookup table in the same database. There is no Top or Order By clause. It calculates the product description based on the input components (numbers). It will return a different result if the input numbers are different, but here the input numbers are the same!
The function has been in place and working for over 5 years. I believe the problem started at about the time SQL Server was recently upgraded.
EDIT 2 with partial answer:
The problem is caused by @@ROWCOUNT. It appears to be a breaking change caused by our recent migration to SQL Server 2019, although I haven't found the problem documented. The function returns a different product description based on @@ROWCOUNT following a SELECT statement. Internally the function does something like this:
SELECT Fields FROM Table WHERE Field = @Variable
IF @@ROWCOUNT = 1
    RETURN ProdDesc1
ELSE
    RETURN ProdDesc2
After the SQL Server migration, @@ROWCOUNT here was different depending on whether
USE DatabaseName
GO
was present.
The solution was to replace @@ROWCOUNT with a local variable @RowCount. This new code works:
DECLARE @RowCount INT = 0
SELECT Fields, @RowCount = @RowCount + 1
FROM Table WHERE Field = @Variable
IF @RowCount = 1
    RETURN ProdDesc1
ELSE
    RETURN ProdDesc2
If you have SQL Server 2019 installed, try this to recreate the problem:
USE Master
GO
SELECT @@ROWCOUNT
The result here is @@ROWCOUNT = 0.
Now comment out the two top lines:
--USE Master
--GO
SELECT @@ROWCOUNT
The result is now @@ROWCOUNT = 1.
Anybody know why?
There is a SQL Server 2019 cumulative update from Microsoft that fixes this problem.
Related
I have started creating a stored procedure that will search through my database tables based on the passed parameters. I have already heard about the potential problems with "kitchen sink" procedures and parameter sniffing. A few articles helped me understand the problem, but I'm still not 100% sure that I have a good solution.
I have a few screens in the system that will search different tables in my database. All of them have three different criteria that the user will select and search on. The first criterion is Status, which can be Active, Inactive, or All. Next is Filter By, which offers different options to the user depending on the table and the number of columns. Usually, users can select to filter by Name, Code, Number, DOB, Email, UserName, or Show All. Each search screen will have at least 3 filters, and one of them will be Show All.
I have created a stored procedure where the user can search by Status and filter by Name, Code, or Show All. One problem that I have is the Status filter. It seems that SQL will check all options in the WHERE clause: if I pass 1, the SP returns all active records; if I pass 0, only inactive records. The problem is that if I pass 2, the SP should return all records (active and inactive), but I see only active records. Here is an example:
USE [TestDB]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROC [dbo].[Search_Master]
    @Status BIT = NULL,
    @FilterBy INT = NULL,
    @Name VARCHAR(50) = NULL,
    @Code CHAR(2) = NULL
WITH RECOMPILE
AS
DECLARE @MasterStatus INT;
DECLARE @MasterFilter INT;
DECLARE @MasterName VARCHAR(50);
DECLARE @MasterCode CHAR(2);
SET @MasterStatus = @Status;
SET @MasterFilter = @FilterBy;
SET @MasterName = @Name;
SET @MasterCode = @Code;
SELECT RecID, Status, Code, Name
FROM Master
WHERE
(
    (@MasterFilter = 1 AND Name LIKE '%' + @MasterName + '%')
    OR
    (@MasterFilter = 2 AND Code = @MasterCode)
    OR
    (@MasterFilter = 3 AND @MasterName IS NULL AND @MasterCode IS NULL)
)
AND
(
    (@MasterStatus != 2 AND MasterStatus = @Status)
    OR
    (@MasterStatus = 2 AND 1=1)
);
Other than the problem with the Status filter, I'm wondering if there are any other issues that I might have with parameter sniffing. I found a blog that talks about preventing sniffing, and one way to do that is by declaring local variables. If anyone has suggestions or a solution for the Status filter, please let me know.
On your Status issue, I believe the problem is that your BIT parameter isn't behaving as you're expecting it to. Here's a quick test to demonstrate:
DECLARE @bit BIT;
SET @bit = 2;
SELECT @bit AS [2=What?];
--Results
+---------+
| 2=What? |
+---------+
| 1 |
+---------+
From our friends at Microsoft:
Converting to bit promotes any nonzero value to 1.
When you pass in your parameter as 2, the engine does an implicit conversion from INTEGER to BIT, and your non-zero value becomes a 1.
You'll likely want to change the data type on that parameter, then use some conditional logic inside your procedure to deal with the various possible values as you want them to be handled.
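A minimal sketch of what that could look like, reusing the names from your procedure and assuming 2 keeps the meaning "return everything" (with WITH RECOMPILE in place, the local-variable copies add nothing, so they're dropped here):
ALTER PROC [dbo].[Search_Master]
    @Status TINYINT = NULL,      -- 0 = inactive, 1 = active, 2 = all (TINYINT so the 2 survives)
    @FilterBy INT = NULL,
    @Name VARCHAR(50) = NULL,
    @Code CHAR(2) = NULL
WITH RECOMPILE
AS
SELECT RecID, Status, Code, Name
FROM Master
WHERE
(
    (@FilterBy = 1 AND Name LIKE '%' + @Name + '%')
    OR (@FilterBy = 2 AND Code = @Code)
    OR (@FilterBy = 3 AND @Name IS NULL AND @Code IS NULL)
)
AND
(
    @Status = 2                  -- 2 means "all": skip the status test entirely
    OR MasterStatus = @Status
);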
On the issue of parameter sniffing, 1) read the article Sean suggests in the comments, but 2) if you keep that WITH RECOMPILE on your procedure, parameter sniffing can't happen.
The issue (but still read the article) is that SQL Server uses the first set of parameters you send through the proc to build and cache an execution plan, but subsequent parameter values may require substantially different plans. Adding WITH RECOMPILE forces a new execution plan on every execution, which has some overhead, but it may well be exactly what you want to do in your situation.
As a closing thought, SQL Server 2008 ended mainstream support in 2014 and extended support ends on 7/9/2019. An upgrade might be a good idea.
I'm using SQL Server 2008.
I have an interesting scenario where a stored procedure (written by a "power user") has an okay runtime (around 4 seconds) if there's data in the primary table. If the search value doesn't exist, the run time averages out at about 3 minutes. Because of how the process works, and the web application that uses the procedure, it requires an empty result set in the case of no data.
I've tested the logic below with values that have data and values that don't and the flow seems to work; however, when I put my actual query in the else statement, it seems like that part is always being evaluated despite my knowing that logic branch shouldn't execute.
DECLARE @spId int
SELECT @spId = td.mainId
FROM dbo.PRIMARYTABLE AS td
WHERE td.longId = @searchVal
IF @spId < 1 OR @spId IS NULL
BEGIN
select 'RETURN EMPTY RESULT SET' as test
END
ELSE
BEGIN
SELECT 'DO ACTUAL QUERY' as test
END
When I test this with a dummy value, such as 1111, the select 'RETURN EMPTY RESULT SET' as test is returned. When I use a value that I know exists, the SELECT 'DO ACTUAL QUERY' as test is returned. When I replace "SELECT 'DO ACTUAL QUERY' as test" with the actual heavy duty query and use the same non-existent dummy value, it still looks like the ELSE clause is reached.
What am I missing here?
Perhaps you are not showing everything. There is a counter-intuitive thing about assignment in a SELECT that returns no rows: the value of the variable will not be cleared. Paste this into SSMS:
DECLARE @searchVal int
SET @searchVal = 111
DECLARE @spId int
SET @spId = 2134
SELECT @spId = td.mainId
FROM (SELECT 839 AS mainId, 0 AS longId) td
WHERE td.longId = @searchVal
PRINT @spId
@spId will be 2134. This is why you should always test using @@ROWCOUNT; in your case:
IF @@ROWCOUNT = 0 OR @spId < 1 OR @spId IS NULL
BEGIN
select 'RETURN EMPTY RESULT SET' as test
END
ELSE
BEGIN
SELECT 'DO ACTUAL QUERY' as test
END
There is also a possibility of duplicated data by longId, which would return an arbitrary mainId from the rows that satisfy the @searchVal condition.
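A quick way to check for that, assuming the table and column names from your snippet:
SELECT td.longId, COUNT(*) AS cnt
FROM dbo.PRIMARYTABLE AS td
GROUP BY td.longId
HAVING COUNT(*) > 1;   -- any row here means the assignment can pick an arbitrary mainId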
Other than that, I would not know.
Thank you all for your suggestions. I apologize for not posting the entire stored procedure, but I'm not allowed to share that exact code. The snippet I began with was pseudocode (well, real code with tables and fields renamed).
I think Nikola Markovinović may be onto something with his answer and article link. This entire ordeal has been sort of maddening. I googled, debugged, did the whole thing again, and then searched on Stack Overflow. After a few changes from your suggestions, the procedure magically started responding with the run time I thought it should. I don't think some of the initial changes took, or maybe they weren't being cached by SQL Server correctly; I've got nothing but guesses.
It's very strange because, for a good hour or more, it was running as if it had never been changed (performance-wise)... then it just kicked into gear. I wonder if this is my fault and maybe I didn't alter the one on staging like I did the one on Test... that seems the most feasible explanation.
Anyhow, thank you for your suggestions. I've learned a few things so that's always good.
Trying to assign a variable inside an if exists clause for TSQL
DECLARE @myvar int
IF EXISTS (SELECT @myvar = theTable.varIWant..... )
I thought this would work, but apparently not? Or perhaps (more likely) I'm doing it wrong.
In my installation of SQL Server 2008 R2, it simply doesn't compile. The parser complains about there being incorrect syntax near =.
I believe it must have something to do with mixing value assignment and data retrieval in a single SELECT statement, which is not allowed in SQL Server: you can have either one or the other. Since an assigning SELECT does not return a row set, while the EXISTS predicate expects one, the assignment cannot be allowed in that context; perhaps to avoid confusion, the limitation was imposed explicitly.
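As a quick illustration of the restriction (a throwaway sketch, not from your code):
DECLARE @myvar int;
SELECT @myvar = 1, 2;   -- fails: an assigning SELECT must not also return columns
SELECT @myvar = 1;      -- fine: assignment only, nothing is returned to the client
SELECT 1, 2;            -- fine: data retrieval only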
Your workaround, which you mention in a comment, is a decent one, but it might not work well in the middle of a batch, when the variable may already have a value before the assignment. So I would probably use this workaround instead:
SELECT @myvar = ...
IF @@ROWCOUNT > 0 ...
As per MSDN, the @@ROWCOUNT system function returns the number of rows affected by the last statement.
Rather than doing IF EXISTS, you could just do
DECLARE @myvar int
SELECT @myvar = theTable.varIWant.....;
IF @myvar IS NULL
BEGIN...
It will not work simply because, in an EXISTS construct, SQL Server only validates whether any row exists; the select list (and any assignment in it) does not matter.
This is done to optimize performance.
Have you tried COUNT?
DECLARE @Exists BIT;
SELECT @Exists = CASE WHEN COUNT(*) > 0 THEN 1 ELSE 0 END
FROM [dbname].[dbo].[tableorviewname];
Working with SQL Server. Writing a stored procedure. Here is the pseudocode for what I want to achieve:
IF EXISTS ( SELECT field1
FROM t1
WHERE field1 = ... AND field2 = ...)
BEGIN
SELECT field1
FROM t1
WHERE field1 = ... AND field2 = ...
END
Any better way of doing this? Any help appreciated.
Chirayu
Update: The problem is that the same query is executed twice. I also cannot just run the query once and return null (if the result is empty, I would like to return an alternative result instead).
I have done this before using a CTE and a table variable. It requires more lines of code, but the query is only written once, so your logic exists in a single place.
DECLARE @Results TABLE (Result INT);
WITH ResultsCTE AS
(
    -- Your query goes here
    SELECT 1 AS Result
    WHERE 1 = 1
)
INSERT INTO @Results
SELECT Result
FROM ResultsCTE

IF (SELECT COUNT(*) FROM @Results) > 0
BEGIN
    SELECT * FROM @Results
END
ELSE BEGIN
    SELECT 'Do Something Else or Do Nothing!'
END
You could check @@ROWCOUNT after running the query once to determine whether or not to return the value:
http://msdn.microsoft.com/en-us/library/ms187316.aspx
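A rough sketch of how that could look, borrowing the table-variable idea from the answer above (@p1 and @p2 stand in for the elided predicates, and field1 is assumed to be an INT):
DECLARE @p1 INT = 1, @p2 INT = 1;       -- hypothetical filter values
DECLARE @Results TABLE (field1 INT);    -- match field1's real type
INSERT INTO @Results (field1)
SELECT field1
FROM t1
WHERE field1 = @p1 AND field2 = @p2;
IF @@ROWCOUNT > 0                       -- rows captured by the INSERT above
    SELECT field1 FROM @Results;
ELSE
    SELECT 'alternative result' AS field1;   -- whatever the fallback should be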
If the select doesn't yield any results, no results will be returned. I don't see any reason to use a condition here, unless I'm missing something...
A stored procedure that sometimes returns a result set and sometimes doesn't would be a nightmare to use from any API. The client-side API has different entry points depending on whether the call returns a result set (SqlCommand.ExecuteReader) or does not (SqlCommand.ExecuteNonQuery). It would be impossible for the application to know ahead of time which API to use! Modeling tools use the SET FMTONLY option to analyze the metadata of returned result sets, and they get very confused when your returned result set starts changing shape at random. In other words, you are down the wrong path; stop and turn around.
Just run the query; if no rows match your criteria, it will simply return an empty result set, which is exactly what every client API and modeling tool expects from your procedure.
Update: Problem solved, and staying solved. If you want to see the site in action, visit Tweet08
I've got several queries that act differently in SSMS versus when run inside my .Net application. In SSMS they execute fine in under a second. The .Net call times out after 120 seconds (the default connection timeout).
I did a SQL trace (and collected everything) and saw that the connection options are the same (and match the SQL Server defaults). The SHOWPLAN ALL output, however, shows a huge difference in the row estimates, and thus the working version does an aggressive table spool whereas the failing call does not.
In SSMS, the data types of the temp variables are based on the generated SqlParameters in the .Net code, so they are the same.
The failure executes under Cassini in a VS2008 debug session. The success is under SSMS 2008. Both are running against the same destination server from the same network on the same machine.
Query in SSMS:
DECLARE @ContentTableID0 TINYINT
DECLARE @EntryTag1 INT
DECLARE @ContentTableID2 TINYINT
DECLARE @FieldCheckId3 INT
DECLARE @FieldCheckValue3 VARCHAR(128)
DECLARE @FieldCheckId5 INT
DECLARE @FieldCheckValue5 VARCHAR(128)
DECLARE @FieldCheckId7 INT
DECLARE @FieldCheckValue7 VARCHAR(128)
SET @ContentTableID0 = 3
SET @EntryTag1 = 8
SET @ContentTableID2 = 2
SET @FieldCheckId3 = 14
SET @FieldCheckValue3 = 'igor'
SET @FieldCheckId5 = 33
SET @FieldCheckValue5 = 'a'
SET @FieldCheckId7 = 34
SET @FieldCheckValue7 = 'a'
SELECT COUNT_BIG(*)
FROM dbo.ContentEntry AS mainCE
WHERE GetUTCDate() BETWEEN mainCE.CreatedOn AND mainCE.ExpiredOn
AND (mainCE.ContentTableID = @ContentTableID0)
AND (EXISTS (SELECT *
             FROM dbo.ContentEntryLabel
             WHERE ContentEntryID = mainCE.ID
             AND GetUTCDate() BETWEEN CreatedOn AND ExpiredOn
             AND LabelFacetID = @EntryTag1))
AND (mainCE.OwnerGUID IN (SELECT TOP 1 Name
                          FROM dbo.ContentEntry AS innerCE1
                          WHERE GetUTCDate() BETWEEN innerCE1.CreatedOn AND innerCE1.ExpiredOn
                          AND (innerCE1.ContentTableID = @ContentTableID2
                               AND EXISTS (SELECT *
                                           FROM dbo.ContentEntryField
                                           WHERE ContentEntryID = innerCE1.ID
                                           AND (ContentTableFieldID = @FieldCheckId3
                                                AND DictionaryValueID IN (SELECT dv.ID
                                                                          FROM dbo.DictionaryValue AS dv
                                                                          WHERE dv.Word LIKE '%' + @FieldCheckValue3 + '%'))
                                          )
                              )
                         )
     OR EXISTS (SELECT *
                FROM dbo.ContentEntryField
                WHERE ContentEntryID = mainCE.ID
                AND ((ContentTableFieldID = @FieldCheckId5
                      AND DictionaryValueID IN (SELECT dv.ID
                                                FROM dbo.DictionaryValue AS dv
                                                WHERE dv.Word LIKE '%' + @FieldCheckValue5 + '%')
                     )
                     OR (ContentTableFieldID = @FieldCheckId7
                         AND DictionaryValueID IN (SELECT dv.ID
                                                   FROM dbo.DictionaryValue AS dv
                                                   WHERE dv.Word LIKE '%' + @FieldCheckValue7 + '%')
                        )
                    )
               )
    )
Trace's version of .Net call (some formatting added):
exec sp_executesql N'SELECT COUNT_BIG(*) ...'
,N'@ContentTableID0 tinyint
,@EntryTag1 int
,@ContentTableID2 tinyint
,@FieldCheckId3 int
,@FieldCheckValue3 varchar(128)
,@FieldCheckId5 int
,@FieldCheckValue5 varchar(128)
,@FieldCheckId7 int
,@FieldCheckValue7 varchar(128)'
,@ContentTableID0=3
,@EntryTag1=8
,@ContentTableID2=2
,@FieldCheckId3=14
,@FieldCheckValue3='igor'
,@FieldCheckId5=33
,@FieldCheckValue5='a'
,@FieldCheckId7=34
,@FieldCheckValue7='a'
It is not your indexes.
This is parameter-sniffing, as it usually happens to parametrized stored procedures. It is not widely known, even among those who know about parameter-sniffing, but it can also happen when you use parameters through sp_executesql.
You will note that the version that you are testing in SSMS and the version that the profiler is showing are not identical, because the profiler version shows that your .Net application is executing it through sp_executesql. If you extract and execute the full SQL text that is actually being run for your application, then I believe you will see the same performance problem with the same query plan.
FYI: the query plans being different is the key indicator of parameter-sniffing.
FIX: The easiest way to fix this one, assuming it is executing on SQL Server 2005 or 2008, is to add the clause "OPTION (RECOMPILE)" as the last line of your SELECT statement. Be forewarned, you may have to execute it twice before it works, and it does not always work on SQL Server 2005. If that happens, there are other steps that you can take, but they are a little bit more involved.
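For reference, a sketch of where the hint would go in the posted query (the body is abbreviated here; only the final line is new):
SELECT COUNT_BIG(*)
FROM dbo.ContentEntry AS mainCE
WHERE GetUTCDate() BETWEEN mainCE.CreatedOn AND mainCE.ExpiredOn
AND (mainCE.ContentTableID = @ContentTableID0)
-- ... the rest of the predicates stay exactly as posted ...
OPTION (RECOMPILE)   -- compile a fresh plan for the actual parameter values on every run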
One thing that you could try is to check whether "Forced Parameterization" has been turned on for your database (it should be in the SSMS database properties, under the Options page). To turn Forced Parameterization off, execute this command:
ALTER DATABASE [yourDB] SET PARAMETERIZATION SIMPLE
I ran into this situation today and the fix that solved my problem is to use WITH (NOLOCK) while doing a select on tables:
E.g., if your stored proc has T-SQL that looks like the below:
SELECT * FROM [dbo].[Employee]
Change it to
SELECT * FROM [dbo].[Employee] WITH (NOLOCK)
Hope this helps.
I've had off-hours jobs fubar my indexes before, and I've gotten the same result you describe. sp_recompile can recompile a sproc... or, if that doesn't work, sp_recompile can be run on the table and all sprocs that act on that table will be recompiled -- works for me every time.
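For reference, both variants look like this (object names are placeholders for your own):
EXEC sp_recompile N'dbo.YourProcName';   -- marks one proc for recompilation on its next execution
EXEC sp_recompile N'dbo.YourTable';      -- marks a table: every proc and trigger that references it will recompile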
I ran into this problem before as well. Sounds like your indexes are out of whack. To get the same behavior in SSMS, add this before the script
SET ARITHABORT OFF
Does it time out as well? If so, it's your indexing and statistics.
It's most likely index-related. Had a similar issue with .Net app vs SSMS (specifically on a proc using a temp table w/ < 100 rows). We added a clustered index on the table and it flew from .Net thereafter.
Checked, and this server, a development server, was not running SQL Server 2005 SP3. I tried to install that (with the necessary reboot), but it didn't install. Oddly, now both the code and SSMS return in subsecond time.
Woot, this is a HEISENBUG.
I've seen this behavior before and it can be a big problem with o/r mappers that use sp_executesql. If you examine the execution plans you'll likely find that the sp_executesql query is not making good use of indexes. I spent a fair amount of time trying to find a fix or explanation for this behavior but never got anywhere.
Most likely your .Net program passes the variables as NVARCHAR, not as VARCHAR. Your indexes are on VARCHAR columns, I assume (judging from your script), and a condition like ascii_column = @unicodeVariable is actually not SARG-able. The plan has to generate a scan in this case, whereas SSMS would generate a seek because the variable is the right type.
Make sure you pass all your strings as VARCHAR parameters, or modify your query to explicitly cast the variables, like this:
SELECT dv.ID
FROM dbo.DictionaryValue AS dv
WHERE dv.Word LIKE '%' + CAST(@FieldCheckValue5 AS VARCHAR(128)) + '%'