Using .NET to construct JSON? - sql

My manager assigned me a project in which I have been using a jQuery calendar plugin to display calendar data stored across various tables in our SQL database.
It's just a jQuery plugin that takes static JSON data and renders it on a calendar. I had to integrate it with .NET and our SQL database in such a way that the calendar could render the data from the database (Microsoft SQL Server).
Initially we put this together in such a way that we fetched all the data from the SQL server, used .NET to construct the JSON, and then passed it on to the jQuery calendar plugin.
Although in principle this worked well, it was extremely slow and IIS often timed out. Not to mention, every time any of us wanted to view the calendar we had to wait around three minutes, since the number of entries is approaching 3,000.
The queries are quite complex: they use on-the-fly DATEADD and DATEDIFF functions and all sorts of other operations. Execution time on the SQL server alone was around 90 seconds for the query, and the total query text was around 160 KB.
We then split the query into three parts (for different departments), but the amount of time we have to wait for the calendar to draw is still over a minute.
Here is an example of just one of the queries; there are over 100 of these per department:
CREATE TABLE #AnnualLastMonImportantCustomDate(
Title varchar(550) COLLATE Latin1_General_CI_AS NULL,
AllocatedDate varchar(550) COLLATE Latin1_General_CI_AS NULL,
EndDateTime varchar(550) COLLATE Latin1_General_CI_AS NULL,
url varchar(550) COLLATE Latin1_General_CI_AS NULL,
width varchar(10) COLLATE Latin1_General_CI_AS NULL,
height varchar(550) COLLATE Latin1_General_CI_AS NULL,
AllDay varchar(550) COLLATE Latin1_General_CI_AS NULL,
description varchar(550) COLLATE Latin1_General_CI_AS NULL,
color varchar(550) COLLATE Latin1_General_CI_AS NULL,
textColor varchar(550) COLLATE Latin1_General_CI_AS NULL
)
DECLARE @MyTableName varchar(550)
DECLARE @startDate varchar(550)
DECLARE db_cursor CURSOR FOR SELECT AlertDate FROM xsCRMAlerts
WHERE AlertType='InternalImportantDate'
-- the cursor holds the current results row as the table goes through the fetch process
SET @MyTableName='xsCRMAlerts'
OPEN db_cursor -- opens the cursor over the query above
FETCH NEXT FROM db_cursor INTO @MyTableName -- @MyTableName receives the current result row
WHILE @@FETCH_STATUS = 0 -- 0 means success, -1 means the fetch failed or is past the end of the result set, -2 means the fetched row is missing
BEGIN
-- Between BEGIN and END the statement is joined to a function that returns a table of dates based on a start date. This table is then cross joined to produce the desired result.
SET @startDate = @MyTableName -- we can assign the fetched value directly because the cursor query selects only one column
INSERT INTO #AnnualLastMonImportantCustomDate
SELECT
'Important Date : ' + [Title] as 'Title',
dr.date as 'AllocatedDate',
dr.date as 'EndDateTime' ,
'xsCRM_Dates_Edit.aspx?id=' + cast(id as varchar) as 'url' ,
'515px' as 'width',
'410px' as 'height',
'true' as 'allDay',
'Important date' as 'description', /* This is a static entry and will not show on the calendar. Used when rendering the object */
'yellow' as 'color',
'black' as 'textColor'
FROM [DelphiDude].[dbo].[xsCRMAlerts]
cross JOIN
dateTable(
DATEADD(yy,DATEDIFF(yy,0,GETDATE()),0)
,
DateAdd(yy,1,DATEADD(ms,-3,DATEADD(yy,0,DATEADD(yy,DATEDIFF(yy,0,GETDATE())+1,0))))
) dr -- You can specify intervals by calling DateTable_Month, DateTable_Quarter, DateTable_BiAnnual and DateTable_Annual
WHERE
(AlertType='InternalImportantDate') and
(occurring='765') and
(Datepart(m,date) = 12) and
(Datepart(day,date) > 24) and
(Datepart(dw,date) = 2) and
(Datepart(year,date) = (Datepart(year,getDate()) + 1))
FETCH NEXT FROM db_cursor INTO @MyTableName -- gets the next record from the cursor
END
CLOSE db_cursor
DEALLOCATE db_cursor
We really do need these queries.
We've now thought about limiting the result set to just the previous and next 30 days.
But each time we optimise a query, I then have to replicate that change (even if it's just with find and replace) across 100 queries per module.
Is there a way we can optimise these queries to speed up execution and calendar rendering time by a significant margin? And is there a way I can apply such changes so that they propagate across all of the queries?
I suggested caching (database caching and object caching) to my boss, but he said the data changes often and is passed on to other modules, so if it were cached it could be inaccurate. I don't have enough experience to contest what he was saying.
Any advice anyone?

In the query that you posted, the cursor is useless because you never use the @startDate or @MyTableName variables in the INSERT query.
So a lot of duplicate rows are potentially inserted into your temp table.
Also, try to use either a CTE or a table variable instead of the #temporary table, because the data of #temporary tables is materialized in tempdb and costs a lot of I/O, increasing the execution time.
Last piece of advice: don't forget to create clustered/non-clustered indexes on your xsCRMAlerts table. If you are using SQL Server Management Studio, the execution plan or the Database Engine Tuning Advisor tool can help you a lot in finding missing indexes.
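Since the cursor contributes nothing here, a single set-based INSERT does the same work in one pass, and the date window the question mentions (previous/next 30 days) can be pushed straight into the dateTable arguments. A minimal sketch, reusing the dateTable function, table and column names from the question; each of the 100+ queries would still keep its own remaining date predicates:
INSERT INTO #AnnualLastMonImportantCustomDate
SELECT
'Important Date : ' + [Title],
dr.date,
dr.date,
'xsCRM_Dates_Edit.aspx?id=' + cast(id as varchar),
'515px',
'410px',
'true',
'Important date',
'yellow',
'black'
FROM [DelphiDude].[dbo].[xsCRMAlerts]
cross join dateTable(
DATEADD(dd, -30, GETDATE()), -- window start: 30 days back
DATEADD(dd, 30, GETDATE()) -- window end: 30 days ahead
) dr
WHERE (AlertType='InternalImportantDate')
and (occurring='765')
and (Datepart(dw,dr.date) = 2)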
Hope this helps :)

Related

Struggling with CONTAINS (SQL Server 2008)

I am trying to search through a column in one table (Asset_Database_Current) with the details from a column in another table (DNI_Names). I've written a loop to go through each DNI_Name, and then a SELECT CONTAINS statement to (hopefully) compare DNI_Name against the relevant column in Asset_Database_Current.
I've set up a FullText Catalog and have indexed the Asset_Database_Current table to be searchable.
This is the code I have:
DECLARE
@Asset varchar(8)
, @software_Name VARCHAR(64)
, @dniSW VARCHAR(64)
, @DNI VARCHAR(64)
CREATE TABLE #temp_naughtyChildren (naughtyAsset CHAR(8), naughtySW VARCHAR(64))
-- ^ where the Naughty People eventually get put
DECLARE @dni_cur CURSOR
SET @dni_cur = CURSOR FOR
SELECT DNI_Names.DNI_Name
FROM DNI_Names
OPEN @dni_cur
FETCH NEXT FROM @dni_cur
INTO @DNI
WHILE @@FETCH_STATUS = 0
BEGIN -- we go through DNI_Names, selecting each instance of DNI_Name one at a time...
INSERT INTO #temp_naughtyChildren (naughtyAsset, naughtySW)
SELECT Asset_Number, Software_Name
FROM Asset_Database_Current
WHERE CONTAINS (Asset_Database_Current.Software_Name, @DNI)
FETCH NEXT FROM @dni_cur
INTO @DNI
END
DEALLOCATE @dni_cur
-- #temp_NaughtyChildren is now full of all the Asset Numbers and swName's of people who
-- have been very, very naughty. The table can now be used as required (emailed, sent to
-- another d/b, etc.).
SELECT * from #temp_NaughtyChildren ORDER BY naughtyAsset
DROP TABLE #temp_naughtyChildren
END
GO
The above code completes successfully, but when I try and execute the query I get the following error twice:
Msg 7630, Level 15, State 3, Procedure DNIChecker, Line 44
Syntax error near 'Manager' in the full-text search condition 'App Manager'.
My boss has also executed the query (he has full permissions on the database) and he gets the exact same error.
Any suggestions you can give me will be very gratefully received (I haven't done SQL since 1998, I've realised that when they teach you SQL at Uni they don't actually teach you much SQL, and I have spent the last 2 1/2 weeks trawling various sites working on this. Only yesterday did I finally stumble across an article that mentioned there was a wizard to set up full-text search!!).
Thanks, M
CONTAINS requires quotes around the search condition if you use a full phrase; otherwise you should use NEAR, AND, or OR between the single words.
SET @DNI = '"' + @DNI + '"';
INSERT INTO #temp_naughtyChildren (naughtyAsset, naughtySW)
SELECT Asset_Number, Software_Name
FROM Asset_Database_Current
WHERE CONTAINS (Asset_Database_Current.Software_Name, @DNI)
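This is exactly what produces the Msg 7630 above: an unquoted multi-word value such as 'App Manager' gets parsed as full-text syntax, while the quoted form is treated as a phrase. A quick illustration, reusing the value from the error message:
-- Fails with Msg 7630: the space after 'App' is parsed as full-text syntax
SELECT Asset_Number FROM Asset_Database_Current
WHERE CONTAINS (Software_Name, 'App Manager')
-- Works: the embedded double quotes make it a phrase search
SELECT Asset_Number FROM Asset_Database_Current
WHERE CONTAINS (Software_Name, '"App Manager"')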

SQL Server - find SPs which don't drop temp tables

(1) Is there a good/reliable way to query the system catalogue in order
to find all stored procedures which create some temporary tables in their
source code bodies but which don't drop them at the end of their bodies?
(2) In general, can creating temp tables in a SP and not dropping
them in the same SP cause some problems and if so, what problems?
I am asking this question in the contexts of
SQL Server 2008 R2 and SQL Server 2012 mostly.
Many thanks in advance.
Not 100% sure if this is accurate as I don't have a good set of test data to work with. First you need a function to count occurrences of a string (shamelessly stolen from here):
CREATE FUNCTION dbo.CountOccurancesOfString
(
@searchString nvarchar(max),
@searchTerm nvarchar(max)
)
RETURNS INT
AS
BEGIN
return (LEN(@searchString)-LEN(REPLACE(@searchString,@searchTerm,'')))/LEN(@searchTerm)
END
Next make use of the function like this. It searches the procedure text for the strings and reports when the number of creates doesn't match the number of drops:
WITH CreatesAndDrops AS (
SELECT procedures.name,
dbo.CountOccurancesOfString(UPPER(syscomments.text), 'CREATE TABLE #') AS Creates,
dbo.CountOccurancesOfString(UPPER(syscomments.text), 'DROP TABLE #') AS Drops
FROM sys.procedures
JOIN sys.syscomments
ON procedures.object_id = syscomments.id
)
SELECT * FROM CreatesAndDrops
WHERE Creates <> Drops
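One caveat, given the SQL Server 2008 R2 / 2012 versions in the question: sys.syscomments splits long procedure bodies into 4000-character chunks, so a CREATE TABLE # and its matching DROP TABLE # can land in different rows and throw the counts off. sys.sql_modules exposes the whole definition as a single nvarchar(max), so the same comparison can be run against it:
WITH CreatesAndDrops AS (
SELECT p.name,
dbo.CountOccurancesOfString(UPPER(m.definition), 'CREATE TABLE #') AS Creates,
dbo.CountOccurancesOfString(UPPER(m.definition), 'DROP TABLE #') AS Drops
FROM sys.procedures p
JOIN sys.sql_modules m
ON m.object_id = p.object_id
)
SELECT * FROM CreatesAndDrops
WHERE Creates <> Drops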
1) probably no good / reliable way -- though you can extract the text of sp's using some arcane ways that you can find in other places.
2) In general - no this causes no problems -- temp tables (#tables) are scope limited and will be flagged for removal when their scope disappears.
and table variables likewise
an exception is for global temp tables (##tables) which are cleaned up when no scope holds a reference to them. Avoid those guys -- there are usually (read almost always) better ways to do something than with a global temp table.
Sigh -- if you want to go down the (1) path, then be aware that there are lots of pitfalls in looking at code inside SQL Server -- many of the helper functions and information tables will truncate the actual code down to NVARCHAR(4000).
If you look at the code of sp_helptext you'll see a really horrible cursor that pulls out the actual text.
I wrote this a long time ago to look for strings in code - you could run it on your database -- look for 'CREATE TABLE #' and 'DROP TABLE #' and compare the outputs....
DECLARE @SearchString VARCHAR(255) = 'DELETE FROM'
SELECT
[ObjectName]
, [ObjectText]
FROM
(
SELECT
so.[name] AS [ObjectName]
, REPLACE(comments.[c], '&#x0D;', '') AS [ObjectText]
FROM
sys.objects AS so
CROSS APPLY (
SELECT CAST([text] AS NVARCHAR(MAX))
FROM syscomments AS sc
WHERE sc.[id] = so.[object_id]
FOR XML PATH('')
)
AS comments ([c])
WHERE
so.[is_ms_shipped] = 0
AND so.[type] = 'P'
)
AS spText
WHERE
spText.[ObjectText] LIKE '%' + @SearchString + '%'
Or much better - use whatever tool of choice you like on your codebase - you've got all your sp's etc scripted out into source control somewhere, right.....?
I think the SQL Search tool from Red Gate would come in handy in this case. You can download it from here. This tool will find the SQL text within stored procedures, functions, views, etc.
Just install this plugin and you can find SQL text easily from SSMS.

sql server collation for multi lingual data

I am working with SQL Server 2012, and it's the back end for an ASP.NET MVC multilingual application.
I have set the collation on the database that powers the front end to "sql_latin1_general_cp1_ci_as".
This database will store English, Russian, Arabic, etc. data, and I will therefore run into collation problems, as the stored procedures that access my data have WHERE clauses, ORDER BY clauses and so on.
I have one set of stored procedures with which I access the data for each language. I am looking for options for how to get around this issue.
I was thinking of creating a view for each set of data and specifying the collation on that. I will never have the scenario where I am querying across different languages. Alternatively I could specify the collation in the stored procedures, but this would mean my stored procedures would be different for each collation.
Any suggestions or ideas on how to handle the collation challenge here?
For starters, you'll want all of your columns to be nvarchar. That will take care of your storage problems. As for sorting and filtering, that's where collations become important, as you say.
Depending on what you're doing with the data, how many columns you need for filtering and sorting, and how you're doing the operations, one way to do it is with dynamic SQL. You can do something like
declare @collation sysname = 'Latin1_General_CI_AS'
declare @cmd nvarchar(max)
set @cmd = 'select * from person order by last_name collate ' + @collation
exec sp_executesql @cmd
That's not a great solution, but it works. You can also throw the collation after any field in a view, so as you mentioned, that's an option. Something like this, and then you can query it without having to specify collations.
create view v_Person_RU as
select first_name collate Cyrillic_General_CI_AI, last_name collate Cyrillic_General_CI_AI...
create view v_Person_AR as
select first_name collate Arabic_CI_AI, last_name collate Arabic_CI_AI...
Then you could just pick the right view to use for querying.
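For that to work without COLLATE at the call site, the view's expression columns need aliases. A fleshed-out sketch of the first view, using the person table and columns from the answer's own example:
create view v_Person_RU as
select first_name collate Cyrillic_General_CI_AI as first_name,
last_name collate Cyrillic_General_CI_AI as last_name
from person
-- callers then sort and filter with the view's collation automatically
select * from v_Person_RU order by last_name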
Possibly this will be helpful for you -
SELECT DATABASEPROPERTYEX('<your_db>', 'collation')
SELECT *
FROM dbo.table1 t
ORDER BY string_column COLLATE database_default -- i.e. sql_latin1_general_cp1_ci_as
Try it with the appropriate collation after COLLATE (or build the COLLATE + @collation clause into dynamic SQL, as shown above).
Thanks, hope it helps.

Increase the speed of SQL Server database access for millions of records

Morning All,
I have a website I am working on that is around 2,000 pages of code; it is a social media site for businesses. It has the potential for millions of users. Currently we have around 80,000 users and site access is getting sluggish. I am using 98% stored procedures in the site to improve speed. What I want to know is what I can do to improve data extraction speed and site loading times. To my knowledge, the Member table of the database is not using full-text indexing; would that make a difference? I guess it would for searching. But, for example, logging in takes a while. Here is the login SP script:
SELECT
a.MemberID,
CAST (ISNULL(a.ProfileTypeID,0) AS bit) AS HasProfile,
a.TimeOffsetDiff * a.TimeOffsetUnits AS TimeOffset,
b.City,
b.StateName AS State,
b.StateAbbr AS abbr,
b.Domain,
b.RegionID,
a.ProfileTypeID,
sbuser.sf_DisplayName(a.MemberID) AS DisplayName,
a.UserName,
a.ImgLib,
a.MemberREgionID AS HomeRegionID,
a.StateID,
a.IsSales,
a.IsAdmin
FROM Member a
INNER JOIN Region b ON b.RegionID = a.MemberRegionID
WHERE a.MemberID = @MemberID
UPDATE Member SET NumberLogins = (NumberLogins + 1) WHERE MemberID = @MemberID
Considering this is hunting through only 80,000 members and can take up to 15 seconds to log in, I consider that to be really slow. Any thoughts on how I can increase login speed?
Obviously, extracting member lists into pages can be laborious too. I recently updated outdated scripting that contained temporary datasets and the like for paging and replaced it with the following example:
IF @MODE = 'MEMBERSEARCHNEW'
DECLARE @TotalPages INT
BEGIN
SELECT @TotalPages = COUNT(*)/@PageSize
FROM Member a
LEFT JOIN State b ON b.StateID = a.StateID
WHERE (sbuser.sf_DisplayName(a.MemberID) LIKE @UserName + '%')
AND a.MemberID <> @MemberID;
WITH FindSBMembers AS
(
SELECT ROW_NUMBER() OVER(ORDER BY a.Claimed DESC, sbuser.sf_MemberHasAvatar(a.MemberID) DESC) AS RowNum,
a.MemberID, -- 1
a.UserName, -- 2
a.PrCity, -- 3
b.Abbr, -- 4
sbuser.sf_MemberHasImages(a.MemberID) AS MemberHasImages, -- 5
sbuser.sf_MemberHasVideo(a.MemberID) AS MemberHasVideo, -- 6
sbuser.sf_MemberHasAudio(a.MemberID) AS MemberHasAudio, -- 7
sbuser.sf_DisplayName(a.MemberID) AS DisplayName, -- 8
a.ProfileTypeID, -- 9
a.Zip, -- 10
a.PhoneNbr, -- 11
a.PrPhone, -- 12
a.Claimed, -- 13
@TotalPages AS TotalPages -- 14
FROM Member a
LEFT JOIN State b ON b.StateID = a.StateID
WHERE (sbuser.sf_DisplayName(a.MemberID) LIKE @UserName + '%')
AND a.MemberID <> @MemberID
)
SELECT *
FROM FindSBMembers
WHERE RowNum BETWEEN (@PG - 1) * @PageSize + 1
AND @PG * @PageSize
ORDER BY Claimed DESC, sbuser.sf_MemberHasAvatar(MemberID) DESC
END
Is there any further way I can squeeze any more speed out of this script..?
I have had other suggestions, including gzip compression and breaking the Member table into 26 tables based on the letters of the alphabet. I am interested to know how the big companies do it - how do sites like Facebook, Yelp, Yellow Pages and Twitter arrange their data? I am currently running on a shared hosting server; would an upgrade to a VPS or dedicated server help improve speed?
The site is written in Classic ASP, utilizing SQL Server 2005.
Any help that any of you can provide will be greatly appreciated.
Best Regards and Happy Coding!
Paul
**** ADDITION START:
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
GO
ALTER FUNCTION [sbuser].[sf_DisplayName](@MemberID bigint)
RETURNS varchar(150)
AS
BEGIN
DECLARE @OUT varchar(150)
DECLARE @UserName varchar(50)
DECLARE @FirstName varchar(50)
DECLARE @LastName varchar(50)
DECLARE @BusinessName varchar(50)
DECLARE @DisplayNameTypeID int
SELECT
@FirstName = upper(left(FirstName, 1)) + right(FirstName, len(FirstName) - 1),
@LastName = upper(left(LastName, 1)) + right(LastName, len(LastName) - 1) ,
@BusinessName = upper(left(BusinessName, 1)) + right(BusinessName, len(BusinessName) - 1),
@UserName = upper(left(UserName, 1)) + right(UserName, len(UserName) - 1),
/*
@FirstName = FirstName,
@LastName = LastName,
@BusinessName = BusinessName,
@UserName = UserName,
*/
@DisplayNameTypeID = DisplayNameTypeID
FROM Member
WHERE MemberID = @MemberID
IF @DisplayNameTypeID = 2 -- FIRST / LAST NAME
BEGIN
/*SET @OUT = @FirstName + ' ' + @LastName*/
SET @OUT = @LastName + ', ' + @FirstName
END
IF @DisplayNameTypeID = 3 -- FIRST NAME / LAST INITIAL
BEGIN
SET @OUT = @FirstName + ' ' + LEFT(@LastName,1) + '.'
END
IF @DisplayNameTypeID = 4 -- BUSINESS NAME
BEGIN
SET @OUT = @BusinessName + ''
END
RETURN @OUT
END
**** ADDITION END
80,000 isn't a whole lot of records, unless you either have no indexes or your data types are huge. If that query really is your bottleneck, then you might want to consider creating covering indexes on the Member table and the Region table.
Create an index on the Member table with MemberID as the key, and include ProfileTypeID, TimeOffsetDiff, TimeOffsetUnits, UserName, ImgLib, MemberRegionID, StateID, IsSales and IsAdmin.
Also, I just noticed your function sbuser.sf_DisplayName(a.MemberID). You might look into that function to make sure that it isn't your true bottleneck.
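If that index is missing, it might look like the sketch below; the key and INCLUDE columns mirror the login query above, and the index name is made up for illustration (INCLUDE is available from SQL Server 2005, the question's version):
CREATE NONCLUSTERED INDEX IX_Member_Login ON Member (MemberID)
INCLUDE (ProfileTypeID, TimeOffsetDiff, TimeOffsetUnits, UserName, ImgLib, MemberRegionID, StateID, IsSales, IsAdmin)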
First option to speed up sf_DisplayName is to add FirstName, LastName etc from members as parameters and use that to build the DisplayName instead of doing a lookup against the member table.
After that you could consider adding DisplayName as a computed, persisted column to the Member table. That means the DisplayName will be calculated when the member is saved, and the saved value will be used when you run the query. You can also add an index on the DisplayName column.
The GetDisplayName function must be created WITH SCHEMABINDING:
create function dbo.GetDisplayName(
@FirstName varchar(50),
@LastName varchar(50),
@DisplayNameType int)
returns varchar(102) with schemabinding
as
begin
declare @Res varchar(102)
set @Res = ''
if @DisplayNameType = 1
set @Res = @FirstName+' '+@LastName
if @DisplayNameType = 2
set @Res = @LastName+', '+@FirstName
return @Res
end
The table with the persisted column DisplayName
CREATE TABLE [dbo].[Member](
[ID] [int] NOT NULL,
[FirstName] [varchar](50) NOT NULL,
[LastName] [varchar](50) NOT NULL,
[DisplayNameType] [int] NOT NULL,
[DisplayName] AS ([dbo].[GetDisplayName]([FirstName],[LastName],[DisplayNameType])) PERSISTED,
CONSTRAINT [PK_Member] PRIMARY KEY CLUSTERED
(
[ID] ASC
)
)
The index on DisplayName
CREATE INDEX [IX_Member_DisplayName] ON [dbo].[Member]
(
[DisplayName] ASC
)
You should also have a closer look at what you are doing in sf_MemberHasImages, sf_MemberHasVideo and sf_MemberHasAudio. They are used in the column list of the cte. Not as bad as used in the where clause but they could still cause you problems.
The last one I spotted as a potential problem is sf_MemberHasAvatar. It is used in an ORDER BY in two places. The ORDER BY inside ROW_NUMBER() effectively acts like a WHERE, because of the filtering in the main query's WHERE RowNum BETWEEN (@PG - 1) * @PageSize + 1 clause.
The technique described with persisted column might be possible to use on the other functions as well.
Quick n dirty way to take the UDF call out of "every row"
SELECT *, sbuser.sf_DisplayName(MemberID) FROM (
SELECT
a.MemberID,
CAST (ISNULL(a.ProfileTypeID,0) AS bit) AS HasProfile,
a.TimeOffsetDiff * a.TimeOffsetUnits AS TimeOffset,
b.City,
b.StateName AS State,
b.StateAbbr AS abbr,
b.Domain,
b.RegionID,
a.ProfileTypeID,
a.UserName,
a.ImgLib,
a.MemberREgionID AS HomeRegionID,
a.StateID,
a.IsSales,
a.IsAdmin
FROM Member a
INNER JOIN Region b ON b.RegionID = a.MemberRegionID
WHERE a.MemberID = @MemberID
) AS m -- the derived table needs an alias
another way, if you don't want to modify any tables, is to just put the udf logic in the select statement:
case DisplayNameTypeID
when 2 then upper(left(LastName, 1)) + right(LastName, len(LastName) - 1) + ', ' + upper(left(FirstName, 1)) + right(FirstName, len(FirstName) - 1)
when 3 then upper(left(FirstName, 1)) + right(FirstName, len(FirstName) - 1) + ' ' + upper(left(LastName, 1)) + '.'
when 4 then upper(left(BusinessName, 1)) + right(BusinessName, len(BusinessName) - 1)
end as DisplayName
Yeah, it looks a bit gory, but all you have to do is modify the SP.
Put indexes on the primary and foreign keys (MemberID, RegionID, MemberRegionID)
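For instance (index names are illustrative; MemberID is probably already covered by a clustered primary key, so the join columns are the ones most likely missing):
CREATE NONCLUSTERED INDEX IX_Member_MemberRegionID ON Member (MemberRegionID)
CREATE NONCLUSTERED INDEX IX_Region_RegionID ON Region (RegionID)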
@Tom Gullen - In this instance the fact that classic ASP is used would seem to be an irrelevance, since the actual cost here seems to be in SQL (or whatever db tech this is running on).
@the question - I'd agree with Cosmin that indexing the relevant fields in the tables would provide a definite performance gain, assuming it hasn't already been done.
We had a case about a week ago where my boss was trying to do multiple conditional inserts from a batch file, which was taking forever. We placed a single index on a userid field and, hey presto, the same script took about a minute to execute.
Indexing!
Initial Thoughts
The problem here probably isn't with your stored procedures. Especially in regards to the login script, you are focusing your attention on a small and irrelevant place: a login command is a one-off cost, and you can have a much higher tolerance for script execution time on those sorts of pages.
You are using classic ASP, which is quite out of date now. When you are dealing with so many visitors, your server is going to need a lot of power to manage all those requests that it is interpreting. Interpreted pages will run slower than compiled pages.
Time Events
If you are convinced the database is being slow, use timers in your script. Add a general timer at the top of the page, and an SQL timer.
When the page starts loading, initialise the general timer. When you reach a stored procedure, start the SQL timer; when the query has finished, stop it. At the end of the page you have two totals: the SQL timer gives you the time spent running SQL, and the general timer minus the SQL timer gives you the time spent executing code. This helps you separate your database from your code in regards to efficiency.
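You can sanity-check the same split from the database side. A minimal sketch for timing a single procedure in isolation (usp_YourProcedure is a placeholder for whichever procedure you are testing):
DECLARE @t0 datetime
SET @t0 = GETDATE()
EXEC dbo.usp_YourProcedure -- placeholder name: the procedure under test
SELECT DATEDIFF(ms, @t0, GETDATE()) AS elapsed_ms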
Improving ASP Page Performance
I've detailed good ASP page design here:
VBScript Out Of Memory Error
Also consider:
Use Option Explicit at the top of your pages.
Set Response.Buffer = True
Use Response.Write inside <% %>; repeatedly opening and closing these is slow
I'll reiterate what I said in the linked answer: by far the best thing you can do for performance is to dump recordset results into an array with .getRows(). Do not loop recordsets. Do not select fields in queries you do not use. Only have one recordset and one ADO connection per page. I really recommend you read the link for good ASP page design.
Upgrade if no Glaring Issues
What are the specs of the server? Upgrading the hardware is probably your best route to increased performance in this instance, and most efficient in regards to cost/reward.
To replace the UDF, if that is the problem, I recommend having one field in the Member table to store the DisplayName as the data seems to be rather static from the looks of your function. You only need to update the field once in the beginning, and from then on only when someone registers or DisplayNameTypeID is changed. I hope this is helpful for you.
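A sketch of that one-off backfill, reusing the CASE logic from the inline-SQL suggestion above (the new DisplayName column and its size are assumptions to adapt to your schema):
ALTER TABLE Member ADD DisplayName varchar(150) NULL
GO
-- populate once; afterwards maintain it on registration and whenever DisplayNameTypeID changes
UPDATE Member
SET DisplayName =
case DisplayNameTypeID
when 2 then upper(left(LastName, 1)) + right(LastName, len(LastName) - 1) + ', ' + upper(left(FirstName, 1)) + right(FirstName, len(FirstName) - 1)
when 3 then upper(left(FirstName, 1)) + right(FirstName, len(FirstName) - 1) + ' ' + upper(left(LastName, 1)) + '.'
when 4 then upper(left(BusinessName, 1)) + right(BusinessName, len(BusinessName) - 1)
end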

Stored Procedure; Insert Slowness

I have an SP that takes 10 seconds to run about 10 times (about a second every time it is run). The platform is ASP.NET, and the server is SQL Server 2005. I have indexed the table (not only on the PK), and that is not the issue. Some caveats:
usp_SaveKeyword is not the issue. I commented out that entire SP and it made no difference.
I set @SearchID to 1 and the time was significantly reduced, only taking about 15 ms on average for the transaction.
I commented out the entire stored procedure except the insert into tblSearches, and strangely it took more time to execute.
Any ideas of what could be going on?
set ANSI_NULLS ON
go
ALTER PROCEDURE [dbo].[usp_NewSearch]
@Keyword VARCHAR(50),
@SessionID UNIQUEIDENTIFIER,
@time SMALLDATETIME = NULL,
@CityID INT = NULL
AS
BEGIN
SET NOCOUNT ON;
IF @time IS NULL SET @time = GETDATE();
DECLARE @KeywordID INT;
EXEC @KeywordID = usp_SaveKeyword @Keyword;
PRINT 'KeywordID : '
PRINT @KeywordID
DECLARE @SearchID BIGINT;
SELECT TOP 1 @SearchID = SearchID
FROM tblSearches
WHERE SessionID = @SessionID
AND KeywordID = @KeywordID;
IF @SearchID IS NULL BEGIN
INSERT INTO tblSearches
(KeywordID, [time], SessionID, CityID)
VALUES
(@KeywordID, @time, @SessionID, @CityID)
SELECT Scope_Identity();
END
ELSE BEGIN
SELECT @SearchID
END
END
Why are you using TOP 1 @SearchID instead of MAX(SearchID) or WHERE EXISTS in this query? TOP requires you to run the query and retrieve the first row from the result set; if the result set is large, this could consume quite a lot of resources before you get the final result.
SELECT TOP 1 @SearchID = SearchID
FROM tblSearches
WHERE SessionID = @SessionID
AND KeywordID = @KeywordID;
I don't see any obvious reason for this - either of the aforementioned constructs should get you something semantically equivalent with a very cheap index lookup. Unless I'm missing something, you should be able to do something like
select @SearchID = isnull (max (SearchID), -1)
from tblSearches
where SessionID = @SessionID
and KeywordID = @KeywordID
This ought to be fairly efficient and (unless I'm missing something) semantically equivalent.
Enable "Display Estimated Execution Plan" in SQL Management Studio - where does the execution plan show you spending the time? It'll guide you on the heuristics being used to optimize the query (or not in this case). Generally the "fatter" lines are the ones to focus on - they're ones generating large amounts of I/O.
Unfortunately even if you tell us the table schema, only you will be able to see actually how SQL chose to optimize the query. One last thing - have you got a clustered index on tblSearches?
Triggers!
They are insidious indeed.
What is the clustered index on tblSearches? If the clustered index is not on the primary key, the database may be spending a lot of time reordering.
How many other indexes do you have?
Do you have any triggers?
Where does the execution plan indicate the time is being spent?
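Quick ways to check those last three points from SSMS (tblSearches is the table from the question):
-- any triggers on the table?
SELECT name FROM sys.triggers WHERE parent_id = OBJECT_ID('tblSearches')
-- which indexes exist, and which one is clustered?
EXEC sp_helpindex 'tblSearches'
-- for the time split, enable "Include Actual Execution Plan" (Ctrl+M) and rerun the procedure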