How can I solve a performance issue in my stored procedure?

I have a performance problem with a stored procedure. When I checked my benchmark results, I realized that "MatchxxxReferencesByIds" has an average LastElapsedTimeInSecond of 240.25 ms. How can I improve my procedure?
ALTER PROCEDURE [Common].[MatchxxxReferencesByIds]
    (@refxxxIds VARCHAR(MAX),
     @refxxxType NVARCHAR(250))
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRAN

    DECLARE @fake_tbl TABLE (xxxid NVARCHAR(50))

    INSERT INTO @fake_tbl
        SELECT LTRIM(RTRIM(split.a.value('.', 'NVARCHAR(MAX)'))) AS fqdn
        FROM
            (SELECT
                 CAST ('<M>' + REPLACE(@refxxxIds, ',', '</M><M>') + '</M>' AS XML) AS data
            ) AS a
        CROSS APPLY
            data.nodes ('/M') AS split(a)

    SELECT [p].[ReferencedxxxId]
    FROM [Common].[xxxReference] AS [p]
    WHERE ([p].[IsDeleted] = 0)
      AND (([p].[ReferencedxxxType] COLLATE Turkish_CI_AS = @refxxxType COLLATE Turkish_CI_AS)
      AND [p].[ReferencedxxxId] COLLATE Turkish_CI_AS IN (SELECT ft.xxxid COLLATE Turkish_CI_AS FROM @fake_tbl ft))

    COMMIT;
END;

One can only make assumptions without knowing the table's schema, indexes and data sizes.
Hard-coding collations can prevent the query optimizer from using any indexes on the ReferencedEntityId column. The field name and sample data '423423,423423,423432,23423' suggest this is a numeric column anyway (int? bigint?). The collation shouldn't be needed and the variable's column type should match the table's type.
Finally, a.value can return an int or bigint directly, which means the splitting query can be rewritten as:
declare @refEntityIds nvarchar(max) = '423423,423423,423432,23423';

DECLARE @fake_tbl TABLE (entityid bigint PRIMARY KEY, INDEX IX_TBL (entityid))

INSERT INTO @fake_tbl
    SELECT split.a.value('.', 'bigint') AS fqdn
    FROM
        (SELECT
             CAST ('<M>' + REPLACE(@refEntityIds, ',', '</M><M>') + '</M>' AS XML) AS data
        ) AS a
    CROSS APPLY
        data.nodes ('/M') AS split(a)
The input data contains some duplicates, so entityid can't actually be a PRIMARY KEY; in that case drop the PRIMARY KEY and keep only the index.
After that, the query can change to:
SELECT [p].[ReferencedEntityId]
FROM [Common].[EntityReference] AS [p]
WHERE [p].[IsDeleted] = 0
  AND [p].[ReferencedEntityType] COLLATE Turkish_CI_AS = @refEntityType COLLATE Turkish_CI_AS
  AND [p].[ReferencedEntityId] IN (SELECT ft.entityid FROM @fake_tbl ft)
The next problem is the hard-coded collation. Unless it matches the column's actual collation, this prevents the server from using any indexes that cover that column. How to fix this depends on the actual data statistics. Perhaps the column's collation has to change or perhaps the rows after filtering by ReferencedEntityId are so few that there's no benefit to this.
Finally, IsDeleted can't be usefully indexed on its own. It's either a bit column whose values are 1/0 or another numeric column that still contains only 0/1. An index that's so bad at selecting rows won't be used by the query optimizer, because it's actually faster to just scan the rows returned by the other conditions.
A general rule is to put the most selective index column first. The database combines all columns to create one "key" value and construct a B+-tree index from it. The more selective the key, the fewer index nodes need to be scanned.
IsDeleted can still be used in a filtered index to index only the non-deleted rows. This allows the query optimizer to eliminate unwanted rows from the search. The resulting index will be smaller too, which means the same number of IO operations will load more index pages into memory and allow faster seeking.
All of this means that EntityReference should have an index like this one:
CREATE NONCLUSTERED INDEX IX_EntityReference_ReferenceEntityID
ON Common.EntityReference (ReferencedEntityId, ReferencedEntityType)
WHERE IsDeleted = 0;
If the collations don't match, ReferencedEntityType won't be used for seeking. If this is the most common case, we can remove ReferencedEntityType from the index key and put it in an INCLUDE clause. The field won't be part of the index key, although it will still be available for filtering without having to load data from the actual table:
CREATE NONCLUSTERED INDEX IX_EntityReference_ReferenceEntityID
ON Common.EntityReference (ReferencedEntityId)
INCLUDE (ReferencedEntityType)
WHERE IsDeleted = 0;
Of course, if that's the most common case, the column's collation should be changed instead.
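Changing a column's collation is itself a straightforward ALTER; a sketch, assuming the column is NVARCHAR(250) NOT NULL (the real type and nullability must be repeated exactly):
-- Sketch only: repeat the column's actual definition, changing just the collation
ALTER TABLE Common.EntityReference
    ALTER COLUMN ReferencedEntityType NVARCHAR(250) COLLATE Turkish_CI_AS NOT NULL;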

Based on the execution plan of the stored procedure, what makes it perform slowly is the part where you work with the XML.
Let's rethink the solution:
I have created a table like this:
CREATE TABLE [Common].[EntityReference]
(
IsDeleted BIT,
ReferencedEntityType VARCHAR(100),
ReferencedEntityId VARCHAR(10)
);
GO
and populate it like this (insert 1M records into it):
DECLARE @i INT = 1000000;
DECLARE @isDeleted BIT,
        @ReferencedEntityType VARCHAR(100),
        @ReferencedEntityId VARCHAR(10);

WHILE @i > 0
BEGIN
    SET @isDeleted = (SELECT @i % 2);
    SET @ReferencedEntityType = 'TEST' + CASE WHEN @i % 2 = 0 THEN '' ELSE CAST(@i % 2 AS VARCHAR(100)) END;
    SET @ReferencedEntityId = CAST(@i AS VARCHAR(10));
    INSERT INTO [Common].[EntityReference]
    (
        IsDeleted,
        ReferencedEntityType,
        ReferencedEntityId
    )
    VALUES (@isDeleted, @ReferencedEntityType, @ReferencedEntityId);
    SET @i = @i - 1;
END;
Let's analyse your code:
You have a comma-delimited input (@refEntityIds), which you want to split and then run a query against those values. (Your SP's subtree cost on my PC is about 376.) To do so you have different approaches:
1. Pass a table-valued parameter to the stored procedure which contains the refEntityIds (a sketch follows below).
2. Make use of the STRING_SPLIT function to split the string.
Let's see the sample query for the second approach:
INSERT INTO @fake_tbl
SELECT value
FROM STRING_SPLIT(@refEntityIds, ',');
Using this, you will gain a great performance improvement in your code (subtree cost: 6.19 without the following indexes). BUT this feature is not available in SQL Server 2008!
You can use a replacement for this function (read this: https://stackoverflow.com/a/54926996/1666800) and change your query to this (the subtree cost is still about 6.19):
INSERT INTO @fake_tbl
SELECT value FROM dbo.[fn_split_string_to_column](@refEntityIds, ',')
In this case, again, you will see a notable performance improvement.
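As for the first option, here is a minimal sketch of a table-valued parameter version, assuming the ids are numeric as suggested in the first answer (the type and procedure names are illustrative):
CREATE TYPE dbo.EntityIdList AS TABLE (entityid BIGINT PRIMARY KEY);
GO
CREATE PROCEDURE Common.MatchEntityReferencesByIdsTvp
    @refEntityIds dbo.EntityIdList READONLY,
    @refEntityType NVARCHAR(250)
AS
BEGIN
    SET NOCOUNT ON;
    -- No string splitting at all: the caller sends the ids as a table
    SELECT p.ReferencedEntityId
    FROM Common.EntityReference AS p
    WHERE p.IsDeleted = 0
      AND p.ReferencedEntityType = @refEntityType
      AND p.ReferencedEntityId IN (SELECT t.entityid FROM @refEntityIds t);
END;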
You can also create a non-clustered index on the [Common].[EntityReference] table, which gives a little performance improvement too. But please think it over before creating an index; it might have a negative impact on your DML operations:
CREATE NONCLUSTERED INDEX [Index Name] ON [Common].[EntityReference]
(
[IsDeleted] ASC
)
INCLUDE ([ReferencedEntityType],[ReferencedEntityId])
If I don't have this index (suppose I have replaced your split solution with mine), the subtree cost is 6.19. When I add the aforementioned index, the subtree cost decreases to 4.70, and finally, when I change the index to the following one, the subtree cost is 5.16:
CREATE NONCLUSTERED INDEX [Index Name] ON [Common].[EntityReference]
(
[ReferencedEntityType] ASC,
[ReferencedEntityId] ASC
)
INCLUDE ([IsDeleted])
Thanks to @PanagiotisKanavos, the following index performs even better than the aforementioned ones (subtree cost: 3.95):
CREATE NONCLUSTERED INDEX IX_EntityReference_ReferenceEntityID
ON Common.EntityReference (ReferencedEntityId)
INCLUDE(ReferencedEntityType)
WHERE IsDeleted = 0;
Also please note that using a transaction against a local table variable has almost no effect; you can probably simply omit it.

If [p].[ReferencedEntityId] is going to contain integers, then you don't need to apply the COLLATE clause; you can directly apply the IN condition.
You can convert the comma-separated values to a list of integers using a table-valued function. There are many samples. Keep the datatype of the ID as integer, to avoid applying the collation at all.
[p].[ReferencedEntityId] IN (SELECT ft.entityid FROM @fake_tbl ft)
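As an illustration, a minimal sketch of such a function, reusing the XML-shredding trick from the question (the function name is illustrative, and this assumes the ids fit in a bigint):
CREATE FUNCTION dbo.fn_CsvToBigintTable (@csv VARCHAR(MAX))
RETURNS TABLE
AS
RETURN
(
    -- Same XML split as in the question, but cast straight to bigint
    SELECT split.a.value('.', 'bigint') AS entityid
    FROM (SELECT CAST('<M>' + REPLACE(@csv, ',', '</M><M>') + '</M>' AS XML) AS data) AS x
    CROSS APPLY x.data.nodes('/M') AS split(a)
);
It could then be used directly: [p].[ReferencedEntityId] IN (SELECT entityid FROM dbo.fn_CsvToBigintTable(@refEntityIds)).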

I don't think you need a TRAN. You are simply "shredding" your comma-separated values into a table variable and doing a SELECT. A TRAN is not needed here.
Try an EXISTS:
SELECT [p].[ReferencedEntityId]
FROM [Common].[EntityReference] AS [p]
WHERE ([p].[IsDeleted] = 0)
  AND (([p].[ReferencedEntityType] COLLATE Turkish_CI_AS = @refEntityType COLLATE Turkish_CI_AS)
  AND EXISTS (SELECT 1 FROM @fake_tbl ft WHERE ft.entityid COLLATE Turkish_CI_AS = [p].[ReferencedEntityId] COLLATE Turkish_CI_AS))
Also see https://www.sqlshack.com/efficient-creation-parsing-delimited-strings/ for different ways to parse your delimited string.
A quote from the article:
Microsoft’s built-in function provides a solution that is convenient and appears to perform well. It isn’t faster than XML, but it clearly was written in a way that provides an easy-to-optimize execution plan. Logical reads are higher, as well. While we cannot look under the covers and see exactly how Microsoft implemented this function, we at least have the convenience of a function to split strings that is shipped with SQL Server. Note that the separator passed into this function must be of size 1. In other words, you cannot use STRING_SPLIT with a multi-character delimiter, such as ‘”,”’.
Post a screenshot of your execution plan. If you don't have a proper index (or you have "hints" that prevent the use of indexes), your query will never perform well.

Related

User-defined In-line Table-Valued Functions Called On Each Other In SQL Server 2008

I am using SQL Server 2008, and I am struggling with learning how to correctly call a user-defined inline table-valued function on another user-defined inline table-valued function (that is, since each expects a scalar or scalars as input and outputs a table, I want to learn how to correctly call one by passing it another table, whereupon each row is treated as its scalar inputs).
I posted a couple questions related to this recently, but I think I was not clear enough, and did not sufficiently encapsulate the problem to cleanly demonstrate it. I have now prepared the proper statements to provide anyone interested in helping the necessary tables, views, functions, and SELECT outputs to see the problem occur in front of them by executing the query below.
There are several ways I can phrase the core question, and from here and other forums, I can tell I have difficulty clearly expressing it. I am going to phrase it several ways here, but these are all meant to be the same question, phrased differently so people from different backgrounds can more easily understand me.
How do I correctly write the "imageFileNameFromAddress" function below so it works as intended? To wit, the intent is that it takes the same input as "bookAndPageFromAddress" and, using bookAndPageFromAddress and imageFileNameFromBookPage, passes the input to the first, then its output to the second, and returns the second's output.
Why does the third SELECT statement at the bottom below provide different results from the second one, and how do I fix the underlying function(s) to provide identical results, without repeating code from the other functions?
What is the correct syntax for the OUTER APPLY call in imageFileNameFromAddress so that its output is not null?
WARNING: The code below constructs the necessary tables, views, and functions to demonstrate the problem by dropping them first if they exist, so please please please check first to make sure you don't drop anything of your own! The final three SELECTS demonstrate the problem; the final two SELECTS should have identical output, but do not - the first one (of the final two, so the middle of the three) is a three row table of strings, and the final one is a one row table containing only a NULL.
USE [TOM_GIS]
GO
IF OBJECT_ID(N'[dbo].[constant]', N'U') IS NOT NULL
DROP TABLE [dbo].[constant]
CREATE TABLE [dbo].[constant]
(
ID INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
BOOK varchar(5),
PAGE varchar(5),
DocID numeric(8, 0)
)
INSERT INTO [dbo].[constant]
VALUES(' 4043',' 125', 576030)
GO
IF OBJECT_ID(N'[dbo].[images]', N'U') IS NOT NULL
DROP TABLE [dbo].[images]
CREATE TABLE [dbo].[images]
(
ID INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
DocID numeric(8, 0),
ImageID numeric(12,0)
)
INSERT INTO [dbo].[images] VALUES(576030, 1589666);
INSERT INTO [dbo].[images] VALUES(576030, 1589667);
INSERT INTO [dbo].[images] VALUES(576030, 1589668);
GO
IF OBJECT_ID(N'[dbo].[addressBookPage]', N'U') IS NOT NULL
DROP TABLE [dbo].[addressBookPage]
CREATE TABLE [dbo].[addressBookPage]
(
ID INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
PARCEL_ADDRESS nvarchar(50),
BOOK nchar(10),
PAGE nchar(10)
)
INSERT INTO [dbo].[addressBookPage]
VALUES('155 CENTER STREET','4043', '125')
GO
IF OBJECT_ID(N'[dbo].[vw_quindraco]') IS NOT NULL
DROP VIEW [dbo].[vw_quindraco]
GO
CREATE VIEW [dbo].[vw_quindraco]
AS
WITH files AS (SELECT RIGHT('00000000' + LTRIM(STR(c.DocID)), 8) AS PathInfo
,RIGHT('0000000000' + LTRIM(STR(i.ImageID)), 12) AS FileName
,ltrim(c.Book) as Book
,ltrim(c.Page) as Page
FROM [dbo].[constant] AS c INNER JOIN
[dbo].[images] AS i ON c.DocID = i.DocID)
SELECT 'Images/' + SUBSTRING(PathInfo, 1, 2) + '/' + SUBSTRING(PathInfo, 3, 2) + '/' + SUBSTRING(PathInfo, 5, 2)
+ '/' + RIGHT(PathInfo, 8) + '/' + FileName + '.tif' AS FullFileName
,Book
,Page
FROM files AS files_1
GO
IF OBJECT_ID(N'[dbo].[bookAndPageFromAddress]') IS NOT NULL
DROP FUNCTION [dbo].[bookAndPageFromAddress];
GO
CREATE FUNCTION [dbo].[bookAndPageFromAddress] (@address NVARCHAR(max))
RETURNS TABLE AS RETURN(
SELECT PARCEL_ADDRESS AS Address, Book, Page
FROM [dbo].[addressBookPage]
WHERE PARCEL_ADDRESS like '%' + @address + '%'
);
GO
IF OBJECT_ID(N'[dbo].[imageFileNameFromBookPage]') IS NOT NULL
DROP FUNCTION [dbo].[imageFileNameFromBookPage];
GO
CREATE FUNCTION [dbo].[imageFileNameFromBookPage] (@book nvarchar(max), @page nvarchar(max))
RETURNS TABLE AS RETURN(
SELECT i.FullFileName
FROM [dbo].[vw_quindraco] i
WHERE i.Book like @book
AND i.Page like @page
);
GO
IF OBJECT_ID(N'[dbo].[imageFileNameFromAddress]') IS NOT NULL
DROP FUNCTION [dbo].[imageFileNameFromAddress];
GO
CREATE FUNCTION [dbo].[imageFileNameFromAddress] (@address NVARCHAR(max))
RETURNS TABLE AS RETURN(
SELECT *
FROM [dbo].[bookAndPageFromAddress](@address) addresses
OUTER APPLY [dbo].[imageFileNameFromBookPage](addresses.Book, addresses.Page) foo
);
GO
SELECT Book,Page FROM [dbo].[bookAndPageFromAddress]('155 Center Street');
SELECT FullFileName FROM [dbo].[imageFileNameFromBookPage]('4043','125');
SELECT FullFileName FROM [dbo].[imageFileNameFromAddress]('155 Center Street')
You have your table fields as nchars, and you are using Like.
Because it's nchar, the value is padded with spaces to the declared length (10).
Because it's Like, the spaces are considered an essential part of a match, whereas the equality operator, =, would ignore trailing spaces.
Because data types in the table and in the function parameters do not match, implicit conversions happen in the background, ultimately causing comparison to fail because of spaces.
Use = instead of Like inside imageFileNameFromBookPage to quickly fix it.
Better yet, use correct data types in all functions and views to avoid any conversions.
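For instance, the quick fix might look like this (a sketch of the corrected function; only the comparison operator changes):
ALTER FUNCTION [dbo].[imageFileNameFromBookPage] (@book nvarchar(max), @page nvarchar(max))
RETURNS TABLE AS RETURN(
SELECT i.FullFileName
FROM [dbo].[vw_quindraco] i
WHERE i.Book = @book -- '=' ignores trailing spaces, unlike LIKE
AND i.Page = @page
);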

SQL How to Split One Column into Multiple Variable Columns

I am working on MSSQL, trying to split one string column into multiple columns. The string column has numbers separated by semicolons, like:
190230943204;190234443204;
However, some rows have more numbers than others, so in the database you can have
190230943204;190234443204;
121340944534;340212343204;134530943204
I've seen some solutions for splitting one column into a specific number of columns, but not variable columns. The columns that have less data (2 series of strings separated by semicolons instead of 3) will have nulls in the third place.
Ideas? Let me know if I must clarify anything.
Splitting this data into separate columns is a very good start (comma-separated values are a heresy). However, a "variable number of properties" should typically be modeled as a one-to-many relationship.
CREATE TABLE main_entity (
id INT PRIMARY KEY,
other_fields INT
);
CREATE TABLE entity_properties (
    main_entity_id INT,
    property_value INT,
    -- composite key: an entity can have many properties
    PRIMARY KEY (main_entity_id, property_value),
    FOREIGN KEY (main_entity_id) REFERENCES main_entity(id)
);
entity_properties.main_entity_id is a foreign key to main_entity.id.
Congratulations, you are on the right path, this is called normalisation. You are about to reach the First Normal Form.
Beware, however: these properties should have a sensibly similar nature (i.e. all phone numbers, or addresses, etc.). Do not fall into the dark side (a.k.a. the Entity-Attribute-Value anti-pattern) and be tempted to throw all properties into the same table. If you can identify several types of attributes, store each type in a separate table.
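As a quick illustration, retrieving each entity together with its properties then becomes a plain join (column names as in the sketch above):
SELECT e.id, e.other_fields, p.property_value
FROM main_entity e
LEFT JOIN entity_properties p ON p.main_entity_id = e.id
ORDER BY e.id;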
If these are all fixed length strings (as in the question), then you can do the work fairly simply (at least relative to other solutions):
select substring(col, 1 + 13*(n - 1), 12) as val
from t join
     (select 1 as n union all select 2 union all select 3
     ) n
     on len(t.col) >= 13*n.n - 1
This is a useful hack if all the entries are the same size (not so easy if they are of different sizes). Do, however, think about the data structure, because a semicolon (or comma) separated list is not a very good data structure.
If I were you, I would create a simple function that divides values separated by ';', like this:
IF EXISTS (SELECT * FROM sysobjects WHERE id = object_id(N'fn_Split_List') AND xtype IN (N'FN', N'IF', N'TF'))
BEGIN
    DROP FUNCTION [dbo].[fn_Split_List]
END
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE FUNCTION [dbo].[fn_Split_List](@List NVARCHAR(512))
RETURNS @ResultRowset TABLE ( [Value] NVARCHAR(128) PRIMARY KEY)
AS
BEGIN
    DECLARE @XML xml = N'<r><![CDATA[' + REPLACE(@List, ';', ']]></r><r><![CDATA[') + ']]></r>'
    INSERT INTO @ResultRowset ([Value])
    SELECT DISTINCT RTRIM(LTRIM(Tbl.Col.value('.', 'NVARCHAR(128)')))
    FROM @XML.nodes('//r') Tbl(Col)
    RETURN
END
GO
Then simply call it in this way:
SET NOCOUNT ON
GO
DECLARE @RawData TABLE( [Value] NVARCHAR(256))
INSERT INTO @RawData ([Value])
VALUES ('1111111;22222222')
      ,('3333333;113113131')
      ,('776767676')
      ,('89332131;313131312;54545353')
SELECT SL.[Value]
FROM @RawData AS RD
CROSS APPLY [fn_Split_List] ([Value]) AS SL
SET NOCOUNT OFF
GO
The result is as follows:
Value
1111111
22222222
113113131
3333333
776767676
313131312
54545353
89332131
Anyway, the logic in the function is not complicated, so you can easily put it anywhere you need.
Note: there is no limitation on how many values you can have separated with ';', but there is a length limitation in the function that you can raise to NVARCHAR(MAX) if you need.
EDIT:
As I can see, there are some rows in your example that will cause the function to return empty strings. For example:
number;number;
will return:
number
number
'' (empty string)
To clear them, just add the following WHERE clause to the statement above, like this:
SELECT SL.[Value]
FROM @RawData AS RD
CROSS APPLY [fn_Split_List] ([Value]) AS SL
WHERE LEN(SL.[Value]) > 0

Sort nvarchar in SQL Server 2008

I have a table with this data in SQL Server:
Id
=====
1
12e
5
and I want to order this data like this:
id
====
1
5
12e
My id column is of type nvarchar(50) and I can't convert it to int.
Is this possible that I sort the data in this way?
As a general rule, if you ever find yourself manipulating parts of columns, you're almost certainly doing it wrong.
If your ID is made up of a numeric and alpha component and you need to fiddle with just the numeric bit, make it two columns and save yourself some angst. In that case, you have an integral id_numeric and a varchar id_alpha and your query is simply:
select cast(id_numeric as varchar(20)) + id_alpha as id
from mytable
order by id_numeric asc
Or, if you really must store that as a single column, create extra columns to hold the individual parts and use those for sorting and selection. But, in order to mitigate the problems of having duplicate data in a row, use triggers to ensure the data remains consistent:
select id
from mytable
order by id_numeric asc
You usually don't want to have to do this splitting on every select since that never scales well. By doing it as an update/insert trigger, you only do the splitting when needed (ie, when the data changes) and this cost is amortised across all the selects. That's a good idea because, in the vast majority of cases, databases are read far more often than they're written.
And it's perfectly normal practice to revert to lesser levels of normalisation for performance reasons, provided that you understand and mitigate the consequences.
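A minimal sketch of such a trigger, assuming the table is named mytable, that id uniquely identifies rows and always starts with at least one digit, and that the id_numeric/id_alpha columns already exist (all names are illustrative):
CREATE TRIGGER trg_mytable_split_id
ON mytable
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Split ids such as '12e' into a numeric part (12) and an alpha tail ('e')
    UPDATE t
    SET id_numeric = CASE WHEN PATINDEX('%[^0-9]%', i.id) > 0
                          THEN CAST(LEFT(i.id, PATINDEX('%[^0-9]%', i.id) - 1) AS INT)
                          ELSE CAST(i.id AS INT) END,
        id_alpha   = CASE WHEN PATINDEX('%[^0-9]%', i.id) > 0
                          THEN SUBSTRING(i.id, PATINDEX('%[^0-9]%', i.id), 50)
                          ELSE '' END
    FROM mytable t
    JOIN inserted i ON i.id = t.id;
END;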
I'd actually use something along the lines of this function, though be warned that it's not going to be super-speedy. I've modified that function to return only the numbers:
CREATE FUNCTION dbo.UDF_ParseNumericChars
(
    @string VARCHAR(8000)
)
RETURNS VARCHAR(8000)
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @IncorrectCharLoc SMALLINT
    SET @IncorrectCharLoc = PATINDEX('%[^0-9]%', @string)
    WHILE @IncorrectCharLoc > 0
    BEGIN
        SET @string = STUFF(@string, @IncorrectCharLoc, 1, '')
        SET @IncorrectCharLoc = PATINDEX('%[^0-9]%', @string)
    END
    RETURN @string
END
GO
Once you create that function, then you can do your sort like this:
SELECT YourMixedColumn
FROM YourTable
ORDER BY CONVERT(INT, dbo.UDF_ParseNumericChars(YourMixedColumn))
It can also be sorted with the LEN function, adding id as a tie-breaker so equal-length values come out in order:
create table #temp (id nvarchar(50) null)
select * from #temp order by LEN(id), id

How hard would you try to make your SQL queries secure?

I am in a situation where I am given a comma-separated VarChar as input to a stored procedure. I want to do something like this:
SELECT * FROM tblMyTable
INNER JOIN /*Bunch of inner joins here*/
WHERE ItemID IN ($MyList);
However, you can't use a VarChar with the IN statement. There are two ways to get around this problem:
(The Wrong Way) Create the SQL query in a String, like so:
SET $SQL = '
SELECT * FROM tblMyTable
INNER JOIN /*Bunch of inner joins here*/
WHERE ItemID IN (' + $MyList + ');
EXEC($SQL);
(The Right Way) Create a temporary table that contains the values of $MyList, then join that table in the initial query.
My question is:
Option 2 has a relatively large performance hit with creating a temporary table, which is less than ideal.
While Option 1 is open to an SQL injection attack, since my SPROC is being called from an authenticated source, does it really matter? Only trusted sources will execute this SPROC, so if they choose to bugger up the database, that is their prerogative.
So, how far would you go to make your code secure?
What database are you using? In SQL Server you can create a split function that can split a long string and return a table in sub-second time. You use the table-function call like a regular table in a query (no temp table necessary).
You need to create a split function, or if you have one just use it. This is how a split function can be used:
SELECT
    *
FROM YourTable y
INNER JOIN dbo.yourSplitFunction(@Parameter) s ON y.ID = s.Value
I prefer the number table approach to split a string in TSQL but there are numerous ways to split strings in SQL Server, see the previous link, which explains the PROs and CONs of each.
For the Numbers Table method to work, you need to do this one time table setup, which will create a table Numbers that contains rows from 1 to 10,000:
SELECT TOP 10000 IDENTITY(int,1,1) AS Number
INTO Numbers
FROM sys.objects s1
CROSS JOIN sys.objects s2
ALTER TABLE Numbers ADD CONSTRAINT PK_Numbers PRIMARY KEY CLUSTERED (Number)
Once the Numbers table is set up, create this split function:
CREATE FUNCTION [dbo].[FN_ListToTable]
(
    @SplitOn char(1)      --REQUIRED, the character to split the @List string on
   ,@List varchar(8000)   --REQUIRED, the list to split apart
)
RETURNS TABLE
AS
RETURN
(
    ----------------
    --SINGLE QUERY-- --this will not return empty rows
    ----------------
    SELECT
        ListValue
    FROM (SELECT
              LTRIM(RTRIM(SUBSTRING(List2, number + 1, CHARINDEX(@SplitOn, List2, number + 1) - number - 1))) AS ListValue
          FROM (
                SELECT @SplitOn + @List + @SplitOn AS List2
               ) AS dt
          INNER JOIN Numbers n ON n.Number < LEN(dt.List2)
          WHERE SUBSTRING(List2, number, 1) = @SplitOn
         ) dt2
    WHERE ListValue IS NOT NULL AND ListValue != ''
);
GO
GO
You can now easily split a CSV string into a table and join on it:
select * from dbo.FN_ListToTable(',','1,2,3,,,4,5,6777,,,')
OUTPUT:
ListValue
-----------------------
1
2
3
4
5
6777
(6 row(s) affected)
You can use the CSV string like this; no temp table necessary:
SELECT * FROM tblMyTable
INNER JOIN /*Bunch of inner joins here*/
WHERE ItemID IN (select ListValue from dbo.FN_ListToTable(',',$MyList));
I would personally prefer option 2, in that just because a source is authenticated does not mean you should let your guard down. You would leave yourself open to potential rights escalations, where an authenticated low-level user is still able to execute commands against the database that you had not intended.
The phrase you use, 'trusted sources': it might be better to assume an X-Files approach and trust no one.
If someone buggers up the database you might still be getting a call.
A good option that is similar to option two is to use a function to create a table in memory from the CSV list. It is reasonably fast and offers the protections of option two. That table can then be joined in the query, e.g.
CREATE FUNCTION [dbo].[simple_strlist_to_tbl] (@list nvarchar(MAX))
RETURNS @tbl TABLE (str varchar(4000) NOT NULL) AS
BEGIN
    DECLARE @pos int,
            @nextpos int,
            @valuelen int
    SELECT @pos = 0, @nextpos = 1
    WHILE @nextpos > 0
    BEGIN
        SELECT @nextpos = charindex(',', @list, @pos + 1)
        SELECT @valuelen = CASE WHEN @nextpos > 0
                                THEN @nextpos
                                ELSE len(@list) + 1
                           END - @pos - 1
        INSERT @tbl (str)
        VALUES (substring(@list, @pos + 1, @valuelen))
        SELECT @pos = @nextpos
    END
    RETURN
END
Then in the join:
tblMyTable INNER JOIN
           simple_strlist_to_tbl(@MyList) list ON tblMyTable.itemId = list.str
Option 3 is to confirm each item in the list is in fact an integer before concatenating the string to your SQL statement.
Do this by parsing the input string (e.g., split into an array), loop through and convert each value to an int, and then recreate the list yourself before concatenating back to the SQL statement. This will give you reasonable assurance that SQL injection cannot occur.
It is safer to concatenate strings that have been created by your application, because you can do things like check for int, but it also means your code is written in a way that a subsequent developer may modify slightly, thus opening back up the risk of SQL injection, because they do not realize that is what your code is protecting against. Make sure you comment well what you are doing if you go this route.
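A minimal sketch of that validation done in T-SQL instead, assuming the list may contain only digits and commas (names are illustrative, and stricter checks, e.g. for empty elements, can be added):
-- Inside the stored procedure, before building any dynamic SQL:
IF @MyList IS NULL OR @MyList = '' OR @MyList LIKE '%[^0-9,]%'
BEGIN
    RAISERROR('Invalid id list.', 16, 1);
    RETURN;
END
DECLARE @SQL nvarchar(max) =
    N'SELECT * FROM tblMyTable WHERE ItemID IN (' + @MyList + N');';
EXEC sp_executesql @SQL;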
A third option: pass the values to the stored procedure in an array. Then you can either assemble the comma-separated string in your code and use the dynamic SQL option, or (if your flavour of RDBMS permits it) use the array directly in the SELECT statement.
Why don't you write a CLR split function that will do all the job nicely and easily? You can write user-defined table functions which return a table, doing the string splitting with the .NET infrastructure. Heck, in SQL 2008 you can even give them hints if they return the strings sorted in some way, ascending for example, which can help the optimizer.
Or maybe you can't do CLR integration; then you have to stick to T-SQL, but I personally would go for the CLR solution.

Filtering With Multi-Select Boxes With SQL Server

I need to filter result sets from SQL Server based on selections from a multi-select list box. I've been through the idea of doing an in-string match to determine if the row value exists in the selected filter values, but that's prone to partial matches (e.g. Car matches Carpet).
I also went through splitting the string into a table and joining/matching based on that, but I have reservations about how that is going to perform.
Seeing as this is a seemingly common task, I'm looking to the Stack Overflow community for some feedback and maybe a couple suggestions on the most commonly utilized approach to solving this problem.
I solved this one by writing a table-valued function (we're using 2005) which takes a delimited string and returns a table. You can then join to that or use WHERE EXISTS or WHERE x IN. We haven't done full stress testing yet, but with limited use and reasonably small sets of items I think that performance should be ok.
Below is one of the functions as a starting point for you. I also have one written to specifically accept a delimited list of INTs for ID values in lookup tables, etc.
Another possibility is to use LIKE with delimiters to make sure that partial matches are ignored, but you can't use indexes with that, so performance will be poor for any large table. For example:
SELECT
    my_column
FROM
    My_Table
WHERE
    @my_string LIKE '%|' + my_column + '|%'
/*
Name: GetTableFromStringList
Description: Returns a table of values extracted from a delimited list
Parameters:
@StringList - A delimited list of strings
@Delimiter - The delimiter used in the delimited list
History:
Date       Name      Comments
---------- --------- ----------------------------------------------------
2008-12-03 T. Hummel Initial Creation
*/
CREATE FUNCTION dbo.GetTableFromStringList
(
    @StringList VARCHAR(1000),
    @Delimiter CHAR(1) = ','
)
RETURNS @Results TABLE
(
    String VARCHAR(1000) NOT NULL
)
AS
BEGIN
    DECLARE
        @string VARCHAR(1000),
        @position SMALLINT
    SET @StringList = LTRIM(RTRIM(@StringList)) + @Delimiter
    SET @position = CHARINDEX(@Delimiter, @StringList)
    WHILE (@position > 0)
    BEGIN
        SET @string = LTRIM(RTRIM(LEFT(@StringList, @position - 1)))
        IF (@string <> '')
        BEGIN
            INSERT INTO @Results (String) VALUES (@string)
        END
        SET @StringList = RIGHT(@StringList, LEN(@StringList) - @position)
        SET @position = CHARINDEX(@Delimiter, @StringList, 1)
    END
    RETURN
END
I've been through the idea of doing an instring to determine if the row value exists in the selected filter values, but that's prone to partial matches (e.g. Car matches Carpet)
It sounds to me like you aren't including a unique ID, or possibly the primary key, as part of the values in your list box. Ideally each option will have a unique identifier that matches a column in the table you are searching on. If your listbox were like the one below, then you would be able to filter specifically for cars, because you would get the unique value 3.
<option value="3">Car</option>
<option value="4">Carpret</option>
Then you just build a where clause that will allow you to find the values you need.
Updated to answer the comment:
How would I do the related join considering that the user can select an arbitrary number of options from the list box? SELECT * FROM tblTable JOIN tblOptions ON tblTable.FK = ? The problem here is that I need to join on multiple values.
I answered a similar question here.
One method would be to build a temporary table and add each selected option as a row to the temporary table. Then you would simply do a join to your temporary table.
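For example, a sketch of the temporary-table approach (table and column names are illustrative):
CREATE TABLE #SelectedOptions (option_id INT PRIMARY KEY);
-- One row per option the user selected in the list box
INSERT INTO #SelectedOptions (option_id) VALUES (3), (17), (42);
SELECT t.*
FROM tblTable t
INNER JOIN #SelectedOptions o ON t.option_id = o.option_id;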
If you want to simply create your SQL dynamically, you can do something like this:
SELECT * FROM tblTable WHERE option IN (selected_option_1, selected_option_2, selected_option_n)
I've found that a CLR table-valued function which takes your delimited string and calls Split on the string (returning the array as the IEnumerable) is more performant than anything written in T-SQL (it starts to break down when you have around one million items in the delimited list, but that's much further out than the T-SQL solution).
And then, you could join on the table or check with EXISTS.
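For example, with a hypothetical CLR splitter dbo.SplitCLR(@string, @delimiter) returning a Value column (the name and signature here are assumptions, not an actual shipped function):
SELECT mt.my_column
FROM My_Table mt
WHERE EXISTS (SELECT 1
              FROM dbo.SplitCLR(@my_string, '|') s
              WHERE s.Value = mt.my_column);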