Sort nvarchar in SQL Server 2008

I have a table with this data in SQL Server:
Id
=====
1
12e
5
and I want to order this data like this:
id
====
1
5
12e
My id column is of type nvarchar(50) and I can't convert it to int.
Is it possible to sort the data in this way?

As a general rule, if you ever find yourself manipulating parts of columns, you're almost certainly doing it wrong.
If your ID is made up of a numeric and alpha component and you need to fiddle with just the numeric bit, make it two columns and save yourself some angst. In that case, you have an integral id_numeric and a varchar id_alpha and your query is simply:
select cast(id_numeric as varchar(20)) + id_alpha as id
from mytable
order by id_numeric asc
Or, if you really must store that as a single column, create extra columns to hold the individual parts and use those for sorting and selection. But, in order to mitigate the problems in having duplicate data in a row, use triggers to ensure the data remains consistent:
select id
from mytable
order by id_numeric asc
You usually don't want to have to do this splitting on every select since that never scales well. By doing it as an update/insert trigger, you only do the splitting when needed (i.e., when the data changes) and this cost is amortised across all the selects. That's a good idea because, in the vast majority of cases, databases are read far more often than they're written.
And it's perfectly normal practice to revert to lesser levels of normalisation for performance reasons, provided that you understand and mitigate the consequences.
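A minimal sketch of such a trigger (assumptions not in the original: the table is called mytable, its nvarchar id column holds values like '12e', id is unique, and the extra columns are id_numeric and id_alpha as above):
CREATE TRIGGER trg_mytable_split_id
ON mytable
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Recompute the split columns for the affected rows only.
    -- PATINDEX finds the first non-digit; appending 'x' guarantees a match,
    -- so purely numeric ids like '1' split cleanly into (1, '').
    -- Note: relies on the default RECURSIVE_TRIGGERS OFF setting.
    UPDATE t
    SET id_numeric = CAST(LEFT(i.id, PATINDEX('%[^0-9]%', i.id + 'x') - 1) AS INT),
        id_alpha   = SUBSTRING(i.id, PATINDEX('%[^0-9]%', i.id + 'x'), 50)
    FROM mytable t
    INNER JOIN inserted i ON i.id = t.id;
END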

I'd actually use something along the lines of this function, though be warned that it's not going to be super-speedy. I've modified that function to return only the numbers:
CREATE FUNCTION dbo.UDF_ParseNumericChars
(
    @string VARCHAR(8000)
)
RETURNS VARCHAR(8000)
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @IncorrectCharLoc SMALLINT
    SET @IncorrectCharLoc = PATINDEX('%[^0-9]%', @string)
    WHILE @IncorrectCharLoc > 0
    BEGIN
        SET @string = STUFF(@string, @IncorrectCharLoc, 1, '')
        SET @IncorrectCharLoc = PATINDEX('%[^0-9]%', @string)
    END
    RETURN @string
END
GO
Once you create that function, then you can do your sort like this:
SELECT YourMixedColumn
FROM YourTable
ORDER BY CONVERT(INT, dbo.UDF_ParseNumericChars(YourMixedColumn))

It can be sorted with the LEN function, using the value itself as a tie-breaker for equal lengths:
create table #temp (id nvarchar(50) null)
select * from #temp order by LEN(id), id
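A quick demonstration with the sample values from the question:
insert into #temp (id) values ('1'), ('12e'), ('5')
select * from #temp order by LEN(id), id
-- returns: 1, 5, 12e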

Related

How can I solve a performance issue in my stored procedure?

I have a performance problem with a stored procedure. When I checked my benchmark results, I realized that "MatchxxxReferencesByIds" has an Average LastElapsedTimeInSecond of 240.25 ms. How can I improve my procedure?
ALTER PROCEDURE [Common].[MatchxxxReferencesByIds]
(@refxxxIds VARCHAR(MAX),
@refxxxType NVARCHAR(250))
AS
BEGIN
SET NOCOUNT ON;
BEGIN TRAN
DECLARE @fake_tbl TABLE (xxxid NVARCHAR(50))
INSERT INTO @fake_tbl
SELECT LTRIM(RTRIM(split.a.value('.', 'NVARCHAR(MAX)'))) AS fqdn
FROM
(SELECT
CAST ('<M>' + REPLACE(@refxxxIds, ',', '</M><M>') + '</M>' AS XML) AS data
) AS a
CROSS APPLY
data.nodes ('/M') AS split(a)
SELECT [p].[ReferencedxxxId]
FROM [Common].[xxxReference] AS [p]
WHERE ([p].[IsDeleted] = 0)
AND (([p].[ReferencedxxxType] COLLATE Turkish_CI_AS = @refxxxType COLLATE Turkish_CI_AS)
AND [p].[ReferencedxxxId] COLLATE Turkish_CI_AS IN (SELECT ft.xxxid COLLATE Turkish_CI_AS FROM @fake_tbl ft))
COMMIT;
END;
One can only make assumptions without knowing the table's schema, indexes and data sizes.
Hard-coding collations can prevent the query optimizer from using any indexes on the ReferencedEntityId column. The field name and sample data '423423,423423,423432,23423' suggest this is a numeric column anyway (int? bigint?). The collation shouldn't be needed and the variable's column type should match the table's type.
Finally, a.value can return an int or bigint directly, which means the splitting query can be rewritten as:
declare @refEntityIds nvarchar(max) = '423423,423423,423432,23423';
DECLARE @fake_tbl TABLE (entityid bigint, INDEX IX_TBL(entityid))
INSERT INTO @fake_tbl
SELECT split.a.value('.', 'bigint') AS fqdn
FROM
(SELECT
CAST ('<M>' + REPLACE(@refEntityIds, ',', '</M><M>') + '</M>' AS XML) AS data
) AS a
CROSS APPLY
data.nodes ('/M') AS split(a)
The input data contains some duplicates, so entityid can't be a PRIMARY KEY; a plain index is used instead.
After that, the query can change to:
SELECT [p].[ReferencedEntityId]
FROM [Common].[EntityReference] AS [p]
WHERE [p].[IsDeleted] = 0
AND [p].[ReferencedEntityType] COLLATE Turkish_CI_AS = @refEntityType COLLATE Turkish_CI_AS
AND [p].[ReferencedEntityId] IN (SELECT ft.entityid FROM @fake_tbl ft)
The next problem is the hard-coded collation. Unless it matches the column's actual collation, this prevents the server from using any indexes that cover that column. How to fix this depends on the actual data statistics. Perhaps the column's collation has to change or perhaps the rows after filtering by ReferencedEntityId are so few that there's no benefit to this.
Finally, IsDeleted can't be usefully indexed on its own. It's either a bit column whose values are 1/0 or another numeric column that still contains only 0/1. An index that's so bad at selecting rows won't be used by the query optimizer, because it's actually faster to just scan the rows returned by the other conditions.
A general rule is to put the most selective index column first. The database combines all columns to create one "key" value and constructs a B+-tree index from it. The more selective the key, the fewer index nodes need to be scanned.
IsDeleted can still be used in a filtered index that indexes only the non-deleted rows. This allows the query optimizer to eliminate unwanted rows from the search. The resulting index will be smaller too, which means the same number of IO operations will load more of the index into memory and allow faster seeking.
All of this means that EntityReference should have an index like this one.
CREATE NONCLUSTERED INDEX IX_EntityReference_ReferenceEntityID
ON Common.EntityReference (ReferencedEntityId, ReferencedEntityType)
WHERE IsDeleted = 0;
If the collations don't match, ReferencedEntityType won't be used for seeking. If this is the most common case, we can remove ReferencedEntityType from the index key and put it in an INCLUDE clause. The field won't be part of the index key, although it will still be available for filtering without having to load data from the actual table:
CREATE NONCLUSTERED INDEX IX_EntityReference_ReferenceEntityID
ON Common.EntityReference (ReferencedEntityId)
INCLUDE(ReferencedEntityType)
WHERE IsDeleted = 0;
Of course, if that's the most common case, the column's collation should be changed instead.
Based on the execution plan of the stored procedure, what makes it perform slowly is the part that works with XML.
Let's rethink the solution:
I have created a table like this:
CREATE TABLE [Common].[EntityReference]
(
IsDeleted BIT,
ReferencedEntityType VARCHAR(100),
ReferencedEntityId VARCHAR(10)
);
GO
and populated it like this (inserting 1M records into it):
DECLARE @i INT = 1000000;
DECLARE @isDeleted BIT,
@ReferencedEntityType VARCHAR(100),
@ReferencedEntityId VARCHAR(10);
WHILE @i > 0
BEGIN
SET @isDeleted = (SELECT @i % 2);
SET @ReferencedEntityType = 'TEST' + CASE WHEN @i % 2 = 0 THEN '' ELSE CAST(@i % 2 AS VARCHAR(100)) END;
SET @ReferencedEntityId = CAST(@i AS VARCHAR(10));
INSERT INTO [Common].[EntityReference]
(
IsDeleted,
ReferencedEntityType,
ReferencedEntityId
)
VALUES (@isDeleted, @ReferencedEntityType, @ReferencedEntityId);
SET @i = @i - 1;
END;
Let's analyse your code:
You have a comma-delimited input (@refEntityIds) which you want to split and then run a query against those values. (Your SP's subtree cost on my PC is about 376.) To do so you have different approaches:
1. Pass a table variable to the stored procedure which contains the refEntityIds.
2. Make use of the STRING_SPLIT function to split the string.
Let's see the sample query:
INSERT INTO @fake_tbl
SELECT value
FROM STRING_SPLIT(@refEntityIds, ',');
Using this, you will gain a great performance improvement in your code (subtree cost: 6.19 without the following indexes), BUT this feature is not available in SQL Server 2008!
You can use a replacement for this function (read this: https://stackoverflow.com/a/54926996/1666800) and change your query to this (the subtree cost is still about 6.19):
INSERT INTO @fake_tbl
SELECT value FROM dbo.[fn_split_string_to_column](@refEntityIds, ',')
In this case again you will see a notable performance improvement.
You can also create a non-clustered index on the [Common].[EntityReference] table, which gives a small performance improvement too. But think it over before creating an index; it might have a negative impact on your DML operations:
CREATE NONCLUSTERED INDEX [Index Name] ON [Common].[EntityReference]
(
[IsDeleted] ASC
)
INCLUDE ([ReferencedEntityType],[ReferencedEntityId])
Without this index (and with your split solution replaced by mine), the subtree cost is 6.19. When I add the aforementioned index, the subtree cost decreases to 4.70, and when I change the index to the following one, the subtree cost rises to 5.16:
CREATE NONCLUSTERED INDEX [Index Name] ON [Common].[EntityReference]
(
[ReferencedEntityType] ASC,
[ReferencedEntityId] ASC
)
INCLUDE ([IsDeleted])
Thanks to @PanagiotisKanavos, the following index performs even better than the aforementioned ones (subtree cost: 3.95):
CREATE NONCLUSTERED INDEX IX_EntityReference_ReferenceEntityID
ON Common.EntityReference (ReferencedEntityId)
INCLUDE(ReferencedEntityType)
WHERE IsDeleted = 0;
Also please note that using a transaction against a local table variable has almost no effect, and you can probably simply drop it.
If [p].[ReferencedEntityId] is going to contain integers, then you don't need to apply the COLLATE clause; you can apply the IN condition directly.
You can convert the comma-separated values into a list of integers using a table-valued function; there are many samples around, and one is sketched below. Keep the datatype of the ID as integer, to avoid applying collations:
[p].[ReferencedEntityId] IN (SELECT ft.entityid FROM @fake_tbl ft)
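A minimal sketch of such a table-valued function (dbo.CsvToInts is an assumed name; any of the split approaches discussed above would work just as well):
CREATE FUNCTION dbo.CsvToInts (@list VARCHAR(MAX))
RETURNS @ids TABLE (entityid BIGINT NOT NULL)
AS
BEGIN
    -- walk the string between commas; assumes well-formed numeric items
    DECLARE @pos INT = 1, @next INT;
    SET @list = @list + ',';
    WHILE @pos <= LEN(@list)
    BEGIN
        SET @next = CHARINDEX(',', @list, @pos);
        IF @next > @pos
            INSERT INTO @ids (entityid)
            VALUES (CAST(LTRIM(RTRIM(SUBSTRING(@list, @pos, @next - @pos))) AS BIGINT));
        SET @pos = @next + 1;
    END;
    RETURN;
END
The lookup then stays integer-only and collation-free:
AND [p].[ReferencedEntityId] IN (SELECT ft.entityid FROM dbo.CsvToInts(@refEntityIds) ft)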
I don't think you need a TRAN. You are simply "shredding" your comma-separated values into a table variable and doing a SELECT. A TRAN is not needed here.
Try an EXISTS:
SELECT [p].[ReferencedEntityId]
FROM [Common].[EntityReference] AS [p]
WHERE ([p].[IsDeleted] = 0)
AND ([p].[ReferencedEntityType] COLLATE Turkish_CI_AS = @refEntityType COLLATE Turkish_CI_AS)
AND EXISTS (SELECT 1 FROM @fake_tbl ft WHERE ft.entityid COLLATE Turkish_CI_AS = [p].[ReferencedEntityId] COLLATE Turkish_CI_AS)
3. See https://www.sqlshack.com/efficient-creation-parsing-delimited-strings/ for different ways to parse your delimited string.
A quote from the article:
Microsoft’s built-in function provides a solution that is convenient
and appears to perform well. It isn’t faster than XML, but it clearly
was written in a way that provides an easy-to-optimize execution plan.
Logical reads are higher, as well. While we cannot look under the
covers and see exactly how Microsoft implemented this function, we at
least have the convenience of a function to split strings that are
shipped with SQL Server. Note that the separator passed into this
function must be of size 1. In other words, you cannot use
STRING_SPLIT with a multi-character delimiter, such as ‘”,”’.
Post a screenshot of your execution plan. If you don't have a proper index (or you have "hints" that prevent the use of indexes), your query will never perform well.

How to select a specific row in SQL from a badly designed schema?

I have a string in a column of a db schema I did not design, like this:
numbers column
--------------------
First: 1,2,33,34,43,5
Second: 1,2,3,4,5
Although I know this is not a best-practice scenario, I still want to select the row which contains only the value '3', not '33', '34', or '43'.
How could I select only second row?
SELECT *
FROM tblNumbers
WHERE numbers like '%,3,%' OR numbers like '3,%' OR numbers like '%,3'
This query selected both rows. How can I get just the second one?
Thanks.
You should be storing the values in a separate table, with one row per id and per number.
Sometimes, though, we are stuck with other people's bad data structures. If so, you can do what you want in this rather cumbersome way:
where replace(replace(numbers, '{', ','), '}', ',') like '%,3,%'
That is, put the delimiters around all the numbers in numbers.
Let me repeat, though: the proper way to store this data is in a separate table. If you need to store multiple values in a column like this, then do some research on XML and JSON formats (JSON is supported only in recent versions of SQL Server).
EDIT:
Exactly the same idea applies, the code is just simpler:
where ',' + numbers + ',' like '%,3,%'
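A quick self-contained check, using the two sample rows from the question, shows why the wrapping delimiters prevent partial matches:
CREATE TABLE #t (numbers VARCHAR(100));
INSERT INTO #t VALUES ('1,2,33,34,43,5');
INSERT INTO #t VALUES ('1,2,3,4,5');
SELECT *
FROM #t
WHERE ',' + numbers + ',' LIKE '%,3,%';
-- returns only '1,2,3,4,5'; neither ',33,' nor ',43,' contains ',3,'
And to illustrate the JSON suggestion above: if the values were instead stored as a JSON array such as '[1,2,3,4,5]' (this assumes SQL Server 2016+ and a reworked column, so it is a sketch, not a fix for the current schema), the lookup could use OPENJSON:
SELECT *
FROM tblNumbers
WHERE EXISTS (SELECT 1 FROM OPENJSON(numbers) WHERE value = '3');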
Did you try it like this?
SELECT *
FROM tblNumbers
WHERE number = '3' OR ReportedGMY = '3'
If you are storing numbers as integers:
SELECT *
FROM tblNumbers
WHERE number = 3
If you are storing them as strings:
SELECT *
FROM tblNumbers
WHERE number = '3'
It is bad practice to save comma-separated values in a column; this should be avoided as much as possible. If you really need to do it, it can be done using a user-defined function.
CREATE FUNCTION dbo.HasDigit (@String VARCHAR(MAX), @DigitToCheck INT, @Delimiter VARCHAR(10))
RETURNS BIT
AS
BEGIN
DECLARE @DelimiterPosition INT
DECLARE @Digit INT
DECLARE @ContainsDigit BIT = 0
WHILE CHARINDEX(@Delimiter, @String) > 0
BEGIN
SELECT @DelimiterPosition = CHARINDEX(@Delimiter, @String)
SELECT @Digit = CAST(SUBSTRING(@String, 1, @DelimiterPosition - 1) AS INT)
IF (@Digit = @DigitToCheck)
BEGIN
SET @ContainsDigit = 1
END
SELECT @String = SUBSTRING(@String, @DelimiterPosition + 1, LEN(@String) - @DelimiterPosition)
END
-- check the last item, which has no trailing delimiter
IF (LTRIM(RTRIM(@String)) <> '' AND CAST(@String AS INT) = @DigitToCheck)
BEGIN
SET @ContainsDigit = 1
END
RETURN @ContainsDigit
END;
GO
CREATE TABLE TEST (
Numbers VARCHAR(MAX),
COLUMNNAME VARCHAR(MAX)
)
GO
INSERT INTO TEST VALUES('First:', '1,2,33,34,43,5')
INSERT INTO TEST VALUES('Second:', ' 1,2,3,4,5')
GO
SELECT * FROM TEST WHERE dbo.HasDigit(COLUMNNAME, 3, ',') = 1
Output:
--Numbers COLUMNNAME
--------- ----------------
--Second: 1,2,3,4,5

SQL: SELECT number text based on a number

Background: I have an SQL database that contains a column (foo) of a text type, not integer. In the column I store integers in text form.
Question: Is it possible to SELECT the rows that contain (in the foo column) a number greater/less than n?
PS: I have a very good reason to store them as text form. Please refrain from commenting on that.
Update: (Forgot to mention) I am storing it in SQLite3.
SELECT foo
FROM Table
WHERE CAST(foo as int) > @n
select *
from tableName
where cast(textColumn as int) > 5
A simple CAST in the WHERE clause will work as long as you are sure that the data in the foo column is going to properly convert to an integer. If not, your SELECT statement will throw an error. I would suggest you add an extra step here and take out the non-numeric characters before casting the field to an int. Here is a link on how to do something similar:
http://blog.sqlauthority.com/2007/05/13/sql-server-udf-function-to-parse-alphanumeric-characters-from-string/
The only real modification you would need to make to this function would be to change the following line:
PATINDEX('%[^0-9A-Za-z]%', @string)
to
PATINDEX('%[^0-9]%', @string)
The results from that UDF should then be castable to an int without it throwing an error. It will further slow down your query, but it will be safer. You could even put your CAST inside the UDF and make it one call. The final UDF would look like this:
CREATE FUNCTION dbo.UDF_ParseAlphaChars
(
    @string VARCHAR(8000)
)
RETURNS int
AS
BEGIN
    DECLARE @IncorrectCharLoc SMALLINT
    SET @IncorrectCharLoc = PATINDEX('%[^0-9]%', @string)
    WHILE @IncorrectCharLoc > 0
    BEGIN
        SET @string = STUFF(@string, @IncorrectCharLoc, 1, '')
        SET @IncorrectCharLoc = PATINDEX('%[^0-9]%', @string)
    END
    RETURN CAST(@string as int)
END
GO
Your final SELECT statement would look something like this:
SELECT *
FROM Table
WHERE dbo.UDF_ParseAlphaChars(Foo) > 5
EDIT
Based upon the new information that the database is SQLite, the above probably won't work directly. I don't believe SQLite has native support for UDFs. You might be able to create a type of UDF using your programming language of choice (like this: http://www.christian-etter.de/?p=439)
The other option I see to safely get all of your data (an IsNumeric would exclude certain rows from your results, which might not be what you want) would probably be to create an extra column that has the int representation of the string. It is a little more dangerous in that you need to keep two fields in sync, but it will allow you to quickly sort and filter the table data.
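A sketch of that shadow-column approach in SQLite (mytable, foo, and foo_int are illustrative names; the triggers are what keep the two fields in sync):
ALTER TABLE mytable ADD COLUMN foo_int INTEGER;
UPDATE mytable SET foo_int = CAST(foo AS INTEGER); -- backfill existing rows

CREATE TRIGGER mytable_foo_ins AFTER INSERT ON mytable
BEGIN
  UPDATE mytable SET foo_int = CAST(NEW.foo AS INTEGER) WHERE rowid = NEW.rowid;
END;

CREATE TRIGGER mytable_foo_upd AFTER UPDATE OF foo ON mytable
BEGIN
  UPDATE mytable SET foo_int = CAST(NEW.foo AS INTEGER) WHERE rowid = NEW.rowid;
END;

-- queries can then filter and sort on the synced integer column directly:
SELECT * FROM mytable WHERE foo_int > 2000;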
SELECT *
FROM Table
WHERE CAST(foo as int) > 2000

How hard would you try to make your SQL queries secure?

I am in a situation where I am given a comma-separated VarChar as input to a stored procedure. I want to do something like this:
SELECT * FROM tblMyTable
INNER JOIN /*Bunch of inner joins here*/
WHERE ItemID IN (@MyList);
However, you can't use a VarChar with the IN statement. There are two ways to get around this problem:
(The Wrong Way) Create the SQL query in a String, like so:
DECLARE @SQL VARCHAR(MAX);
SET @SQL = '
SELECT * FROM tblMyTable
INNER JOIN /*Bunch of inner joins here*/
WHERE ItemID IN (' + @MyList + ')';
EXEC(@SQL);
(The Right Way) Create a temporary table that contains the values of $MyList, then join that table in the initial query.
My question is:
Option 2 has a relatively large performance hit with creating a temporary table, which is less than ideal.
While Option 1 is open to an SQL injection attack, since my SPROC is being called from an authenticated source, does it really matter? Only trusted sources will execute this SPROC, so if they choose to bugger up the database, that is their prerogative.
So, how far would you go to make your code secure?
What database are you using? In SQL Server you can create a split function that can split a long string and return a table in well under a second. You use the table function like a regular table in a query (no temp table necessary).
You need to create a split function, or if you already have one, just use it. This is how a split function can be used:
SELECT
*
FROM YourTable y
INNER JOIN dbo.yourSplitFunction(@Parameter) s ON y.ID = s.Value
I prefer the Numbers table approach to splitting a string in TSQL, but there are numerous ways to split strings in SQL Server; see the previous link, which explains the PROs and CONs of each.
For the Numbers table method to work, you need to do this one-time table setup, which creates a table Numbers containing the integers 1 to 10,000:
SELECT TOP 10000 IDENTITY(int,1,1) AS Number
INTO Numbers
FROM sys.objects s1
CROSS JOIN sys.objects s2
ALTER TABLE Numbers ADD CONSTRAINT PK_Numbers PRIMARY KEY CLUSTERED (Number)
Once the Numbers table is set up, create this split function:
CREATE FUNCTION [dbo].[FN_ListToTable]
(
@SplitOn char(1) --REQUIRED, the character to split the @List string on
,@List varchar(8000)--REQUIRED, the list to split apart
)
RETURNS TABLE
AS
RETURN
(
----------------
--SINGLE QUERY-- --this will not return empty rows
----------------
SELECT
ListValue
FROM (SELECT
LTRIM(RTRIM(SUBSTRING(List2, number + 1, CHARINDEX(@SplitOn, List2, number + 1) - number - 1))) AS ListValue
FROM (
SELECT @SplitOn + @List + @SplitOn AS List2
) AS dt
INNER JOIN Numbers n ON n.Number < LEN(dt.List2)
WHERE SUBSTRING(List2, number, 1) = @SplitOn
) dt2
WHERE ListValue IS NOT NULL AND ListValue != ''
);
GO
You can now easily split a CSV string into a table and join on it:
select * from dbo.FN_ListToTable(',','1,2,3,,,4,5,6777,,,')
OUTPUT:
ListValue
-----------------------
1
2
3
4
5
6777
(6 row(s) affected)
You can use the CSV string like this; no temp table necessary:
SELECT * FROM tblMyTable
INNER JOIN /*Bunch of inner joins here*/
WHERE ItemID IN (select ListValue from dbo.FN_ListToTable(',', @MyList));
I would personally prefer option 2: just because a source is authenticated does not mean you should let your guard down. You would leave yourself open to potential rights escalation, where an authenticated low-level user is still able to execute commands against the database that you had not intended.
The phrase you use, "trusted sources": it might be better to assume an X-Files approach and trust no-one.
If someone buggers up the database you might still be getting a call.
A good option, similar to option two, is to use a function to create a table in memory from the CSV list. It is reasonably fast and offers the protections of option two. That table can then be joined in the inner join, e.g.
CREATE FUNCTION [dbo].[simple_strlist_to_tbl] (@list nvarchar(MAX))
RETURNS @tbl TABLE (str varchar(4000) NOT NULL) AS
BEGIN
DECLARE @pos int,
@nextpos int,
@valuelen int
SELECT @pos = 0, @nextpos = 1
WHILE @nextpos > 0
BEGIN
SELECT @nextpos = charindex(',', @list, @pos + 1)
SELECT @valuelen = CASE WHEN @nextpos > 0
THEN @nextpos
ELSE len(@list) + 1
END - @pos - 1
INSERT @tbl (str)
VALUES (substring(@list, @pos + 1, @valuelen))
SELECT @pos = @nextpos
END
RETURN
END
Then in the join:
tblMyTable INNER JOIN
simple_strlist_to_tbl(@MyList) list ON tblMyTable.itemId = list.str
Option 3 is to confirm each item in the list is in fact an integer before concatenating the string into your SQL statement.
Do this by parsing the input string (e.g., splitting it into an array), looping through and converting each value to an int, and then recreating the list yourself before concatenating it back into the SQL statement. This gives you reasonable assurance that SQL injection cannot occur.
It is safer to concatenate strings that have been created by your application, because you can do things like check for ints, but it also means your code is written in a way that a subsequent developer may modify slightly, thereby reopening the risk of SQL injection, because they do not realize that is what your code is protecting against. Make sure you comment well what you are doing if you go this route.
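The same integer check can also be done in T-SQL just before the concatenation, reusing the FN_ListToTable function from the answer above (a sketch; the error message is illustrative):
IF EXISTS (SELECT 1
           FROM dbo.FN_ListToTable(',', @MyList)
           WHERE ListValue LIKE '%[^0-9]%')
BEGIN
    RAISERROR('Non-numeric value found in the id list.', 16, 1);
    RETURN;
END
-- @MyList is now known to contain only digits and commas,
-- so concatenating it into dynamic SQL is reasonably safe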
A third option: pass the values to the stored procedure in an array. Then you can either assemble the comma-separated string in your code and use the dynamic SQL option, or (if your flavour of RDBMS permits it) use the array directly in the SELECT statement.
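In SQL Server 2008 and later, table-valued parameters do exactly this (a sketch; the type and procedure names are assumptions):
CREATE TYPE dbo.IdList AS TABLE (ItemID INT NOT NULL PRIMARY KEY);
GO
CREATE PROCEDURE dbo.GetItemsByIds
    @Ids dbo.IdList READONLY
AS
BEGIN
    SET NOCOUNT ON;
    -- no string handling at all, so nothing to inject into
    SELECT t.*
    FROM tblMyTable t
    INNER JOIN @Ids i ON i.ItemID = t.ItemID;
END
The client fills the parameter row by row instead of building a string, which removes the injection risk entirely.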
Why don't you write a CLR split function that will do all the job nicely and easily? You can write user-defined table functions that return a table, doing the string splitting with the .NET infrastructure. In SQL 2008 you can even give them hints if they return the strings sorted in some way (ascending, for example), which can help the optimizer.
If you can't do CLR integration, then you have to stick to T-SQL, but I personally would go for the CLR solution.

Filtering With Multi-Select Boxes With SQL Server

I need to filter result sets from SQL Server based on selections from a multi-select list box. I've been through the idea of doing an in-string match to determine if the row value exists in the selected filter values, but that's prone to partial matches (e.g. Car matches Carpet).
I also went through splitting the string into a table and joining/matching based on that, but I have reservations about how that is going to perform.
Seeing as this is a seemingly common task, I'm looking to the Stack Overflow community for some feedback and maybe a couple suggestions on the most commonly utilized approach to solving this problem.
I solved this one by writing a table-valued function (we're using 2005) which takes a delimited string and returns a table. You can then join to that or use WHERE EXISTS or WHERE x IN. We haven't done full stress testing yet, but with limited use and reasonably small sets of items I think that performance should be ok.
Below is one of the functions as a starting point for you. I also have one written to specifically accept a delimited list of INTs for ID values in lookup tables, etc.
Another possibility is to use LIKE with delimiters to make sure that partial matches are ignored, but you can't use indexes with that, so performance will be poor for any large table. For example:
SELECT
my_column
FROM
My_Table
WHERE
@my_string LIKE '%|' + my_column + '|%'
/*
Name: GetTableFromStringList
Description: Returns a table of values extracted from a delimited list
Parameters:
#StringList - A delimited list of strings
#Delimiter - The delimiter used in the delimited list
History:
Date Name Comments
---------- ------------- ----------------------------------------------------
2008-12-03 T. Hummel Initial Creation
*/
CREATE FUNCTION dbo.GetTableFromStringList
(
@StringList VARCHAR(1000),
@Delimiter CHAR(1) = ','
)
RETURNS @Results TABLE
(
String VARCHAR(1000) NOT NULL
)
AS
BEGIN
DECLARE
@string VARCHAR(1000),
@position SMALLINT
SET @StringList = LTRIM(RTRIM(@StringList)) + @Delimiter
SET @position = CHARINDEX(@Delimiter, @StringList)
WHILE (@position > 0)
BEGIN
SET @string = LTRIM(RTRIM(LEFT(@StringList, @position - 1)))
IF (@string <> '')
BEGIN
INSERT INTO @Results (String) VALUES (@string)
END
SET @StringList = RIGHT(@StringList, LEN(@StringList) - @position)
SET @position = CHARINDEX(@Delimiter, @StringList, 1)
END
RETURN
END
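For example, joining the function against the My_Table/my_column pair from the LIKE example above (assuming a comma-delimited input string):
SELECT t.my_column
FROM My_Table t
INNER JOIN dbo.GetTableFromStringList(@my_string, ',') s
    ON t.my_column = s.String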
I've been through the idea of doing an
instring to determine if the row value
exists in the selected filter values,
but that's prone to partial matches
(e.g. Car matches Carpet)
It sounds to me like you aren't including a unique ID, possibly the primary key, as part of the values in your list box. Ideally each option will have a unique identifier that matches a column in the table you are searching. If your listbox were like the one below, you would be able to filter specifically for cars because you would get the unique value 3.
<option value="3">Car</option>
<option value="4">Carpret</option>
Then you just build a where clause that will allow you to find the values you need.
Updated to answer a comment:
How would I do the related join considering that the user can select an arbitrary number of options from the list box? SELECT * FROM tblTable JOIN tblOptions ON tblTable.FK = ? The problem here is that I need to join on multiple values.
I answered a similar question here.
One method would be to build a temporary table and add each selected option as a row to the temporary table. Then you would simply do a join to your temporary table.
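A sketch of that temporary-table variant, joining on the tblTable.FK column mentioned in the comment (the temp table name is illustrative):
CREATE TABLE #SelectedOptions (OptionId INT NOT NULL PRIMARY KEY);
-- one INSERT per option the user selected, e.g. Car = 3, Carpet = 4
INSERT INTO #SelectedOptions (OptionId) VALUES (3);
INSERT INTO #SelectedOptions (OptionId) VALUES (4);

SELECT t.*
FROM tblTable t
INNER JOIN #SelectedOptions s ON s.OptionId = t.FK;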
If you want to simply create your SQL dynamically, you can do something like this:
SELECT * FROM tblTable WHERE option IN (selected_option_1, selected_option_2, selected_option_n)
I've found that a CLR table-valued function which takes your delimited string and calls Split on it (returning the array as the IEnumerable) is more performant than anything written in T-SQL (it starts to break down when you have around one million items in the delimited list, but that's much further out than the T-SQL solutions).
And then you can join on the table or check with EXISTS.
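The T-SQL side of that would look something like this (dbo.clr_split and its Item column are hypothetical names for the CLR table-valued function described above):
SELECT t.*
FROM tblMyTable t
WHERE EXISTS (SELECT 1
              FROM dbo.clr_split(@MyList, ',') s
              WHERE s.Item = t.ItemID);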