Split String Value in SQL for Flexible Filtering - sql

I have a function wherein the user can assign multiple categories (Food, Non-Food, etc.) to a certain tenant. See the sample data table:
Table: tblSales
date        tenant     sales    category
1/1/2015    tenant1    1000     Food,Non-Food,Kiosk
1/1/2015    tenant2    2000     Food
1/1/2015    tenant3    1000     Non-Food,Kiosk
The system should load a record when the user selects any of the categories listed in its category column.
For example, if the user selects the categories Non-Food,Kiosk, the expected result should be:
date        tenant     sales    category
1/1/2015    tenant1    1000     Food,Non-Food,Kiosk
1/1/2015    tenant3    1000     Non-Food,Kiosk
Non-Food and Kiosk both appear for tenant1 and tenant3.
So I think the process should first be string manipulation on the value of the category column, splitting each word on the comma delimiter. I have code, but it does not work correctly:
DECLARE @Category nvarchar(500) = 'Non-Food,Kiosk' -- user selected

SELECT date, tenant, sales, category
FROM tblSales
WHERE (category IN (SELECT val FROM dbo.split(@Category, @delimiter)))
That does not work because it splits the user-selected categories rather than the value stored in the data itself. I tried this:
DECLARE @Category nvarchar(500) = 'Non-Food,Kiosk' -- user selected

SELECT date, tenant, sales, category
FROM tblSales
WHERE ((SELECT val FROM dbo.split(category, @delimiter)) IN (SELECT val FROM dbo.split(@Category, @delimiter)))
But it resulted in this error:
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.

In addition to Tim's answer (he is absolutely right about CSV fields in databases!), please note that SQL Server 2016 introduced the STRING_SPLIT function. For a single category it's as simple as:
SELECT
    date
    ,tenant
    ,sales
    ,category
FROM tblSales
WHERE @Category IN (SELECT value FROM STRING_SPLIT(category, ','))
For a comma-delimited list of categories you have to use it twice, together with EXISTS:
WHERE EXISTS
(
    SELECT *
    FROM STRING_SPLIT(category, ',')
    WHERE value IN (SELECT value FROM STRING_SPLIT(@Category, ','))
)
If you're using an older SQL Server version you can write your own STRING_SPLIT function; take a look at T-SQL split string. You can use that function with the same syntax as above (please note I wrote this code here and it's untested, so you may need some fixes).
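For reference, here is a rough, untested sketch of such a do-it-yourself split function using the XML trick that also appears later in this thread; the name dbo.split and the output column val are only assumptions that match the call in the question, and the values must not contain XML special characters such as & or <:
CREATE FUNCTION dbo.split (@List NVARCHAR(MAX), @Delimiter NCHAR(1))
RETURNS TABLE
AS
RETURN
(
    -- turn 'a,b,c' into <e>a</e><e>b</e><e>c</e> and shred it back into rows
    SELECT val = x.n.value('.', 'NVARCHAR(MAX)')
    FROM (SELECT CAST('<e>' + REPLACE(@List, @Delimiter, '</e><e>') + '</e>' AS XML) AS doc) AS d
    CROSS APPLY d.doc.nodes('e') AS x(n)
);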
A note about performance: from the query plan you can check how the sub-queries will be executed. From a naive point of view I'd say CTEs, temp tables and sub-queries have roughly the same performance in this simple case, but if this code is performance critical you'd better run a benchmark (with real data and a real-world access scenario).

In general, it is bad practice to store CSV data in a database column because, as you are currently seeing, it renders many of the advantages of a database unusable.
However, I think you might be able to get away with just using LIKE. Assuming the user selected the categories Non-Food and Kiosk, you could try the following query:
SELECT date,
       tenant,
       sales,
       category
FROM tblSales
WHERE category LIKE '%Non-Food%' OR
      category LIKE '%Kiosk%'
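One caveat: with plain wildcards, a pattern like '%Food%' would also match 'Non-Food'. A common workaround (an untested sketch) is to pad both the column and the search term with the delimiter so that only whole categories match:
SELECT date,
       tenant,
       sales,
       category
FROM tblSales
WHERE ',' + category + ',' LIKE '%,Non-Food,%' OR
      ',' + category + ',' LIKE '%,Kiosk,%'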

Try the code below.
Create a function to split delimited strings:
CREATE FUNCTION SplitWords
(
    @Input NVARCHAR(MAX),
    @Character CHAR(1)
)
RETURNS @Output TABLE (
    Item NVARCHAR(1000)
)
AS
BEGIN
    DECLARE @StartIndex INT, @EndIndex INT
    SET @StartIndex = 1

    -- append a trailing delimiter if the input does not already end with one
    IF RIGHT(@Input, 1) <> @Character
    BEGIN
        SET @Input = @Input + @Character
    END

    WHILE CHARINDEX(@Character, @Input) > 0
    BEGIN
        SET @EndIndex = CHARINDEX(@Character, @Input)

        INSERT INTO @Output(Item)
        SELECT SUBSTRING(@Input, @StartIndex, @EndIndex - 1)

        -- drop the extracted token (and its delimiter) and continue
        SET @Input = SUBSTRING(@Input, @EndIndex + 1, LEN(@Input))
    END

    RETURN
END
GO
Create an input table variable inside your procedure/script and keep the split data in it. Here your input is @Category:
DECLARE @input TABLE (item VARCHAR(50))

INSERT INTO @input
SELECT Item
FROM [dbo].SplitWords(@Category, ',')
Make a join with your actual table using the LIKE operator:
SELECT DISTINCT s.date,
       s.tenant,
       s.sales,
       s.category
FROM tblSales s
JOIN @input a
    ON s.category LIKE '%' + a.item + '%'
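Putting the three steps together, an untested end-to-end sketch using the sample data from the question would look like this:
DECLARE @Category NVARCHAR(500) = 'Non-Food,Kiosk'  -- user selected

DECLARE @input TABLE (item VARCHAR(50))
INSERT INTO @input
SELECT Item FROM dbo.SplitWords(@Category, ',')

SELECT DISTINCT s.date, s.tenant, s.sales, s.category
FROM tblSales s
JOIN @input a
    ON s.category LIKE '%' + a.item + '%'
-- with the sample data this should return the tenant1 and tenant3 rows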

You can try the following SQL SELECT statement, where I used my own user-defined SQL function for the string-split task:
declare @Category nvarchar(500) = 'Non-Food,Kiosk'
declare @cnt int = (select COUNT(*) from dbo.SPLIT(@Category,','))

;with cte as (
    select
        t.*, COUNT(*) over (partition by tenant) cnt
    from dbo.SPLIT(@Category,',') u
    inner join (
        select
            tblSales.*, c.val
        from tblSales
        cross apply dbo.SPLIT(tblSales.category,',') c
    ) t on u.val = t.val
)
select distinct tenant from cte where cnt = @cnt
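To try this against the sample data from the question (assuming dbo.SPLIT returns a column named val, as used above):
CREATE TABLE tblSales (date DATE, tenant VARCHAR(20), sales INT, category NVARCHAR(500));
INSERT INTO tblSales VALUES
    ('2015-01-01', 'tenant1', 1000, 'Food,Non-Food,Kiosk'),
    ('2015-01-01', 'tenant2', 2000, 'Food'),
    ('2015-01-01', 'tenant3', 1000, 'Non-Food,Kiosk');
-- With @Category = 'Non-Food,Kiosk' (so @cnt = 2), only tenant1 and tenant3
-- contain both selected categories, so the final SELECT returns those two tenants.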

Related

How to get query result with stored procedure (convert item quantity from one table into my unit defined in second table)

I have two MSSQL2008 tables like this:
I have a problem with the unit conversion logic.
The result I expect looks like this:
1589 cigar = 1ball, 5slop, 8box, 2pcs
52 pen = 2box, 12pcs
Basically I'm trying to take a number (qty) from one table and convert (split) it into the units which I defined in the other table.
Note: both tables can have new rows and new data added (dynamic).
How can I get these results through a SQL stored procedure?
I totally misunderstood the question last time, so the previous answer has been removed (you can see it in the edit history, but it's not relevant for this question). However, I have come up with a solution that may solve your problem.
NOTE: one little thing about this solution: if you enter a value in the second table like this
+--------+-------+
| Item | qty |
+--------+-------+
| 'cigar'| 596 |
+--------+-------+
the result for this row will be
596cigar = 0ball, 5slop, 8box, 0pcs
Note that ball and pcs still appear even though their value is 0; that can probably be fixed if you don't want to show zero values, but I'll let you play with that.
So let's get back to the solution and code. The solution has two stored procedures; the first one is the main one, the one you execute. I call it sp_MainProcedureConvertMe. Here is the code for that procedure:
CREATE PROCEDURE sp_MainProcedureConvertMe
AS
    DECLARE @srcTable TABLE(srcId INT IDENTITY(1, 1), srcItem VARCHAR(50), srcQty INT)
    DECLARE @xTable TABLE(xId INT IDENTITY(1, 1), xVal1 VARCHAR(1000), xVal2 VARCHAR(1000))
    DECLARE @maxId INT
    DECLARE @start INT = 1
    DECLARE @sItem VARCHAR(50)
    DECLARE @sQty INT
    DECLARE @val1 VARCHAR(1000)
    DECLARE @val2 VARCHAR(1000)

    INSERT INTO @srcTable (srcItem, srcQty)
    SELECT item, qty
    FROM t2

    SELECT @maxId = (SELECT MAX(srcId) FROM @srcTable)

    WHILE @start <= @maxId
    BEGIN
        SELECT @sItem = (SELECT srcItem FROM @srcTable WHERE srcId = @start)
        SELECT @sQty = (SELECT srcQty FROM @srcTable WHERE srcId = @start)
        SELECT @val1 = (CAST(@sQty AS VARCHAR) + @sItem)

        EXECUTE sp_ConvertMeIntoUnit @sItem, @sQty, @val2 OUTPUT

        INSERT INTO @xTable (xVal1, xVal2)
        VALUES (@val1, @val2)

        SELECT @start = (@start + 1)
        CONTINUE
    END

    SELECT xVal1 + ' = ' + xVal2 FROM @xTable
GO
This stored procedure has two table variables. @srcTable is basically your second table, but instead of using your table's own id it creates a new srcId that goes from 1 upward and is auto-incremented; this is done for the WHILE loop, to avoid any problems when some rows have been deleted, so we can be sure there won't be any skipped numbers.
There are a few more variables; some are used to make the WHILE loop work, and the others store data. I think it's not hard to figure out from the code what they are used for.
The WHILE loop iterates over all rows of @srcTable, takes the values, processes them, and inserts them into @xTable, which basically holds the result.
Inside the WHILE loop we execute the second stored procedure, whose task is to calculate how many of each unit fit into a specific quantity of an item. I call it sp_ConvertMeIntoUnit, and here is its code:
CREATE PROCEDURE sp_ConvertMeIntoUnit
    @inItemName VARCHAR(50),
    @inQty INT,
    @myResult VARCHAR(5000) OUT
AS
    DECLARE @rTable TABLE(rId INT IDENTITY(1, 1), rUnit VARCHAR(50), rQty INT)
    DECLARE @yTable TABLE(yId INT IDENTITY(1, 1), yVal INT, yRest INT)
    DECLARE @maxId INT
    DECLARE @start INT = 1
    DECLARE @quentity INT = @inQty
    DECLARE @divider INT
    DECLARE @quant INT
    DECLARE @rest INT
    DECLARE @result VARCHAR(5000)

    INSERT INTO @rTable(rUnit, rQty)
    SELECT unit, qty
    FROM t1
    WHERE item = @inItemName
    ORDER BY qty DESC

    SELECT @maxId = (SELECT MAX(rId) FROM @rTable)

    WHILE @start <= @maxId
    BEGIN
        SELECT @divider = (SELECT rQty FROM @rTable WHERE rId = @start)
        SELECT @quant = (@quentity / @divider)
        SELECT @rest = (@quentity % @divider)

        INSERT INTO @yTable(yVal, yRest)
        VALUES (@quant, @rest)

        SELECT @quentity = @rest
        SELECT @start = (@start + 1)
        CONTINUE
    END

    SELECT @result = COALESCE(@result + ', ', '') + CAST(y.yVal AS VARCHAR) + r.rUnit
    FROM @rTable AS r
    INNER JOIN @yTable AS y ON r.rId = y.yId

    SELECT @myResult = @result
GO
This procedure has three parameters: it takes two input parameters from the first procedure, and one is returned as the result (OUTPUT). The input parameters are the item and the quantity.
There are also two table variables. In @rTable we store values keyed by rId, which is auto-incremented and always goes from 1 upward no matter what the ids are in the first table. The other two columns are filled from your first table based on the @inItemName parameter, which is passed in from the first procedure. From your first table we take the unit and quantity and store them, together with rId, in @rTable, ordered by qty from largest to smallest. This is the part of the code that does that:
INSERT INTO @rTable(rUnit, rQty)
SELECT unit, qty
FROM t1
WHERE item = @inItemName
ORDER BY qty DESC
Then we go into the WHILE loop where we do some math. Basically we store values from @rTable into the variable @divider. In the first iteration we take the biggest value, calculate how many times it is contained in the number (the second parameter passed from the first procedure, which is qty from your second table), and store that in @quant; then we also calculate the modulo and store it in the variable @rest. This line:
SELECT @rest = (@quentity % @divider)
After that we insert our values into @yTable. Before we end the iteration of the WHILE loop we assign the value of @rest to the @quentity variable, because from then on we only need to work with the remainder, not the whole quantity. In the second iteration we take the next number (the second largest in our @rTable) and the procedure repeats itself.
When the WHILE loop finishes we build a string. This line here:
SELECT @result = COALESCE(@result + ', ', '') + CAST(y.yVal AS VARCHAR) + r.rUnit FROM @rTable AS r INNER JOIN @yTable AS y ON r.rId = y.yId
This is the line to change if you want to exclude results with 0 (which I mentioned at the beginning of the answer).
And at the end we store the result in the output variable @myResult.
This stored procedure returns a string like this:
+--------------------------+
| 1ball, 5slop, 8box, 2pcs |
+--------------------------+
Hope I didn't miss anything important. Basically the only thing you should change here is the table names and their columns (if they are different): in the first stored procedure, replace t2 here:
INSERT INTO...
SELECT item, qty
FROM t2
And in the second one, replace t1 (and the columns if needed) here:
INSERT INTO...
SELECT unit, qty
FROM t1
WHERE item = #inItemName
ORDER BY qty DESC
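For completeness, here is a hypothetical setup with table and unit definitions inferred from the expected breakdown shown in the question, followed by the call (an untested sketch; your real table and column names may differ):
CREATE TABLE t1 (item VARCHAR(50), unit VARCHAR(50), qty INT);
CREATE TABLE t2 (item VARCHAR(50), qty INT);

INSERT INTO t1 VALUES
    ('cigar', 'ball', 1000), ('cigar', 'slop', 100), ('cigar', 'box', 12), ('cigar', 'pcs', 1),
    ('pen', 'box', 20), ('pen', 'pcs', 1);
INSERT INTO t2 VALUES ('cigar', 1598), ('pen', 52);

EXECUTE sp_MainProcedureConvertMe;
-- expected rows: '1598cigar = 1ball, 5slop, 8box, 2pcs' and '52pen = 2box, 12pcs'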
Hope this helps a little or gives you an idea of how this can be solved.
GL!
You seem to want string aggregation – something that does not have a simple instruction in Transact-SQL and is usually implemented using a correlated FOR XML subquery.
You have not provided names for your tables. For the purpose of the following example, the first table is called ItemDetails and the second one, Items:
SELECT
    i.item,
    i.qty,
    details = (
        SELECT
            ', ' + CAST(d.qty AS varchar(10)) + d.unit
        FROM
            dbo.ItemDetails AS d
        WHERE
            d.item = i.item
        FOR XML
            PATH (''), TYPE
    ).value('substring(./text()[1], 3)', 'nvarchar(max)')
FROM
    dbo.Items AS i
;
For the input provided in the question, the above query would return the following output:
item qty details
----- ----------- ------------------------------
cigar 1598 1pcs, 1000ball, 12box, 100slop
pen 52 1pcs, 20box
You can further arrange the data into strings as per your requirement. I would recommend you do it in the calling application and use SQL only as your data source. However, if you must, you can do the concatenation in SQL as well.
Note that the above query assumes that the same unit does not appear more than once per item in ItemDetails. If it does and you want to aggregate qty values per unit before producing the detail line, you will need to change the query a little:
SELECT
    i.item,
    i.qty,
    details = (
        SELECT
            ', ' + CAST(SUM(d.qty) AS varchar(10)) + d.unit
        FROM
            dbo.ItemDetails AS d
        WHERE
            d.item = i.item
        GROUP BY
            d.unit
        FOR XML
            PATH (''), TYPE
    ).value('substring(./text()[1], 3)', 'nvarchar(max)')
FROM
    dbo.Items AS i
;
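On SQL Server 2017 and later, STRING_AGG offers a simpler alternative to the FOR XML PATH trick; here is a sketch under the same assumed table names:
SELECT
    i.item,
    i.qty,
    details = (
        -- aggregate '<qty><unit>' pairs into one comma-separated string per item
        SELECT STRING_AGG(CAST(d.qty AS varchar(10)) + d.unit, ', ')
        FROM dbo.ItemDetails AS d
        WHERE d.item = i.item
    )
FROM dbo.Items AS i;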

Checking existence of all words of a column of table 1 in another column of table 2

I have a table which contains a product_name field, and another table with models.
===products
product_id, product_name
===models
model_id, model_name
I am looking for a way to do the following.
Model names can have words separated by hyphens, e.g. JVC-600-BLACK.
For each model I need to check for the existence of each word of the model in the product name.
I need the result in a form like the one below.
== results
model_id, product_id
If someone can point me in right direction, that would be a great help.
Notes
These are huge tables with millions of records, and the number of words in model_name is not fixed.
Words of the model may appear in any order in the product name, possibly with other words in between.
Here's a function that splits the first string into parts using - as a delimiter and looks up each part in the second string, returning 1 if all parts were found and 0 otherwise.
CREATE FUNCTION dbo.func(@str1 varchar(max), @str2 varchar(max))
RETURNS BIT
AS
BEGIN
    DECLARE @pos INT, @newPos INT,
            @delimiter NCHAR(1)
    SET @delimiter = '-'
    SET @pos = 1
    SET @newPos = 0

    WHILE (@newPos < LEN(@str1))
    BEGIN
        SET @newPos = CHARINDEX(@delimiter, @str1, @pos)
        IF @newPos = 0
            SET @newPos = LEN(@str1) + 1

        DECLARE @data2 NVARCHAR(MAX)
        SET @data2 = SUBSTRING(@str1, @pos, @newPos - @pos)

        IF CHARINDEX(@data2, @str2) = 0
            RETURN 0

        SET @pos = @newPos + 1
        IF @newPos = 0
            BREAK
    END
    RETURN 1
END
You can use the above function for your problem as follows:
SELECT model_id, product_id
FROM models
JOIN products
ON dbo.func(models.model_name, products.product_name) = 1
It's not going to be fast, but I don't think a fast solution exists, since your problem doesn't allow for indexing. It may be possible to change the database structure to allow for this, but how exactly this can be done largely depends on what your data looks like.
I don't know if this solution is faster; that's for you to check if you care:
--=======================
-- sample data
--=======================
declare @Products table
(
    product_id int,
    product_name nvarchar(max)
)
insert into @Products select 1, 'sdfsd def1 abc1klm1 sdljkfd'
insert into @Products select 2, 'sdfsd def2 abc2klm2 sdljkfd'
insert into @Products select 3, 'sdfsd def3 abc3klm3 sdljkfd'

declare @Models table
(
    model_id int,
    model_name nvarchar(max)
)
insert into @Models select 1, 'abc1-def1-klm1'
insert into @Models select 2, 'abc2-def2-klm2'
insert into @Models select 3, 'abc3-def3-klm3'

--=======================
-- solution
--=======================
select t1.product_id, t2.model_id from @Products t1
cross join (
    select
        t1.model_id, Word = t2.r.value('.', 'nvarchar(max)')
    from (select model_id, x = cast('<e>' + replace(model_name, '-', '</e><e>') + '</e>' as xml) from @Models) t1
    cross apply x.nodes('e') as t2 (r)
) t2
group by product_id, model_id
having min(charindex(word, product_name)) != 0
You may want to consider using the Full Text Search feature of SQL Server. In a nutshell, it catalogs all of the words (ignoring noise words like "and", "or", "a" and "the" by default, though this list of noise words is configurable) in the tables and columns you specify when setting up the Full Text Catalog, and it offers a handful of functions that let you use that catalog to quickly find rows.
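For example, once a full-text index has been created on products(product_name) (a hypothetical setup, not shown here), a query along these lines finds products containing all words of one model:
-- assumes a full-text catalog and index already exist on products(product_name)
SELECT p.product_id
FROM products AS p
WHERE CONTAINS(p.product_name, '"JVC" AND "600" AND "BLACK"');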

Finding strings with duplicate letters inside

Can somebody help me with this little task? What I need is a stored procedure that can find duplicate letters (in a row) in a string from a table "a" and after that make a new table "b" with just the id of the string that has a duplicate letter.
Something like this:
Table A
ID Name
1 Matt
2 Daave
3 Toom
4 Mike
5 Eddie
And from that table I can see that Daave, Toom, Eddie have duplicate letters in a row and I would like to make a new table and list their ID's only. Something like:
Table B
ID
2
3
5
Only 2, 3, 5, because those are the IDs of the names that contain duplicate letters.
I hope this is understandable and would be very grateful for any help.
In your answer with the stored procedure, you have two mistakes: one is a missing space between the column name and the LIKE clause, and the second is missing single quotes around the search parameter.
First I create a user-defined scalar function which returns 1 if a string contains duplicate letters:
EDITED
CREATE FUNCTION FindDuplicateLetters
(
    @String NVARCHAR(50)
)
RETURNS BIT
AS
BEGIN
    DECLARE @Result BIT = 0
    DECLARE @Counter INT = 1

    WHILE (@Counter <= LEN(@String) - 1)
    BEGIN
        IF(ASCII((SELECT SUBSTRING(@String, @Counter, 1))) = ASCII((SELECT SUBSTRING(@String, @Counter + 1, 1))))
        BEGIN
            SET @Result = 1
            BREAK
        END
        SET @Counter = @Counter + 1
    END

    RETURN @Result
END
GO
After the function has been created, just call it from a simple SELECT query like the following:
SELECT
    *
FROM
    (SELECT
         *,
         dbo.FindDuplicateLetters(ColumnName) AS Duplicates
     FROM TableName) AS a
WHERE a.Duplicates = 1
With this combination, you will get just the rows that have duplicate letters.
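Applied to the sample data in the question (assuming the tables are literally named TableA and TableB), populating table B would look like this:
INSERT INTO TableB (ID)
SELECT a.ID
FROM TableA AS a
WHERE dbo.FindDuplicateLetters(a.Name) = 1;
-- inserts IDs 2, 3 and 5 (Daave, Toom, Eddie)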
In any version of SQL, you can do this with a brute force approach:
select *
from t
where t.name like '%aa%' or
t.name like '%bb%' or
. . .
t.name like '%zz%'
If you have a case sensitive collation, then use:
where lower(t.name) like '%aa%' or
. . .
Here's one way.
First create a table of numbers
CREATE TABLE dbo.Numbers
(
number INT PRIMARY KEY
);
INSERT INTO dbo.Numbers
SELECT number
FROM master..spt_values
WHERE type = 'P'
AND number > 0;
Then with that in place you can use
SELECT *
FROM TableA
WHERE EXISTS (SELECT *
FROM dbo.Numbers
WHERE number < LEN(Name)
AND SUBSTRING(Name, number, 1) = SUBSTRING(Name, number + 1, 1))
Though this is an old post, it's worth posting a solution that will be faster than a brute force approach or one that uses a scalar UDF (which generally drags down performance). Using NGrams8K this is rather simple.
--sample data
declare @table table (id int identity primary key, [name] varchar(20));
insert @table([name]) values ('Mattaa'),('Daave'),('Toom'),('Mike'),('Eddie');

-- solution #1
select id
from @table
cross apply dbo.NGrams8k([name],1)
where charindex(replicate(token,2), [name]) > 0
group by id;

-- solution #2 (SQL 2012+ solution using LAG)
select id
from
(
    select id, token, prevToken = lag(token,1) over (partition by id order by position)
    from @table
    cross apply dbo.NGrams8k([name],1)
) prep
where token = prevToken
group by id; -- optional, if you want to remove possible duplicates
Another brute force way (note: this uses the PostgreSQL regular-expression operator ~, not T-SQL):
select *
from t
where t.name ~ '(.)\1';

SQL query to match keywords?

I have a table with an nvarchar(max) column containing text extracted from Word documents. How can I create a select query to which I pass a list of keywords as a parameter, and which returns the rows ordered by the number of matches?
Maybe it is possible with full text search?
Yes, possible with full text search, and likely the best answer. For a straight T-SQL solution, you could use a split function and join, e.g. assuming a table of numbers called dbo.Numbers (you may need to decide on a different upper limit):
SET NOCOUNT ON;
DECLARE @UpperLimit INT;
SET @UpperLimit = 200000;

WITH n AS
(
    SELECT
        rn = ROW_NUMBER() OVER
            (ORDER BY s1.[object_id])
    FROM sys.objects AS s1
    CROSS JOIN sys.objects AS s2
    CROSS JOIN sys.objects AS s3
)
SELECT [Number] = rn - 1
INTO dbo.Numbers
FROM n
WHERE rn <= @UpperLimit + 1;

CREATE UNIQUE CLUSTERED INDEX n ON dbo.Numbers([Number]);
And a splitting function that uses that table of numbers:
CREATE FUNCTION dbo.SplitStrings
(
    @List NVARCHAR(MAX)
)
RETURNS TABLE
AS
    RETURN
    (
        SELECT DISTINCT
            [Value] = LTRIM(RTRIM(
                SUBSTRING(@List, [Number],
                CHARINDEX(N',', @List + N',', [Number]) - [Number])))
        FROM
            dbo.Numbers
        WHERE
            Number <= LEN(@List)
            AND SUBSTRING(N',' + @List, [Number], 1) = N','
    );
GO
Then you can simply say:
SELECT [key], NvarcharColumn /*, other cols */
FROM dbo.[table] AS outerT
WHERE EXISTS
(
    SELECT 1
    FROM dbo.[table] AS t
    INNER JOIN dbo.SplitStrings(N'list,of,words') AS s
        ON t.NvarcharColumn LIKE '%' + s.[Value] + '%'
    WHERE t.[key] = outerT.[key]
);
As a procedure:
CREATE PROCEDURE dbo.Search
    @List NVARCHAR(MAX)
AS
BEGIN
    SET NOCOUNT ON;

    SELECT [key], NvarcharColumn /*, other cols */
    FROM dbo.[table] AS outerT
    WHERE EXISTS
    (
        SELECT 1
        FROM dbo.[table] AS t
        INNER JOIN dbo.SplitStrings(@List) AS s
            ON t.NvarcharColumn LIKE '%' + s.[Value] + '%'
        WHERE t.[key] = outerT.[key]
    );
END
GO
Then you can just pass in @List (e.g. EXEC dbo.Search @List = N'foo,bar,splunge') from C#.
This won't be super fast, but I'm sure it will be quicker than pulling all the data out into C# and looping over it with a double-nested loop manually.
how to ... return the rows ordered by the number of [full-text] matches
I have not used it myself but believe SQL Server 2008 supports weighting the CONTAINSTABLE matches which might be of help to you:
http://msdn.microsoft.com/en-us/library/ms189760.aspx
If you don't have an engine that returns results weighted by the number of hits ...
You could write a UDF that takes two inputs and returns an integer: the big textvalue is the first input and the words you're looking for as a comma-delimited string is the second. The function returns an integer representing either the number of distinct looked-for words that were actually found at least once in the text, or the total number of times the looked-for words were found. Implementation --how to weight-- is up to you. Maybe, for example, you'd want to arrange the looked-for words in most-important to least-important order, and give an important word hit more weight than a less important word hit.
You could then use your full text search engine to find all records that contain at least one of the words (you'd OR them), and you'd run this result set through your UDF scalar function:
pseudo code
select title, weightfunction(summary, 'word1,word2,word3....wordN')
from docs
where summary contains ( word1 or word2 or word3 ... or wordN)
order by weightfunction(summary, 'word1,word2,word3....wordN') desc
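As a rough, untested sketch of such a UDF, here is one that simply counts how many of the distinct looked-for words occur at least once in the text, reusing the dbo.SplitStrings function defined above in this thread (the function name and parameters are illustrative only):
CREATE FUNCTION dbo.WeightFunction
(
    @Text  NVARCHAR(MAX),
    @Words NVARCHAR(MAX)   -- comma-delimited list of looked-for words
)
RETURNS INT
AS
BEGIN
    DECLARE @Hits INT;

    -- count the distinct words that appear at least once in the text
    SELECT @Hits = COUNT(*)
    FROM dbo.SplitStrings(@Words) AS s
    WHERE @Text LIKE '%' + s.[Value] + '%';

    RETURN @Hits;
END
GO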

Charindex in SQL doesn't give the desired result

I have a string which is an output from a function, for example: "1,3,16,..,..".
I used the following SQL query and ran it in the query builder in Visual Studio, and it didn't give me any syntax errors.
SELECT ItemID, Name, RelDate, Price, Status FROM item_k WHERE (ItemID = cast(charindex(',', @itemIDs) as int))
I gave 3,16 as the @itemIDs parameter value, but it didn't give the desired results.
Then I used the following SQL query (without charindex):
SELECT ItemID, Name, RelDate, Price, Status FROM item_k WHERE (ItemID = @itemIDs)
I gave 3 as the @itemIDs parameter value, and I got a result for it.
I also gave 16 (on a separate occasion) as the @itemIDs parameter value, and I got a result for it, so I conclude that there are rows for ItemID 3 and 16.
Why doesn't an SQL query with charindex give me any result?
I can't seem to figure out the issue here, please help.
Here's yet another solution. In my experience, when you have a list of ItemIds as a string of comma separated values, you need a split function. This is very useful to have.
With a split function, you can simply do an INNER JOIN with the results of calling the split function, passing the list of ItemIds and the associated delimiter, as follows:
DECLARE @ItemIDs varchar(100)
SET @ItemIDs = '1,3,16,22,34,35'

SELECT
    ItemID, Name, RelDate, Price, Status
FROM item_k
INNER JOIN dbo.UTILfn_Split(@ItemIDs, ',') itemIds
    ON itemIds.Value = item_k.ItemID
While this may look complicated at first, it is the more elegant and maintainable solution. Here's the code for creating the dbo.UTILfn_Split function. You need to run this first:
IF EXISTS (SELECT * FROM sysobjects WHERE id =
    object_id(N'[dbo].[UTILfn_Split]') AND xtype IN (N'FN', N'IF', N'TF'))
    DROP FUNCTION [dbo].[UTILfn_Split]
GO

CREATE FUNCTION dbo.UTILfn_Split
(
    @String nvarchar (4000),
    @Delimiter nvarchar (10)
)
RETURNS @ValueTable TABLE ([Value] nvarchar(4000))
AS
BEGIN
    DECLARE @NextString nvarchar(4000)
    DECLARE @Pos int
    DECLARE @NextPos int
    DECLARE @CommaCheck nvarchar(1)

    --Initialize
    SET @NextString = ''
    SET @CommaCheck = RIGHT(@String, 1)

    --Check for trailing Comma, if not exists, INSERT
    --if (@CommaCheck <> @Delimiter)
    SET @String = @String + @Delimiter

    --Get position of first Comma
    SET @Pos = CHARINDEX(@Delimiter, @String)
    SET @NextPos = 1

    --Loop while there is still a comma in the String of levels
    WHILE (@Pos <> 0)
    BEGIN
        SET @NextString = SUBSTRING(@String, 1, @Pos - 1)
        INSERT INTO @ValueTable ([Value]) VALUES (@NextString)
        SET @String = SUBSTRING(@String, @Pos + 1, LEN(@String))
        SET @NextPos = @Pos
        SET @Pos = CHARINDEX(@Delimiter, @String)
    END

    RETURN
END
CHARINDEX just returns the position where the character is found within the string.
So when @ItemIDs is set to '3,16', then your WHERE clause...
WHERE (ItemID = CAST(CHARINDEX(',', @ItemIDs) AS INT))
...is equivalent to...
WHERE ItemID = 2
...because CHARINDEX returns 2 since the comma character is found at position 2 of the string '3,16'.
I'm guessing that (a) you don't have a row in your table where ItemID is 2, and (b) you don't really want the position of the comma to dictate which rows are returned.
You can create a query dynamically that uses the in operator:
declare @Sql varchar(1000)
set @Sql = 'select ItemID, Name, RelDate, Price, Status from item_k where ItemID in (' + @itemIDs + ')'
exec(@Sql)
Be careful with what you send into the procedure, though. As with any dynamic SQL, if the data comes from user input without validation, the procedure is wide open for SQL injection.
Edit:
This is what happens in the query:
First we declare a variable to hold the dynamic query. This is just a varchar variable that is large enough.
In the variable we put the @itemIDs value between two strings to form the query. The comma-separated values end up between the parentheses of the in operator, forming an expression similar to: where ItemID in (1,3,16)
Finally the exec command executes the query in the variable.
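If you are on SQL Server 2016 or later, a safer alternative (a sketch, avoiding dynamic SQL and the injection risk entirely) is to split the parameter server-side instead of concatenating it into the statement:
SELECT ItemID, Name, RelDate, Price, Status
FROM item_k
WHERE ItemID IN (SELECT CAST(value AS int) FROM STRING_SPLIT(@itemIDs, ','));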
Try
SELECT ItemID, Name, RelDate, Price, Status FROM item_k WHERE ItemID in (@itemIDs)