Checking if a field contains multiple strings in SQL Server - sql

I am working on a SQL database which will provide data to a grid. The grid will enable filtering, sorting and paging, but there is also a strict requirement that users can enter free text into a text input above the grid, for example
'Engine 1001 Requi', and that the result will contain only rows where some columns between them contain all the pieces of the text. So one column may contain Engine, another column may contain 1001, and some other column will contain Requi.
I created a technical column (let's call it myTechnicalColumn) in the table (let's call it myTable) which is updated each time someone inserts or updates a row, and it contains all the values of all the columns combined and separated with spaces.
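For illustration, a minimal sketch of one way such a column could be maintained without triggers, assuming hypothetical columns col1, col2 and col3 (the real column list is not given here):
-- col1/col2/col3 are placeholders for the real columns. CONCAT_WS (SQL Server 2017+)
-- joins the values with spaces and skips NULLs; non-string columns may need an
-- explicit CONVERT with a style for the column to be persistable.
ALTER TABLE myTable
ADD myTechnicalColumn AS CONCAT_WS(' ', col1, col2, col3) PERSISTED;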
Now, to use it with Entity Framework, I decided to use a table-valued function which accepts one parameter @searchText and handles it like this:
CREATE FUNCTION myFunctionName(@searchText NVARCHAR(MAX))
RETURNS @Result TABLE
( ... here come columns )
AS
BEGIN
DECLARE @searchToken TokenType
INSERT INTO @searchToken(token) SELECT value FROM STRING_SPLIT(@searchText, ' ')
DECLARE @searchTextLength INT
SET @searchTextLength = (SELECT COUNT(*) FROM @searchToken)
INSERT INTO @Result
SELECT
... here come columns
FROM myTable
WHERE (SELECT COUNT(*) FROM @searchToken WHERE CHARINDEX(token, myTechnicalColumn) > 0) = @searchTextLength
RETURN;
END
Of course the solution works fine, but it's kinda slow. Any hints on how to improve its efficiency?

You can use an inline Table Valued Function, which should be quite a lot faster.
This would be a direct translation of your current code:
CREATE FUNCTION myFunctionName(@searchText NVARCHAR(MAX))
RETURNS TABLE
AS RETURN
(
WITH searchText AS (
SELECT s.token
FROM STRING_SPLIT(@searchText, ' ') s(token)
)
SELECT
... here come columns
FROM myTable t
WHERE (
SELECT COUNT(*)
FROM searchText s
WHERE CHARINDEX(s.token, t.myTechnicalColumn) > 0
) = (SELECT COUNT(*) FROM searchText)
);
GO
You are using a form of query called Relational Division Without Remainder, and there are other ways to cut this cake:
CREATE FUNCTION myFunctionName(@searchText NVARCHAR(MAX))
RETURNS TABLE
AS RETURN
(
WITH searchText AS (
SELECT s.token
FROM STRING_SPLIT(@searchText, ' ') s(token)
)
SELECT
... here come columns
FROM myTable t
WHERE NOT EXISTS (
SELECT 1
FROM searchText s
WHERE CHARINDEX(s.token, t.myTechnicalColumn) = 0
)
);
GO
This may be faster or slower depending on a number of factors; you need to test.
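For reference, either version can be sanity-checked directly (the column list is whatever the function actually returns):
SELECT *
FROM dbo.myFunctionName(N'Engine 1001 Requi');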

Since there is no data to test with, I am not sure if the following will solve your issue:
-- Replace the last INSERT portion
INSERT INTO @Result
SELECT
... here come columns
FROM myTable T
JOIN @searchToken S ON CHARINDEX(S.token, T.myTechnicalColumn) > 0
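Note that a plain join like this returns a row whenever any single token matches, and it can return duplicates. If the intent is still that every token must match, one hedged variation (sketched against the same @searchToken table variable from the original function) is to group and count:
INSERT INTO @Result
SELECT
... here come columns
FROM myTable T
JOIN @searchToken S ON CHARINDEX(S.token, T.myTechnicalColumn) > 0
GROUP BY ... here come columns
HAVING COUNT(DISTINCT S.token) = (SELECT COUNT(DISTINCT token) FROM @searchToken)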

Related

Azure Synapse Analytics SQL Database function to get match between two delimited lists

I'm using Azure Synapse Analytics SQL Database. I'm aware I can't use selects in a scalar function (hence the error "The SELECT statement is not allowed in user-defined functions"). I'm looking for a work-around, since this function does not rely on any tables. The goal is a scalar function that takes two delimited list parameters and a delimiter parameter, returns 1 if the lists have one or more matching items, and returns 0 if no matches are found.
--The SELECT statement is not allowed in user-defined functions
CREATE FUNCTION util.get_lsts_have_mtch
(
@p_lst_1 VARCHAR(8000),
@p_lst_2 VARCHAR(8000),
@p_dlmtr CHAR(1)
)
RETURNS BIT
/***********************************************************************************************************
Description: This function returns 1 if two delimited lists have an item that exists in both lists.
--Example run:
SELECT util.get_lsts_have_mtch('AB|CD|EF|GH|IJ','UV|WX|CD|IJ|YZ','|') -- returns 1, there's a match
SELECT util.get_lsts_have_mtch('AB|CD|EF|GH|IJ','ST|UV|WX|YZ','|') -- returns 0, there's no match
**********************************************************************************************************/
AS
BEGIN
DECLARE @v_result BIT;
-- *** CAN THIS BE ACCOMPLISHED EFFICIENTLY WITHOUT ANY SELECTS? ***
SET @v_result = (SELECT CAST(CASE WHEN EXISTS (SELECT 1
FROM STRING_SPLIT(@p_lst_1, @p_dlmtr) AS tokens_1
INNER JOIN STRING_SPLIT(@p_lst_2, @p_dlmtr) AS tokens_2
ON tokens_1.value = tokens_2.value)
THEN 1
ELSE 0
END AS BIT));
RETURN @v_result;
END;
I ditched the function and used this CASE statement. I wanted a function to join on that would be reusable. If anyone can find a function to do this, I will make that the accepted answer.
SELECT ...
FROM tbl_1
JOIN tbl_2
ON
-- wanted: util.get_lsts_have_mtch(tbl_1.my_lst, tbl_2.my_lst, '|') = 1
-- but settled for:
CASE WHEN EXISTS
(SELECT [value]
FROM STRING_SPLIT(tbl_1.my_lst, '|')
INTERSECT
SELECT [value]
FROM STRING_SPLIT(tbl_2.my_lst, '|'))
THEN 1
ELSE 0
END = 1
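If inline table-valued functions are available in your environment (they are in regular SQL Server; support on a given Synapse tier would need checking), a hedged sketch of a reusable version you could CROSS APPLY instead of repeating the CASE (util.get_lsts_have_mtch_tvf is a hypothetical name):
CREATE FUNCTION util.get_lsts_have_mtch_tvf
(
@p_lst_1 VARCHAR(8000),
@p_lst_2 VARCHAR(8000),
@p_dlmtr CHAR(1)
)
RETURNS TABLE
AS
RETURN
(
-- 1 if the two delimited lists share at least one item, otherwise 0
SELECT CAST(CASE WHEN EXISTS (SELECT [value] FROM STRING_SPLIT(@p_lst_1, @p_dlmtr)
INTERSECT
SELECT [value] FROM STRING_SPLIT(@p_lst_2, @p_dlmtr))
THEN 1 ELSE 0 END AS BIT) AS has_mtch
);
-- Usage sketch:
-- SELECT ...
-- FROM tbl_1
-- CROSS JOIN tbl_2
-- CROSS APPLY util.get_lsts_have_mtch_tvf(tbl_1.my_lst, tbl_2.my_lst, '|') m
-- WHERE m.has_mtch = 1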

How does one automatically insert the results of several function calls into a table?

Wasn't sure how to title the question but hopefully this makes sense :)
I have a table (OldTable) with an index and a column of comma-separated lists. I'm trying to split the strings in the list column and create a new table with the indexes coupled with each of the substrings of the string they were connected to in the old table.
Example:
OldTable
index | list
1 | 'a,b,c'
2 | 'd,e,f'
NewTable
index | letter
1 | 'a'
1 | 'b'
1 | 'c'
2 | 'd'
2 | 'e'
2 | 'f'
I have created a function that will split the string and return each sub string as a record in a 1 column table as so:
SELECT * FROM Split('a,b,c', ',', 1)
Which will result in:
Result
index | string
1 | 'a'
1 | 'b'
1 | 'c'
I was hoping that I could use this function as so:
SELECT * FROM Split((SELECT * FROM OldTable), ',')
And then use the id and string columns from OldTable in my function (by re-writing it slightly) to create NewTable. But as far as I understand, sending tables into the function doesn't work, as I get: "Subquery returned more than 1 value. ... not permitted ... when the subquery is used as an expression."
One solution I was thinking of would be to run the function, as is, on all the rows of OldTable and insert the result of each call into NewTable. But I'm not sure how to iterate over each row without a function. And I can't send tables into a function to iterate, so I'm back at square one.
I could do it manually but OldTable contains a few records (1000 or so) so it seems like automation would be preferable.
Is there a way to either:
Iterate over OldTable row by row, run the row through Split(), add the result to NewTable for all rows in OldTable. Either by a function or through regular sql-transactions
Re-write Split() to take a table variable after all
Get rid of the function altogether and just do it in sql transactions?
I'd prefer not to use procedures (I don't know if there is a solution with them either), mostly because I don't want the functionality inside of the DB to be exposed to the outside. If, however, that is the "best"/only way to go, I'll have to consider it. I'm quite (read very) new to SQL so it might be a needless worry.
Here is my Split() function if it is needed:
CREATE FUNCTION Split (
@string nvarchar(4000),
@delimitor nvarchar(10),
@index int = 0
)
RETURNS @splitTable TABLE (id int, string nvarchar(4000) NOT NULL) AS
BEGIN
DECLARE @startOfSubString smallint;
DECLARE @endOfSubString smallint;
SET @startOfSubString = 1;
SET @endOfSubString = CHARINDEX(@delimitor, @string, @startOfSubString);
IF (@endOfSubString <> 0)
WHILE @endOfSubString > 0
BEGIN
INSERT INTO @splitTable
SELECT @index, SUBSTRING(@string, @startOfSubString, @endOfSubString - @startOfSubString);
SET @startOfSubString = @endOfSubString + 1;
SET @endOfSubString = CHARINDEX(@delimitor, @string, @startOfSubString);
END;
INSERT INTO @splitTable
SELECT @index, SUBSTRING(@string, @startOfSubString, LEN(@string) - @startOfSubString + 1);
RETURN;
END
Hope my problem and attempt was explained and possible to understand.
You are looking for cross apply:
SELECT t.[index], s.string AS letter
FROM OldTable t CROSS APPLY
dbo.Split(t.list, ',', t.[index]) s;
Inserting into the new table just requires an INSERT or SELECT ... INTO clause.
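For example, a sketch of populating NewTable this way, assuming the Split signature shown above:
INSERT INTO NewTable ([index], letter)
SELECT t.[index], s.string
FROM OldTable t CROSS APPLY
dbo.Split(t.list, ',', t.[index]) s;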

Execute table valued function from row values

Given a table as below where fn contains the name of an existing table valued functions and param contains the param to be passed to the function
fn | param
----------------
'fn_one' | 1001
'fn_two' | 1001
'fn_one' | 1002
'fn_two' | 1002
Is there a way to get a resulting table like this by using set-based operations?
The resulting table would contain 0-* lines for each line from the first table.
param | resultval
---------------------------
1001 | 'fn_one_result_a'
1001 | 'fn_one_result_b'
1001 | 'fn_two_result_one'
1002 | 'fn_two_result_one'
I thought I could do something like (pseudo)
select t1.param, t2.resultval
from table1 t1
cross join exec sp_executesql('select * from '+t1.fn+'('+t1.param+')') t2
but that gives a syntax error at exec sp_executesql.
Currently we're using cursors to loop through the first table and insert into a second table with exec sp_executesql. While this does the job correctly, it is also the heaviest part of a frequently used stored procedure and I'm trying to optimize it. Changes to the data model would probably imply changes to most of the core of the application, and that would cost more than just throwing hardware at SQL Server.
I believe that this should do what you need, using dynamic SQL to generate a single statement that can give you your results and then using that with EXEC to put them into your table. The FOR XML trick is a common one for concatenating VARCHAR values together from multiple rows. It has to be written with the AS [text()] for it to work.
--=========================================================
-- Set up
--=========================================================
CREATE TABLE dbo.TestTableFunctions (function_name VARCHAR(50) NOT NULL, parameter VARCHAR(20) NOT NULL)
INSERT INTO dbo.TestTableFunctions (function_name, parameter)
VALUES ('fn_one', '1001'), ('fn_two', '1001'), ('fn_one', '1002'), ('fn_two', '1002')
CREATE TABLE dbo.TestTableFunctionsResults (function_name VARCHAR(50) NOT NULL, parameter VARCHAR(20) NOT NULL, result VARCHAR(200) NOT NULL)
GO
CREATE FUNCTION dbo.fn_one
(
#parameter VARCHAR(20)
)
RETURNS TABLE
AS
RETURN
SELECT 'fn_one_' + #parameter AS result
GO
CREATE FUNCTION dbo.fn_two
(
#parameter VARCHAR(20)
)
RETURNS TABLE
AS
RETURN
SELECT 'fn_two_' + #parameter AS result
GO
--=========================================================
-- The important stuff
--=========================================================
DECLARE @sql VARCHAR(MAX)
SELECT @sql =
(
SELECT 'SELECT ''' + T1.function_name + ''', ''' + T1.parameter + ''', F.result FROM ' + T1.function_name + '(' + T1.parameter + ') F UNION ALL ' AS [text()]
FROM
TestTableFunctions T1
FOR XML PATH ('')
)
SELECT @sql = SUBSTRING(@sql, 1, LEN(@sql) - 10)
INSERT INTO dbo.TestTableFunctionsResults
EXEC(@sql)
SELECT * FROM dbo.TestTableFunctionsResults
--=========================================================
-- Clean up
--=========================================================
DROP TABLE dbo.TestTableFunctions
DROP TABLE dbo.TestTableFunctionsResults
DROP FUNCTION dbo.fn_one
DROP FUNCTION dbo.fn_two
GO
The first SELECT statement (ignoring the setup) builds a string which has the syntax to run all of the functions in your table, returning the results all UNIONed together. That makes it possible to run the string with EXEC, which means that you can then INSERT those results into your table.
A couple of quick notes though... First, the functions must all return identical result set structures - the same number of columns with the same data types (technically, they might be able to be different data types if SQL Server can always do implicit conversions on them, but it's really not worth the risk). Second, if someone were able to update your functions table they could use SQL injection to wreak havoc on your system. You'll need that to be tightly controlled and I wouldn't let users just enter in function names, etc.
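One hedged way to reduce that injection risk (not part of the original answer) is to validate the stored names against the catalog and wrap them with QUOTENAME before building the string:
DECLARE @sql VARCHAR(MAX)
SELECT @sql =
(
-- QUOTENAME stops a stored name from smuggling extra syntax in, and the OBJECT_ID
-- check only accepts names that really are inline TVFs in dbo. Parameter values
-- would still need similar treatment in a real system.
SELECT 'SELECT ''' + T1.function_name + ''', ''' + T1.parameter + ''', F.result FROM dbo.'
+ QUOTENAME(T1.function_name) + '(' + T1.parameter + ') F UNION ALL ' AS [text()]
FROM TestTableFunctions T1
WHERE OBJECT_ID('dbo.' + QUOTENAME(T1.function_name), 'IF') IS NOT NULL
FOR XML PATH ('')
)
SELECT @sql = SUBSTRING(@sql, 1, LEN(@sql) - 10)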
You cannot call objects by referencing names stored as data in a SQL statement. One method would be to use a case expression:
select t1.*,
(case when fn = 'fn_one' then dbo.fn_one(t1.param)
when fn = 'fn_two' then dbo.fn_two(t1.param)
end) as resultval
from table1 t1 ;
Interestingly, you could encapsulate the case as another function, and then do:
select t1.*, dbo.fn_generic(t1.fn, t1.param) as resultval
from table1 t1 ;
However, in SQL Server, you cannot use dynamic SQL in a user-defined function (defined in T-SQL), so you would still need to use case or similar logic.
Either of these methods is likely to be much faster than a cursor, because they do not require issuing multiple queries.
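A minimal sketch of what such a generic wrapper might look like, assuming scalar versions of the functions rather than the table-valued ones defined above (dbo.fn_generic, dbo.fn_one_scalar and dbo.fn_two_scalar are hypothetical names):
CREATE FUNCTION dbo.fn_generic
(
@fn VARCHAR(50),
@param VARCHAR(20)
)
RETURNS VARCHAR(200)
AS
BEGIN
-- Dispatch on the stored function name; every new function needs a new branch here.
RETURN (CASE WHEN @fn = 'fn_one' THEN dbo.fn_one_scalar(@param)
WHEN @fn = 'fn_two' THEN dbo.fn_two_scalar(@param)
END);
END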

SQL separate string being passed

I have a product table with a tag column; each product has multiple tags stored in this format: "|technology|mobile|acer|laptop|". A second product's tags could look like this: "|computer|laptop|toshiba|".
I am using MS SQL Server 2008 and a stored procedure. I would like to know how I could pass a string like "|computer|laptop|" and get both records returned, as they both have the tag laptop in them, while if I passed "|computer|" only the second record would return, as it is the only one containing that tag.
What is the best way of doing this without performance penalties using a stored procedure?
I have so far had no luck with different code I have found on the internet. I really hope you guys can help me with this, thank you.
I agree with the other posters that storing data in a column like that is going to cause headaches. You really want to store those tags in a child table so you can easily and efficiently join them. If it's an inherited system or something you can't refactor right away you can write a split function.
The typical sql split implementation uses a while loop and a table variable in a multi-statement TVF. Every iteration incurs more I/O and CPU overhead. Performance testing on SQL 2005 SP1 showed that this overhead is hidden from the I/O Stats and query plan. Profiling the code will reveal the true cost.
Rewriting that function into a inline TVF is much more efficient. The primary difference between an inline and multi-statement TVF is the Query Optimizer will merge the inline function into the query before processing; this eliminates the overhead from the function call. Also, since there is no table variable required, the additional I/O cost is eliminated. Finally, you avoid the costly iterative processing.
Here is the fastest, most scalable split function I could come up with including unit tests and summary.
This function requires a numbers table:
CREATE TABLE dbo.Numbers
(
NUM INT PRIMARY KEY CLUSTERED
)
;WITH Nbrs ( n ) AS
(
SELECT 1 UNION ALL
SELECT 1 + n FROM Nbrs WHERE n < 10000
)
INSERT INTO dbo.Numbers
SELECT n FROM Nbrs
OPTION ( MAXRECURSION 10000 )
The source of the function is here:
IF EXISTS (
SELECT 1
FROM dbo.sysobjects
WHERE id = object_id(N'[dbo].[ParseString]')
AND xtype in (N'FN', N'IF', N'TF'))
BEGIN
DROP FUNCTION [dbo].[ParseString]
END
GO
CREATE FUNCTION dbo.ParseString (@String VARCHAR(8000), @Delimiter VARCHAR(10))
RETURNS TABLE
AS
/*******************************************************************************************************
* dbo.ParseString
*
* Creator: MagicMike
* Date: 9/12/2006
*
*
* Outline: A set-based string tokenizer
* Takes a string that is delimited by another string (of one or more characters),
* parses it out into tokens and returns the tokens in table format. Leading
* and trailing spaces in each token are removed, and empty tokens are thrown
* away.
*
*
* Usage examples/test cases:
Single-byte delimiter:
select * from dbo.ParseString2('|HDI|TR|YUM|||', '|')
select * from dbo.ParseString('HDI| || TR |YUM', '|')
select * from dbo.ParseString(' HDI| || S P A C E S |YUM | ', '|')
select * from dbo.ParseString2('HDI|||TR|YUM', '|')
select * from dbo.ParseString('', '|')
select * from dbo.ParseString('YUM', '|')
select * from dbo.ParseString('||||', '|')
select * from dbo.ParseString('HDI TR YUM', ' ')
select * from dbo.ParseString(' HDI| || S P A C E S |YUM | ', ' ') order by Ident
select * from dbo.ParseString(' HDI| || S P A C E S |YUM | ', ' ') order by StringValue
Multi-byte delimiter:
select * from dbo.ParseString('HDI and TR', 'and')
select * from dbo.ParseString('Pebbles and Bamm Bamm', 'and')
select * from dbo.ParseString('Pebbles and sandbars', 'and')
select * from dbo.ParseString('Pebbles and sandbars', ' and ')
select * from dbo.ParseString('Pebbles and sand', 'and')
select * from dbo.ParseString('Pebbles and sand', ' and ')
*
*
* Notes:
1. A delimiter is optional. If a blank delimiter is given, each byte is returned in its own row (including spaces).
select * from dbo.ParseString3('|HDI|TR|YUM|||', '')
2. In order to maintain compatibility with SQL 2000, ident is not sequential but can still be used in an order clause
If you are running on SQL 2005 or later, replace
SELECT Ident, StringValue FROM
with
SELECT Ident = ROW_NUMBER() OVER (ORDER BY ident), StringValue FROM
*
*
* Modifications
*
*
********************************************************************************************************/
RETURN (
SELECT Ident, StringValue FROM
(
SELECT Num as Ident,
CASE
WHEN DATALENGTH(@delimiter) = 0 or @delimiter IS NULL
THEN LTRIM(SUBSTRING(@string, num, 1)) --replace this line with '' if you prefer it to return nothing when no delimiter is supplied. Remove LTRIM if you want to return spaces when no delimiter is supplied
ELSE
LTRIM(RTRIM(SUBSTRING(@String,
CASE
WHEN (Num = 1 AND SUBSTRING(@String, num, DATALENGTH(@delimiter)) <> @delimiter) THEN 1
ELSE Num + DATALENGTH(@delimiter)
END,
CASE CHARINDEX(@Delimiter, @String, Num + DATALENGTH(@delimiter))
WHEN 0 THEN LEN(@String) - Num + DATALENGTH(@delimiter)
ELSE CHARINDEX(@Delimiter, @String, Num + DATALENGTH(@delimiter)) - Num -
CASE
WHEN Num > 1 OR (Num = 1 AND SUBSTRING(@String, num, DATALENGTH(@delimiter)) = @delimiter)
THEN DATALENGTH(@delimiter)
ELSE 0
END
END
)))
END AS StringValue
FROM dbo.Numbers
WHERE Num <= LEN(@String)
AND (
SUBSTRING(@String, Num, DATALENGTH(ISNULL(@delimiter,''))) = @Delimiter
OR Num = 1
OR DATALENGTH(ISNULL(@delimiter,'')) = 0
)
) R WHERE StringValue <> ''
)
For your case, you could use it like this:
--SAMPLE DATA
CREATE TABLE #products
(
productid INT IDENTITY PRIMARY KEY CLUSTERED ,
prodname VARCHAR(200),
tags VARCHAR(200)
)
INSERT INTO #products (prodname, tags)
SELECT 'toshiba laptop', '|laptop|toshiba|notebook|'
UNION ALL
SELECT 'toshiba netbook', '|netbook|toshiba|'
UNION ALL
SELECT 'Apple macbook', '|laptop|apple|notebook|'
UNION ALL
SELECT 'Apple mouse', '|apple|mouse'
--Actual solution
DECLARE @searchTags VARCHAR(200)
SET @searchTags = '|apple|laptop|' --This would be the string that would get passed in if it were a stored procedure
--First we convert the supplied tags into a table for use later
--My (2005) dev box raised a severe error attempting to do the search in 1 step
--hence the temp table
CREATE TABLE #tags
(
tag VARCHAR(200) PRIMARY KEY CLUSTERED
)
INSERT INTO #tags --The function splits the string up into one record for each value
SELECT stringValue
FROM dbo.parsestring(@searchTags,'|') --SQL 2005 has a real problem joining to a TVF twice, apparently
SELECT DISTINCT p.*
FROM #products P --we join the products table with the function to get a row for each tag so we can compare with the temp table
CROSS APPLY (SELECT stringValue FROM dbo.parsestring(P.tags,'|')) T
WHERE EXISTS(SELECT * FROM #tags WHERE tag = T.stringValue) --we compare the rows with our temp table and if we get matches, the products are returned
/*This will return the Apple Macbook and the Toshiba Laptop because they both contain
the 'laptop' tag and the Apple mouse because it contains the 'apple' tag. The
toshiba netbook contains neither tag so it won't be returned.*/
But, with your tags in a separate table as suggested (1-many for a simplified example), it would look like this:
SELECT * FROM Products P
WHERE EXISTS (SELECT *
FROM tags T
INNER JOIN dbo.parsestring(@searchTags,'|') Q
ON T.tag = Q.StringValue
WHERE T.productid = P.productid )
You have a many-to-many relationship between products and tags. The best way of doing this is to redesign your database. Create a table of tags and a junction table that links products with tags.
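A minimal sketch of that redesign, using hypothetical table and column names (it assumes a dbo.Products table keyed by productid), with the same kind of search expressed against it:
CREATE TABLE dbo.Tags
(
tagid INT IDENTITY PRIMARY KEY,
tagname VARCHAR(100) NOT NULL UNIQUE
)
CREATE TABLE dbo.ProductTags
(
productid INT NOT NULL REFERENCES dbo.Products (productid),
tagid INT NOT NULL REFERENCES dbo.Tags (tagid),
PRIMARY KEY (productid, tagid)
)
-- Products that carry any of the searched tags:
SELECT DISTINCT p.*
FROM dbo.Products p
JOIN dbo.ProductTags pt ON pt.productid = p.productid
JOIN dbo.Tags t ON t.tagid = pt.tagid
WHERE t.tagname IN ('computer', 'laptop')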
That's not a very good design. Combining like terms into one field and separating them with a delimiter such as a vertical bar does not scale well and it is very limiting.
I recommend you read up on how to design databases. The best book I ever bought regarding database design was Database Design for Mere Mortals by Michael Hernandez, ISBN 0-201-69471-9 (Amazon listing). I noticed he has a second edition.
He walks you through the entire process of (from start to finish) of designing a database. I recommend you start with this book.
You have to learn to look at things in groups or chunks. Database design has simple building blocks just like programming does. If you gain a thorough understanding of these simple building blocks you can tackle any database design.
In programming you have:
If Constructs
If Else Constructs
Do While Loops
Do Until Loops
Case Constructs
With databases you have:
Data Tables
Lookup Tables
One to One relationships
One to Many Relationships
Many to Many relationships
Primary keys
Foreign keys
The simpler you make things the better. A database is nothing more than a place where you put data into cubbyholes. Start by identifying what these cubbyholes are and what kind of stuff you want in them.
You are never going to create the perfect database design the first time you try. This is a fact. Your design will go through several refinements during the process. Sometimes things won't seem apparent until you start entering data, and then you have an aha moment.
The web brings its own set of challenges: bandwidth issues, statelessness, erroneous data from processes that start but never get finished.
Make a split with a CLR function that returns a table with the values, or pass the values as XML, load them into a table variable and make a join:
create procedure search
(
@data xml
)
AS
BEGIN
--declare @data xml
declare @LoadData table
(
dataToFind varchar(max)
)
--set @data = cast(
--'<data>
-- <item>computer</item>
-- <item>television</item>
--</data>' as xml)
insert into @LoadData
SELECT T2.Loc.value('.','varchar(max)')
FROM (select @data as data) T
CROSS APPLY T.data.nodes('/data/item') as T2(Loc)
select * from @LoadData --use for join
END
I would suggest you create an extra couple of tables with a proper design, and populate those tables from the existing, poorly designed part. That way your search will work properly, but others using the old pipe-delimited approach won't notice until you have time to refactor.

T-SQL Foreach Loop

Scenario
I have a stored procedure written in T-SQL using SQL Server 2005.
"SEL_ValuesByAssetName"
It accepts a unique string "AssetName".
It returns a table of values.
Question
Instead of calling the stored procedure multiple times and having to make a database call every time I do this, I want to create another stored procedure that accepts a list of all the "AssetNames", calls the stored procedure "SEL_ValuesByAssetName" for each asset name in the list, and then returns the ENTIRE TABLE OF VALUES.
Pseudo Code
foreach(value in @AllAssetsList)
{
@AssetName = value
SEL_ValuesByAssetName(@AssetName)
UPDATE #TempTable
}
How would I go about doing this?
It will look quite crippled if done with stored procedures. But can you use table-valued functions instead?
In case of Table-Valued functions it would look something like:
SELECT al.Value AS AssetName, av.* FROM @AllAssetsList AS al
CROSS APPLY dbo.SEL_ValuesByAssetName(al.Value) AS av
Sample implementation:
First of all, we need to create a Table-Valued Parameter type:
CREATE TYPE [dbo].[tvpStringTable] AS TABLE(Value varchar(max) NOT NULL)
Then, we need a function to get a value of a specific asset:
CREATE FUNCTION [dbo].[tvfGetAssetValue]
(
@assetName varchar(max)
)
RETURNS TABLE
AS
RETURN
(
-- Add the SELECT statement with parameter references here
SELECT 0 AS AssetValue
UNION
SELECT 5 AS AssetValue
UNION
SELECT 7 AS AssetValue
)
Next, a function to return a list AssetName, AssetValue for assets list:
CREATE FUNCTION [dbo].[tvfGetAllAssets]
(
@assetsList tvpStringTable READONLY
)
RETURNS TABLE
AS
RETURN
(
-- Add the SELECT statement with parameter references here
SELECT al.Value AS AssetName, av.AssetValue FROM @assetsList al
CROSS APPLY dbo.tvfGetAssetValue(al.Value) AS av
)
Finally, we can test it:
DECLARE @names tvpStringTable
INSERT INTO @names VALUES ('name1'), ('name2'), ('name3')
SELECT * FROM [Test].[dbo].[tvfGetAllAssets] (@names)
In MSSQL 2000 I would make @allAssetsList a varchar comma-separated values list (and keep in mind that the maximum length is 8000).
I would create a temporary table, parse this string and insert the values into that table, then do a simple query with the condition where assetName in (select assetName from #tempTable).
I wrote about MSSQL 2000 because I am not sure whether MSSQL 2005 has some new data type like an array that can be passed as a literal to the SP.
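A rough sketch of that older comma-separated approach, reusing a Split-style function like the one shown earlier on this page; AssetValues is a hypothetical table standing in for whatever SEL_ValuesByAssetName reads from:
DECLARE @allAssetsList VARCHAR(8000)
SET @allAssetsList = 'asset1,asset2,asset3'
CREATE TABLE #tempTable (assetName VARCHAR(100))
-- Parse the delimited string into one row per asset name.
INSERT INTO #tempTable (assetName)
SELECT string FROM dbo.Split(@allAssetsList, ',', 0)
-- Then filter the values on the parsed names.
SELECT v.*
FROM AssetValues v
WHERE v.assetName IN (SELECT assetName FROM #tempTable)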