output of replace SQL as a where clause in another SQL - sql

Sorry for the rubbish title. I couldn't quite articulate my problem in a few words.
I have a SQL query that gives a list of ids separated by a pipe (|). I want to pass these ids into another SQL query as a where clause. I can use replace to convert the values from pipe-separated to comma-separated.
As an example the list of IDs might be
1|2|3|4
and using replace I get
1,2,3,4
select replace(value, '|', ',') from my_table;
If I try and pass this into another SQL where I want to look up these IDs I get an error
ORA-01722: invalid number
select * from my_table2 where id in (
select replace(value, '|', ',') from my_table);
Now I presume I need to cast the output to a number, but I don't want to cast the entire string to a number, just the numeric values within it.
How can I do this easily?
Thanks

This is a complicated expression, but you can do it with like and exists:
select *
from my_table2
where exists (select 1
from my_table t1
where '|' || value || '|' like '%|' || id || '|%'
);
However, you have a fundamental problem with the data structure in my_table. You should not be storing lists of anything -- and especially integer ids -- in a string. The proper SQL approach is to use a junction table, with one row per id. Oracle has other data structures such as nested tables, which can help with this.
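To illustrate the junction-table idea, here is a minimal sketch in Oracle SQL; the table and column names (my_table_ids, list_id) are invented for the example:
-- One row per id instead of a '1|2|3|4' string (hypothetical names)
CREATE TABLE my_table_ids (
    list_id NUMBER NOT NULL,   -- identifies which list the id belongs to
    id      NUMBER NOT NULL,
    PRIMARY KEY (list_id, id)
);

-- The lookup then needs no string parsing at all
SELECT *
FROM my_table2
WHERE id IN (SELECT id FROM my_table_ids);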

There may be two cases: good and bad.
The bad case is that your pipe-separated string is stored somewhere in the database and you cannot change this design to something meaningful. If so, you'll need to use the like operator, something like this:
select t2.*
from my_table2 t2, my_table t1
where '|' || t1.value || '|' like '%|' || t2.id || '|%'
The good case is that this pipe-separated list isn't persistent and is produced by the first SQL. If so, just drop that step: don't concatenate the ids into one row at all. Make the inner SQL return a result set of the required IDs, one per row, and use something like
select t2.*
from my_table2 t2
where t2.id in (select id from ...)
An additional case is when this list is a parameter value passed in from the client. Some developers use this approach to build filters, etc. If so, you should change the client to pass something better, say, a table of numbers. The SQL would be like
select t2.*
from my_table2 t2
where t2.id in (select column_value from table(cast(:param as NumberTable)))
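For completeness, a rough sketch of that last approach, assuming you are free to create a schema-level collection type; the name NumberTable comes from the snippet above, everything else is illustrative, and the client would bind :param as an instance of this type:
-- Collection type the client can bind to :param
CREATE TYPE NumberTable AS TABLE OF NUMBER;
/

-- The filter then unnests the bound collection
SELECT t2.*
FROM my_table2 t2
WHERE t2.id IN (SELECT column_value
                FROM TABLE(CAST(:param AS NumberTable)));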

We do it the following way. We have one function that splits a string and returns a table. The code is T-SQL, but I think you can easily change it to Oracle SQL.
CREATE FUNCTION [dbo].[fStringToTable]
(
@List NVARCHAR(MAX) ,
@Splitter NVARCHAR(MAX)
)
RETURNS @ParsedList TABLE ( ID INT )
AS
BEGIN
DECLARE @ID NVARCHAR(MAX) ,
@Pos INT ,
@sqlstat NVARCHAR(MAX)
DECLARE @tbl TABLE ( ID INT )
SET @List = LTRIM(RTRIM(@List)) + @Splitter
SET @Pos = CHARINDEX(@Splitter, @List, 1)
IF REPLACE(@List, @Splitter, '') <> ''
BEGIN
WHILE @Pos > 0
BEGIN
SET @ID = LTRIM(RTRIM(LEFT(@List, @Pos - 1)))
IF @ID <> ''
BEGIN
INSERT INTO @tbl
( ID )
SELECT ( @ID )
END
SET @List = RIGHT(@List, LEN(@List) - @Pos)
SET @Pos = CHARINDEX(@Splitter, @List, 1)
END
END
INSERT INTO @ParsedList
SELECT ID
FROM @tbl
GROUP BY ID
RETURN
END
And your select will be
select * from my_table2 where id in (
SELECT ID FROM [dbo].[fStringToTable]('1,2', ','));
See this http://www.adp-gmbh.ch/ora/plsql/coll/return_table.html
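If you would rather stay inside a single Oracle query than port that function, a common workaround is to split the pipe-delimited string with REGEXP_SUBSTR and CONNECT BY. This is only a sketch: it assumes Oracle 11g or later (for REGEXP_COUNT) and that my_table holds a single delimited value, since CONNECT BY over a multi-row table needs extra care:
SELECT *
FROM my_table2
WHERE id IN (
    SELECT TO_NUMBER(REGEXP_SUBSTR(t.value, '[^|]+', 1, LEVEL))
    FROM my_table t
    CONNECT BY LEVEL <= REGEXP_COUNT(t.value, '[^|]+')
);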

Related

How to put a comma separated value from a column in a table into SQL IN operator?

I have a table which has a column in which I am storing a comma separated text with single quotes for each of the comma separated values. These values are employee IDs. This is how it looks
Now, I have a SQL query wherein I need to put the value from this column into a SQL IN operator. Something like this:
select *
from EMPLOYEE_MASTER
where EMPLOYEEID IN (select CM_CONFIG_VALUE
from ADL_CONFIG_MAST_T
where CM_CONFIG_KEY like 'ATT_BIOMETRIC_OU_ID'
)
But this does not work; the query returns 0 rows when executed, whereas if I run the query normally like below, it works.
select *
from EMPLOYEE_MASTER
where EMPLOYEEID IN('9F3DD4B791554DDE','C9B90D62851D43AB','828CB9E6204B4DDC')
Please suggest what I should do here. I have tried using substring to remove the first and the last character as well assuming that single quotes might be the issue, but that does not work either.
select * from EMPLOYEE_MASTER where EMPLOYEEID IN(select EMPLOYEEID from ADL_CONFIG_MAST_T where CM_CONFIG_KEY like 'ATT_BIOMETRIC_OU_ID')
The column should be the same in where COLUMNNAME IN (select COLUMNNAME from tablename).
You can create a temp variable and then use the exec command to get the desired result.
declare @temp varchar(200)
select @temp=CM_CONFIG_VALUE
from ADL_CONFIG_MAST_T
where CM_CONFIG_KEY like 'ATT_BIOMETRIC_OU_ID'
exec('select *
from EMPLOYEE_MASTER
where EMPLOYEEID IN (' + @temp + ')')
Try This:
DECLARE @ID VARCHAR(500);
DECLARE @Number VARCHAR(500);
DECLARE @comma CHAR;
SET @comma = ','
SET @ID = (select CM_CONFIG_VALUE
from ADL_CONFIG_MAST_T
where CM_CONFIG_KEY like '%ATT_BIOMETRIC_OU_ID%') + @comma;
Create table #temp (EMPLOYEEID varchar(500))
WHILE CHARINDEX(@comma, @ID) > 0
BEGIN
SET @Number = SUBSTRING(@ID, 0, CHARINDEX(@comma, @ID))
SET @ID = SUBSTRING(@ID, CHARINDEX(@comma, @ID) + 1, LEN(@ID))
Insert into #temp
select @Number
END
select *
from EMPLOYEE_MASTER
where EMPLOYEEID IN(select EMPLOYEEID from #temp)
The reason you are not getting it in your query is that your inner query returns only one row. So your query searches for '9F3DD4B791554DDE','C9B90D62851D43AB','828CB9E6204B4DDC' as a single record.
If your compatibility level is greater than or equal to 130, you can use the STRING_SPLIT() function. Then your query would be
SELECT *
FROM EMPLOYEE_MASTER
WHERE EMPLOYEEID IN
(SELECT value AS empid
FROM ADL_CONFIG_MAST_T CROSS APPLY string_split(CM_CONFIG_VALUE, ',' )
WHERE CM_CONFIG_KEY LIKE 'ATT_BIOMETRIC_OU_ID' )
What this actually does is split CM_CONFIG_VALUE on ',' and return the pieces as rows. This is the value column I referred to. Then you use them with the IN clause.
Hope this helps!
A direct IN condition will not work here. You have to split your string before searching. You can do that with XML options in SQL Server 2014:
SELECT *
FROM EMP
WHERE EMPID IN (
SELECT a.c.value('.', 'VARCHAR(1000)')
FROM (
SELECT x = CAST('<a>' +
REPLACE(REPLACE(CM_CONFIG_VALUE , ',', '</a><a>'),'''','') + '</a>' AS XML )
FROM ADL_CONFIG_MAST_T
-- WHERE <your_condition>
) m
CROSS APPLY x.nodes('/a') a(c))
For version 2016 and above, you can use STRING_SPLIT with compatibility level 130.
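If you are not sure which compatibility level the database is running at, a quick check against the standard catalog view (no assumptions about your schema) is:
SELECT name, compatibility_level
FROM sys.databases
WHERE name = DB_NAME();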

Select statement that concatenates the first character after every '/' character in a column

So I am trying to write a query which, among other things, brings back the first character in a Varchar field, then returns the first character which appears after each / character throughout the rest of the field.
The field I am referring to will contain a group of last names, separated by a '/'. For example: Fischer-Costello/Korbell/Morrison/Pearson
For the above example, I would want my select statement to return: FKMP.
So far, I have only been able to get my code to return the first character + the first character after the FIRST (and only the first) '/' character.
So for the above example input, my select statement would return: FK
Here is the code that I have written so far:
select rp.CONTACT_ID, ra.TRADE_REP, c.FIRST_NAME, c.LAST_NAME,
UPPER(LEFT(FIRST_NAME, 1)) + SUBSTRING(c.first_name,CHARINDEX('/',c.first_name)+1,1) as al_1,
UPPER(LEFT(LAST_NAME, 1)) + SUBSTRING(c.LAST_name,CHARINDEX('/',c.LAST_name)+1,1) as al_2
from dbo.REP_ALIAS ra
inner join dbo.REP_PROFILE rp on rp.CONTACT_ID = ra.CONTACT_ID
inner join dbo.CONTACT c on rp.CONTACT_ID = c.CONTACT_ID
where
rp.CRD_NUMBER is null and
ra.TRADE_REP like '%DNK%' and
(c.LAST_NAME like '%/%' or c.FIRST_NAME like '%/%') and
ra.TRADE_FIRM in
(
'xxxxxxx',
'xxxxxxx'
)
If you read the code, it's obvious that I am attempting to perform the same concatenation on the first_name column as well. However, I realize that a solution which will work for the Last_name column (used in my example), will also work for the first_name column.
Thank you.
Some default values
DECLARE @List VARCHAR(50) = 'Fischer-Costello/Korbell/Morrison/Pearson'
DECLARE @SplitOn CHAR(1) = '/'
This area just splits the string into a list
DECLARE @RtnValue table
(
Id int identity(1,1),
Value nvarchar(4000)
)
While (Charindex(@SplitOn, @List)>0)
Begin
Insert Into @RtnValue (value)
Select
Value = ltrim(rtrim(Substring(@List,1,Charindex(@SplitOn,@List)-1)))
Set @List = Substring(@List,Charindex(@SplitOn,@List)+len(@SplitOn+',')-1,len(@List))
End
Insert Into @RtnValue (Value)
Select Value = ltrim(rtrim(@List))
Now let's grab the first character of each name and stuff it back into a single value
SELECT STUFF((SELECT SUBSTRING(VALUE,1,1) FROM @RtnValue FOR XML PATH('')),1,0,'') AS Value
Outputs:
Value
FKMP
Here is another way to do this that would be a lot faster than looping. What you need is a set-based splitter. Jeff Moden at SQL Server Central has one that is awesome. Here is a link to the article: http://www.sqlservercentral.com/articles/Tally+Table/72993/
Now I know you have to sign up for an account to view this, but it is free and the logic in that article will change the way you look at data. You might also be able to find his code posted if you search for DelimitedSplit8K.
At any rate, here is how you could implement this type of splitter.
declare @Table table(ID int identity, SomeValue varchar(50))
insert @Table
select 'Fischer-Costello/Korbell/Morrison/Pearson'
select ID, STUFF((select '' + left(x.Item, 1)
from @Table t2
cross apply dbo.DelimitedSplit8K(SomeValue, '/') x
where t2.ID = t1.ID
for xml path('')), 1, 0 , '') as MyResult
from @Table t1
group by t1.ID

TSQL - Querying a table column to pull out popular words for a tag cloud

Just an exploratory question to see if anyone has done this or if, in fact, it is at all possible.
We all know what a tag cloud is, and usually a tag cloud is created by someone assigning tags. Is it possible, within the current features of SQL Server, to create this automatically, maybe via a trigger when a table has a record added or updated, by looking at the data within a certain column and getting popular words?
It is similar to this question: How can I get the most popular words in a table via mysql?. But, that is MySQL not MSSQL.
Thanks in advance.
James
Here is a good bit on parsing a delimited string into rows:
http://anyrest.wordpress.com/2010/08/13/converting-parsing-delimited-string-column-in-sql-to-rows/
http://www.sqlteam.com/article/parsing-csv-values-into-multiple-rows
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=50648
T-SQL: Opposite to string concatenation - how to split string into multiple records
If you want to parse all words, you can use the space ' ' as your delimiter; then you get a row for each word.
Next you would simply select the result set, GROUPing by the word and aggregating the COUNT.
Order your results and you're there.
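Putting those steps together, here is a minimal sketch. It assumes SQL Server 2016+ for STRING_SPLIT and uses invented table/column names (Posts, Body), so adjust it to your schema:
SELECT w.value AS Word, COUNT(*) AS WordCount
FROM Posts AS p
CROSS APPLY STRING_SPLIT(p.Body, ' ') AS w   -- one row per space-separated word
GROUP BY w.value
ORDER BY COUNT(*) DESC;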
IMO, the design approach is what makes this difficult. Just because you allow users to assign tags does not mean the tags must be stored as a single delimited list of words. You can normalize the structure into something like:
Create Table Posts ( Id ... not null primary key )
Create Table Tags( Id ... not null primary key, Name ... not null Unique )
Create Table PostTags
( PostId ... not null References Posts( Id )
, TagId ... not null References Tags( Id ) )
Now your question becomes trivial:
Select T.Id, T.Name, Count(*) As TagCount
From PostTags As PT
Join Tags As T
On T.Id = PT.TagId
Group By T.Id, T.Name
Order By Count(*) Desc
If you insist on storing tags as delimited values, then the only solution is to split the values on their delimiter by writing a custom Split function and then do your count. At the bottom is an example of a Split function. With it your query would look something like (using a comma delimiter):
Select Tag.Value, Count(*) As TagCount
From Posts As P
Cross Apply dbo.Split( P.Tags, ',' ) As Tag
Group By Tag.Value
Order By Count(*) Desc
Split Function:
Create Function [dbo].[Split]
(
@DelimitedList nvarchar(max)
, @Delimiter nvarchar(2) = ','
)
RETURNS TABLE
AS
RETURN
(
With CorrectedList As
(
Select Case When Left(@DelimitedList, DataLength(@Delimiter)/2) <> @Delimiter Then @Delimiter Else '' End
+ @DelimitedList
+ Case When Right(@DelimitedList, DataLength(@Delimiter)/2) <> @Delimiter Then @Delimiter Else '' End
As List
, DataLength(@Delimiter)/2 As DelimiterLen
)
, Numbers As
(
Select TOP (Coalesce(Len(@DelimitedList),1)) Row_Number() Over ( Order By c1.object_id ) As Value
From sys.objects As c1
Cross Join sys.columns As c2
)
Select CharIndex(@Delimiter, CL.list, N.Value) + CL.DelimiterLen As Position
, Substring (
CL.List
, CharIndex(@Delimiter, CL.list, N.Value) + CL.DelimiterLen
, Case
When CharIndex(@Delimiter, CL.list, N.Value + 1)
- CharIndex(@Delimiter, CL.list, N.Value)
- CL.DelimiterLen < 0 Then Len(CL.List)
Else CharIndex(@Delimiter, CL.list, N.Value + 1)
- CharIndex(@Delimiter, CL.list, N.Value)
- CL.DelimiterLen
End
) As Value
From CorrectedList As CL
Cross Join Numbers As N
Where N.Value < Len(CL.List)
And Substring(CL.List, N.Value, CL.DelimiterLen) = @Delimiter
)
Word or Tag clouds need two fields: a string and a value of how many times that word or string appeared in your collection. You can then pass the results into a tag cloud tool that will display the data as you require.
Not to take away from the previous answers, as they do answer the original challenge. However, I have a simpler solution using two functions (similar to @Thomas's answer), one of which uses regex to "clean" the words.
The two functions are:
dbo.fnStripChars(a, b) --use regex 'b' to cleanse a string 'a'
dbo.fnMakeTableFromList(a, b) --convert a single field 'a' into a tabled list, delimited by 'b'
I then apply them in a single SQL statement, using the TOP n feature to give me the top 10 words I want to pass on to Power BI or some other graphical tool for actually displaying a word or tag cloud.
SELECT TOP 10 b.[words], b.[total]
FROM
(SELECT a.[words], count(*) AS [total]
FROM
(SELECT upper(l.item) AS [words]
FROM dbo.MyTableWithWords AS c
CROSS APPLY dbo.fnMakeTableFromList(dbo.fnStripChars(c.myColumnThatHasTheWords,'[^a-zA-Z ]'),' ') AS l) AS a
GROUP BY a.[words]) AS b
ORDER BY 2 DESC
As you can see, the regex is [^a-zA-Z ], which strips everything that is not an alphabetical character or a space. The space is then used as the delimiter in the make-table function to separate each word individually. I apply a count(*) to give me the number of times each word appears, so I have everything I need to produce the TOP 10 results.
Note that CROSS APPLY is important here so I get only data with actual "words" in each record found. Otherwise it will go through every record with or without words to extract from the column I want.
fnStripChars()
CREATE FUNCTION [dbo].[fnStripChars]
(
@String NVARCHAR(4000),
@MatchExpression VARCHAR(255)
)
RETURNS NVARCHAR(MAX)
AS
BEGIN
SET @MatchExpression = '%' + @MatchExpression + '%'
WHILE PatIndex(@MatchExpression, @String) > 0
SET @String = Stuff(@String, PatIndex(@MatchExpression, @String), 1, '')
RETURN @String
END
fnMakeTableFromList()
CREATE FUNCTION [dbo].[fnMakeTableFromList](
@List VARCHAR(MAX),
@Delimiter CHAR(1))
RETURNS TABLE
AS
RETURN (SELECT Item = CONVERT(VARCHAR, Item)
FROM (SELECT Item = x.i.value('(./text())[1]','varchar(max)')
FROM (SELECT [XML] = CONVERT(XML,'<i>' + REPLACE(@List,@Delimiter,'</i><i>') + '</i>').query('.')) AS a
CROSS APPLY [XML].nodes('i') AS x(i)) AS y
WHERE Item IS NOT NULL);
I've tested this with over 400K records and it's able to come back with my results in under 60 seconds. I think that's reasonable.

Combination of 'LIKE' and 'IN' using t-sql

How can I do this kind of selection:
SELECT *
FROM Street
WHERE StreetName LIKE IN ('% Main Street', 'foo %')
Please don't tell me that I can use OR, because these actually come from a query.
There is no combined LIKE and IN syntax, but you can use LIKE to JOIN onto your query as below.
;WITH Query(Result) As
(
SELECT '% Main Street' UNION ALL
SELECT 'foo %'
)
SELECT DISTINCT s.*
FROM Street s
JOIN Query q ON StreetName LIKE q.Result
Or to use your example in the comments
SELECT DISTINCT s.*
FROM Street s
JOIN CarStreets cs ON s.StreetName LIKE cs.name + '%'
WHERE cs.Streets = 'offroad'
You don't have a lot of choices here.
SELECT * FROM Street Where StreetName LIKE '% Main Street' OR StreetName LIKE 'foo %'
If this is part of an existing, more complicated query (which is the impression I'm getting), you could create a table-valued function that does the checking for you.
SELECT * FROM Street Where StreetName IN (dbo.FindStreetNameFunction('% Main Street|foo %'))
I'd recommend using the simplest solution (the first). If this is nested inside a larger, more complicated query, post it and we'll take a look.
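As a rough sketch of that table-valued-function idea: the function name comes from the snippet above, but the pipe delimiter and the use of STRING_SPLIT (SQL Server 2016+) are my own assumptions, and I join the function's rows rather than putting it inside IN:
CREATE FUNCTION dbo.FindStreetNameFunction (@patterns NVARCHAR(MAX))
RETURNS TABLE
AS
RETURN (SELECT value AS Pattern FROM STRING_SPLIT(@patterns, '|'));
GO
SELECT DISTINCT s.*
FROM Street AS s
JOIN dbo.FindStreetNameFunction('% Main Street|foo %') AS p
ON s.StreetName LIKE p.Pattern;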
I had a similar conundrum but due to only needing to match the start of a string, I changed my 'like' to SUBSTRING as such:
SELECT *
FROM codes
WHERE SUBSTRING(code, 1, 12) IN ('012316963429', '012315667849')
You can resort to dynamic SQL and wrap it all up in a stored procedure.
If you get the LIKE IN param in a string as tokens with a certain separator, like
'% Main Street,foo %,Another%Street'
first you need to create a function that receives a list of LIKE "tokens" and returns a table of them.
CREATE FUNCTION [dbo].[SplitList]
(
@list nvarchar(MAX),
@delim nvarchar(5)
)
RETURNS @splitTable table
(
value nvarchar(50)
)
AS BEGIN
While (Charindex(@delim, @list)>0) Begin
Insert Into @splitTable (value)
Select ltrim(rtrim(Substring(@list, 1, Charindex(@delim, @list)-1)))
Set @list = Substring(@list, Charindex(@delim, @list)+len(@delim), len(@list))
End
Insert Into @splitTable (value) Select ltrim(rtrim(@list))
Return
END
Then in the SP you have the following code
declare
@sql nvarchar(MAX),
@subWhere nvarchar(MAX),
@params nvarchar(MAX)
-- prepare the where sub-clause to cover LIKE IN (...)
-- it will actually generate where sub clause StreetName Like option1 or StreetName Like option2 or ...
set @subWhere = STUFF(
(
--(**)
SELECT ' OR StreetName like ''' + value + '''' FROM dbo.SplitList('% Main Street,foo %,Another%Street', ',')
FOR XML PATH('')
), 1, 4, '')
-- create the dynamic SQL
set @sql ='select * from [Street]
where
(' + @subWhere + ')
-- and any additional query params here, if needed, like
AND StreetMinHouseNumber = @minHouseNumber
AND StreetNumberOfHouses between @minNumberOfHouses and @maxNumberOfHouses'
set @params = ' @minHouseNumber nvarchar(5),
@minNumberOfHouses int,
@maxNumberOfHouses int'
EXECUTE sp_executesql @sql, @params,
@minHouseNumber,
@minNumberOfHouses,
@maxNumberOfHouses
Of course, if you have your LIKE IN parameters in another table or you gather it through a query, you can replace that in line (**)
I believe I can clarify what he is looking for, but I don't know the answer. I'll use my situation to demonstrate. I have a table with a column called "Query" that holds SQL queries. These queries sometimes contain table names from one of my databases. I need to find all Query rows that contain table names from a particular database. So, I can use the following code to get the table names:
SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
I'm trying to use a WHERE IN clause to identify the Query rows that contain the table names I'm interested in:
SELECT *
FROM [DatasourceQuery]
WHERE Query IN LIKE
(
SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
)
I believe the OP is trying to do something like that.
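For what it's worth, the working form of that kind of search is usually a join on LIKE rather than IN. A sketch of one way it could be written, reusing the table and column names mentioned above:
SELECT DISTINCT dq.*
FROM [DatasourceQuery] AS dq
JOIN INFORMATION_SCHEMA.TABLES AS t
ON dq.Query LIKE '%' + t.TABLE_NAME + '%';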
This is my way:
First create a table function:
create function [splitDelimeter](@str nvarchar(max), @delimeter nvarchar(10)='*')
returns @r table(val nvarchar(max))
as
begin
declare @x nvarchar(max)=@str
set @x='<m>'+replace(@x, @delimeter, '</m><m>')+'</m>'
declare @xx xml=cast(@x as xml)
insert @r(val)
SELECT Tbl.Col.value('.', 'nvarchar(max)') id
FROM @xx.nodes('/m') Tbl(Col)
return
end
Then split the search text with your preferred delimiter. After that you can do your select with a left join as below:
declare @s nvarchar(max)='% Main Street*foo %'
select a.* from street a
left join gen.splitDelimeter(@s, '*') b
on a.streetname like b.val
where val is not null
What I did when solving a similar problem was:
SELECT DISTINCT S.*
FROM Street AS S
JOIN (SELECT value FROM String_Split('% Main Street,foo %', N',')) T
ON S.StreetName LIKE T.value;
Which is functionally similar to Martin's answer but a more direct answer to the question.
Note: DISTINCT is used because you might get multiple matches for a single row.

SQL: Find rows where Column contains all of the given words

I have a column EntityName, and I want users to be able to search names by entering words separated by spaces. The space is implicitly considered an 'AND' operator, meaning that the returned rows must contain all of the specified words, not necessarily in the given order.
For example, if we have rows like these:
abba nina pretty balerina
acdc you shook me all night long
sth you are me
dream theater it's all about you
when the user enters: me you, or you me (the results must be equivalent), the result has rows 2 and 3.
I know I can go like:
WHERE Col1 LIKE '%' + word1 + '%'
AND Col1 LIKE '%' + word2 + '%'
but I wanted to know if there's some more optimal solution.
The CONTAINS would require a full text index, which (for various reasons) is not an option.
Maybe Sql2008 has some built-in, semi-hidden solution for these cases?
The only thing I can think of is to write a CLR function that does the LIKE comparisons. This should be many times faster.
Update: Now that I think about it, it makes sense that CLR would not help. Two other ideas:
1 - Try indexing Col1 and do this:
WHERE (Col1 LIKE word1 + '%' or Col1 LIKE '%' + word1 + '%')
AND (Col1 LIKE word2 + '%' or Col1 LIKE '%' + word2 + '%')
Depending on the most common searches (starts with vs. substring), this may offer an improvement.
2 - Add your own full text indexing table where each word is a row in the table. Then you can index properly.
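A minimal sketch of idea 2, the roll-your-own word index; the table and column names are invented, and the step that populates it from EntityName is left out:
-- One row per (entity, word); the leading Word column makes word lookups indexable
CREATE TABLE EntityWords (
EntityId INT NOT NULL,
Word NVARCHAR(100) NOT NULL,
PRIMARY KEY (Word, EntityId)
);

-- "row must contain all the search words" becomes relational division
SELECT EntityId
FROM EntityWords
WHERE Word IN ('me', 'you')
GROUP BY EntityId
HAVING COUNT(DISTINCT Word) = 2;  -- number of words searched for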
Function
CREATE FUNCTION [dbo].[fnSplit] ( @sep CHAR(1), @str VARCHAR(512) )
RETURNS TABLE AS
RETURN (
WITH Pieces(pn, start, stop) AS (
SELECT 1, 1, CHARINDEX(@sep, @str)
UNION ALL
SELECT pn + 1, stop + 1, CHARINDEX(@sep, @str, stop + 1)
FROM Pieces
WHERE stop > 0
)
SELECT
pn AS Id,
SUBSTRING(@str, start, CASE WHEN stop > 0 THEN stop - start ELSE 512 END) AS Data
FROM
Pieces
)
Query
DECLARE @FilterTable TABLE (Data VARCHAR(512))
INSERT INTO @FilterTable (Data)
SELECT DISTINCT S.Data
FROM dbo.fnSplit(' ', 'word1 word2 word3') S -- Contains words
SELECT DISTINCT
T.*
FROM
MyTable T
INNER JOIN @FilterTable F1 ON T.Col1 LIKE '%' + F1.Data + '%'
LEFT JOIN @FilterTable F2 ON T.Col1 NOT LIKE '%' + F2.Data + '%'
WHERE
F2.Data IS NULL
Source: SQL SELECT WHERE field contains words
http://msdn.microsoft.com/en-us/magazine/cc163473.aspx
You're going to end up with a full table scan anyway.
The collation can make a big difference apparently. Kalen Delaney in the book "Microsoft SQL Server 2008 Internals" says:
Collation can make a huge difference when SQL Server has to look at almost all characters in the strings. For instance, look at the following:
SELECT COUNT(*) FROM tbl WHERE longcol LIKE '%abc%'
This may execute 10 times faster or more with a binary collation than a nonbinary Windows collation. And with varchar data, this executes up to seven or eight times faster with a SQL collation than with a Windows collation.
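If you want to experiment with that, one way (my own illustration, reusing the tbl and longcol names from the quote) is to force a binary collation just for the comparison:
SELECT COUNT(*)
FROM tbl
WHERE longcol COLLATE Latin1_General_BIN LIKE '%abc%';  -- Latin1_General_BIN is one example of a binary collation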
WITH Tokens AS(SELECT 'you' AS Token UNION ALL SELECT 'me')
SELECT ...
FROM YourTable AS t
WHERE (SELECT COUNT(*) FROM Tokens WHERE t.Col1 LIKE '%'+Tokens.Token+'%')
=
(SELECT COUNT(*) FROM Tokens) ;
This should ideally be done with the help of Full text search as mentioned above.
BUT,
If you don't have full text configured for your DB, here is a performance-intensive solution for doing a prioritized string search.
-- table to search in
drop table if exists dbo.myTable;
go
CREATE TABLE dbo.myTable
(
myTableId int NOT NULL IDENTITY (1, 1),
code varchar(200) NOT NULL,
description varchar(200) NOT NULL -- this column contains the values we are going to search in
) ON [PRIMARY]
GO
-- function to split space separated search string into individual words
drop function if exists [dbo].[fnSplit];
go
CREATE FUNCTION [dbo].[fnSplit] (@StringInput nvarchar(max),
@Delimiter nvarchar(1))
RETURNS @OutputTable TABLE (
id nvarchar(1000)
)
AS
BEGIN
DECLARE @String nvarchar(100);
WHILE LEN(@StringInput) > 0
BEGIN
SET @String = LEFT(@StringInput, ISNULL(NULLIF(CHARINDEX(@Delimiter, @StringInput) - 1, -1),
LEN(@StringInput)));
SET @StringInput = SUBSTRING(@StringInput,
ISNULL(NULLIF(CHARINDEX(@Delimiter, @StringInput), 0), LEN(@StringInput)) + 1,
LEN(@StringInput));
INSERT INTO @OutputTable (id)
VALUES (@String);
END;
RETURN;
END;
GO
-- this is the search script which can be optionally converted to a stored procedure /function
declare @search varchar(max) = 'infection upper acute genito'; -- enter your search string here
-- the searched string above should give rows containing the following
-- infection in upper side with acute genitointestinal tract
-- acute infection in upper teeth
-- acute genitointestinal pain
if (len(trim(@search)) = 0) -- if search string is empty, just return records ordered alphabetically
begin
select 1 as Priority ,myTableid, code, Description from myTable order by Description
return;
end
declare @splitTable Table(
wordRank int Identity(1,1), -- individual words are assigned priority order (in order of occurrence/position)
word varchar(200)
)
declare @nonWordTable Table( -- table to trim out auxiliary verbs, prepositions etc. from the search
id varchar(200)
)
insert into @nonWordTable values
('of'),
('with'),
('at'),
('in'),
('for'),
('on'),
('by'),
('like'),
('up'),
('off'),
('near'),
('is'),
('are'),
(','),
(':'),
(';')
insert into @splitTable
select id from dbo.fnSplit(@search,' '); -- this function gives you a table with rows containing all the space separated words of the search like in this e.g., the output will be -
-- id
-------------
-- infection
-- upper
-- acute
-- genito
delete s from @splitTable s join @nonWordTable n on s.word = n.id; -- trimming out non-words here
declare @countOfSearchStrings int = (select count(word) from @splitTable); -- count of space separated words for search
declare @highestPriority int = POWER(@countOfSearchStrings,3);
with plainMatches as
(
select myTableid, @highestPriority as Priority from myTable where Description like @search -- exact matches have highest priority
union
select myTableid, @highestPriority-1 as Priority from myTable where Description like @search + '%' -- then with something at the end
union
select myTableid, @highestPriority-2 as Priority from myTable where Description like '%' + @search -- then with something at the beginning
union
select myTableid, @highestPriority-3 as Priority from myTable where Description like '%' + @search + '%' -- then if the word falls somewhere in between
),
splitWordMatches as( -- give each searched word a rank based on its position in the searched string
-- and calculate its char index in the field to search
select myTable.myTableid, (@countOfSearchStrings - s.wordRank) as Priority, s.word,
wordIndex = CHARINDEX(s.word, myTable.Description) from myTable join @splitTable s on myTable.Description like '%'+ s.word + '%'
-- and not exists(select myTableid from plainMatches p where p.myTableId = myTable.myTableId) -- need not look into myTables that have already been found in plainmatches as they are highest ranked
-- this one takes a long time though, so commenting it, will have no impact on the result
),
matchingRowsWithAllWords as (
select myTableid, count(myTableid) as myTableCount from splitWordMatches group by(myTableid) having count(myTableid) = @countOfSearchStrings
)
, -- trim off the CTE here if you don't care about the ordering of words to be considered for priority
wordIndexRatings as( -- reverse the char indexes retrieved above so that words occurring earlier have higher weightage
-- and then normalize them to sequential values
select s.myTableid, Priority, word, ROW_NUMBER() over (partition by s.myTableid order by wordindex desc) as comparativeWordIndex
from splitWordMatches s join matchingRowsWithAllWords m on s.myTableId = m.myTableId
)
,
wordIndexSequenceRatings as ( -- need to do this to ensure that if the same set of words from search string is found in two rows,
-- their sequence in the field value is taken into account for higher priority
select w.myTableid, w.word, (w.Priority + w.comparativeWordIndex + coalesce(sequncedPriority ,0)) as Priority
from wordIndexRatings w left join
(
select w1.myTableid, w1.priority, w1.word, w1.comparativeWordIndex, count(w1.myTableid) as sequncedPriority
from wordIndexRatings w1 join wordIndexRatings w2 on w1.myTableId = w2.myTableId and w1.Priority > w2.Priority and w1.comparativeWordIndex>w2.comparativeWordIndex
group by w1.myTableid, w1.priority,w1.word, w1.comparativeWordIndex
)
sequencedPriority on w.myTableId = sequencedPriority.myTableId and w.Priority = sequencedPriority.Priority
),
prioritizedSplitWordMatches as ( -- this calculates the cumulative priority for a field value
select w1.myTableId, sum(w1.Priority) as OverallPriority from wordIndexSequenceRatings w1 join wordIndexSequenceRatings w2 on w1.myTableId = w2.myTableId
where w1.word <> w2.word group by w1.myTableid
),
completeSet as (
select myTableid, priority from plainMatches -- get plain matches which should be highest ranked
union
select myTableid, OverallPriority as priority from prioritizedSplitWordMatches -- get ranked split word matches (which are ordered based on word rank in search string and sequence)
),
maximizedCompleteSet as( -- set the priority of a field value = maximum priority for that field value
select myTableid, max(priority) as Priority from completeSet group by myTableId
)
select priority, myTable.myTableid , code, Description from maximizedCompleteSet m join myTable on m.myTableId = myTable.myTableId
order by Priority desc, Description -- order by priority desc to get highest rated items on top
--offset 0 rows fetch next 50 rows only -- optional paging