Is there a possibility to avoid multiple "OR" statements in Microsoft SQL? - sql

I have a query that has to filter results from a text field based on certain keywords used in the text. Currently the SQL statement looks like the below.
and (name like '%Abc%') or (name like '%XYZ%') or (name like '%CSV%')...
Is there a way to avoid multiple OR statements and achieve the same results?

You could put your filter keywords into a table or temp table and query them like this:
select a.*
from table_you_are_searching a
inner join temp_filter_table b
on charindex(b.filtercolumn,a.searchcolumn) <> 0
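For example, the filter table could be a temp table populated up front (the names below are placeholders, not part of the original answer):
-- Hypothetical setup for the filter table used in the join above
CREATE TABLE #temp_filter_table (filtercolumn VARCHAR(100));
INSERT INTO #temp_filter_table (filtercolumn) VALUES ('Abc'), ('XYZ'), ('CSV');
Any row whose searchcolumn contains at least one of the keywords is then returned by the join.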

A slightly more shorthand way of doing this if you have a large number of different patterns is to use EXISTS and a table value constructor:
SELECT *
FROM T
WHERE EXISTS
( SELECT 1
FROM (VALUES ('abc'), ('xyz'), ('csv')) m (match)
WHERE T.Name LIKE '%' + m.Match + '%'
);
A similar approach can be applied with table valued parameters. Since this is usually a requirement where people want to pass a variable number of search terms for a match it can be quite a useful approach:
CREATE TYPE dbo.ListOfString AS TABLE (value VARCHAR(MAX));
Then a procedure can take this type:
CREATE PROCEDURE dbo.GetMatches @List dbo.ListOfString READONLY
AS
BEGIN
SELECT *
FROM T
WHERE EXISTS
( SELECT 1
FROM @List AS l
WHERE T.Name LIKE '%' + l.value + '%'
);
END
Then you can call this procedure:
DECLARE @T dbo.ListOfString;
INSERT @T VALUES ('abc'), ('xyz'), ('csv');
EXECUTE dbo.GetMatches @T;

Just to give you another option, you could also try this: an IN check mixed with PATINDEX. Note that 0 not in (...) only returns rows where none of the PATINDEX calls return 0, i.e. rows that contain all of the patterns; to reproduce the any-pattern (OR) behaviour, check that at least one PATINDEX result is non-zero instead:
Select *
from tbl
Where 0 not in (PATINDEX('%Abc%', name), PATINDEX('%XYZ%', name), PATINDEX('%CSV%', name))

Related

How to match an array of substrings to db [duplicate]

In SQL I (sadly) often have to use "LIKE" conditions due to databases that violate nearly every rule of normalization. I can't change that right now. But that's irrelevant to the question.
Further, I often use conditions like WHERE something in (1,1,2,3,5,8,13,21) for better readability and flexibility of my SQL statements.
Is there any possible way to combine these two things without writing complicated sub-selects?
I want something as easy as WHERE something LIKE ('bla%', '%foo%', 'batz%') instead of this:
WHERE something LIKE 'bla%'
OR something LIKE '%foo%'
OR something LIKE 'batz%'
I'm working with SQL Server and Oracle here but I'm interested if this is possible in any RDBMS at all.
There is no combination of LIKE and IN in SQL, much less in TSQL (SQL Server) or PLSQL (Oracle). Part of the reason for that is that Full Text Search (FTS) is the recommended alternative.
Both Oracle and SQL Server FTS implementations support the CONTAINS keyword, but the syntax is still slightly different:
Oracle:
WHERE CONTAINS(t.something, 'bla OR foo OR batz', 1) > 0
SQL Server:
WHERE CONTAINS(t.something, '"bla*" OR "foo*" OR "batz*"')
The column you are querying must be full-text indexed.
Reference:
Building Full-Text Search Applications with Oracle Text
Understanding SQL Server Full-Text
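For SQL Server, setting up the full-text index mentioned above might look roughly like this sketch (the catalog, table, and key index names are assumptions for illustration):
-- Sketch: create a full-text catalog and index the searched column
CREATE FULLTEXT CATALOG ftCatalog AS DEFAULT;
CREATE FULLTEXT INDEX ON dbo.SomeTable (something)
KEY INDEX PK_SomeTable -- the table's unique key index
WITH STOPLIST = SYSTEM;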
If you want to make your statement easily readable, then you can use REGEXP_LIKE (available from Oracle version 10 onwards).
An example table:
SQL> create table mytable (something)
2 as
3 select 'blabla' from dual union all
4 select 'notbla' from dual union all
5 select 'ofooof' from dual union all
6 select 'ofofof' from dual union all
7 select 'batzzz' from dual
8 /
Table created.
The original syntax:
SQL> select something
2 from mytable
3 where something like 'bla%'
4 or something like '%foo%'
5 or something like 'batz%'
6 /
SOMETH
------
blabla
ofooof
batzzz
3 rows selected.
And a simple looking query with REGEXP_LIKE
SQL> select something
2 from mytable
3 where regexp_like (something,'^bla|foo|^batz')
4 /
SOMETH
------
blabla
ofooof
batzzz
3 rows selected.
BUT ...
I would not recommend it myself due to the not-so-good performance. I'd stick with the several LIKE predicates. So the examples were just for fun.
you're stuck with the
WHERE something LIKE 'bla%'
OR something LIKE '%foo%'
OR something LIKE 'batz%'
unless you populate a temp table (include the wild cards in with the data) and join like this:
FROM YourTable y
INNER JOIN YourTempTable t On y.something LIKE t.something
try it out (using SQL Server syntax):
declare @x table (x varchar(10))
declare @y table (y varchar(10))
insert @x values ('abcdefg')
insert @x values ('abc')
insert @x values ('mnop')
insert @y values ('%abc%')
insert @y values ('%b%')
select distinct *
FROM @x x
WHERE x.x LIKE '%abc%'
or x.x LIKE '%b%'
select distinct x.*
FROM @x x
INNER JOIN @y y On x.x LIKE y.y
OUTPUT:
x
----------
abcdefg
abc
(2 row(s) affected)
x
----------
abc
abcdefg
(2 row(s) affected)
With PostgreSQL there is the ANY or ALL form:
WHERE col LIKE ANY( subselect )
or
WHERE col LIKE ALL( subselect )
where the subselect returns exactly one column of data.
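For example, with a separate pattern table this might look like the following (table and column names are illustrative only):
SELECT *
FROM some_table
WHERE col LIKE ANY (SELECT pattern FROM patterns);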
Another solution, should work on any RDBMS:
WHERE EXISTS (SELECT 1
FROM (SELECT 'bla%' pattern FROM dual UNION ALL
SELECT '%foo%' FROM dual UNION ALL
SELECT 'batz%' FROM dual)
WHERE something LIKE pattern)
The inner select can be replaced by another source of patterns like a table (or a view) in this way:
WHERE EXISTS (SELECT 1
FROM table_of_patterns t
WHERE something LIKE t.pattern)
table_of_patterns should contain at least a column pattern, and can be populated like this:
INSERT INTO table_of_patterns(pattern) VALUES ('bla%');
INSERT INTO table_of_patterns(pattern) VALUES ('%foo%');
INSERT INTO table_of_patterns(pattern) VALUES ('batz%');
I would suggest using a TableValue user function if you'd like to encapsulate the Inner Join or temp table techniques shown above. This would allow it to read a bit more clearly.
After using the split function defined at: http://www.logiclabz.com/sql-server/split-function-in-sql-server-to-break-comma-separated-strings-into-table.aspx
we can write the following based on a table I created called "Fish" (int id, varchar(50) Name)
SELECT Fish.* from Fish
JOIN dbo.Split('%ass,%e%',',') as Splits
on Name like Splits.items -- items is the name of the output column from the split function.
Outputs
1 Bass
2 Pike
7 Angler
8 Walleye
I'm working with SQL Server and Oracle here but I'm interested if this is possible in any RDBMS at all.
Teradata supports LIKE ALL/ANY syntax:
ALL: the value must match every pattern in the list.
ANY: the value must match at least one pattern in the list.
┌──────────────────────────────┬────────────────────────────────────┐
│ THIS expression … │ IS equivalent to this expression … │
├──────────────────────────────┼────────────────────────────────────┤
│ x LIKE ALL ('A%','%B','%C%') │ x LIKE 'A%' │
│ │ AND x LIKE '%B' │
│ │ AND x LIKE '%C%' │
│ │ │
│ x LIKE ANY ('A%','%B','%C%') │ x LIKE 'A%' │
│ │ OR x LIKE '%B' │
│ │ OR x LIKE '%C%' │
└──────────────────────────────┴────────────────────────────────────┘
EDIT:
jOOQ version 3.12.0 supports that syntax:
Add synthetic [NOT] LIKE ANY and [NOT] LIKE ALL operators
A lot of times, SQL users would like to be able to combine LIKE and IN predicates, as in:
SELECT *
FROM customer
WHERE last_name [ NOT ] LIKE ANY ('A%', 'E%') [ ESCAPE '!' ]
The workaround is to manually expand the predicate to the equivalent
SELECT *
FROM customer
WHERE last_name LIKE 'A%'
OR last_name LIKE 'E%'
jOOQ could support such a synthetic predicate out of the box.
PostgreSQL LIKE/ILIKE ANY (ARRAY[]):
SELECT *
FROM t
WHERE c LIKE ANY (ARRAY['A%', '%B']);
SELECT *
FROM t
WHERE c LIKE ANY ('{"Do%", "%at"}');
db<>fiddle demo
Snowflake also supports LIKE ANY/LIKE ALL matching:
LIKE ANY/ALL
Allows case-sensitive matching of strings based on comparison with one or more patterns.
<subject> LIKE ANY (<pattern1> [, <pattern2> ... ] ) [ ESCAPE <escape_char> ]
Example:
SELECT *
FROM like_example
WHERE subject LIKE ANY ('%Jo%oe%','T%e')
-- WHERE subject LIKE ALL ('%Jo%oe%','J%e')
Use an inner join instead:
SELECT ...
FROM SomeTable
JOIN
(SELECT 'bla%' AS Pattern
UNION ALL SELECT '%foo%'
UNION ALL SELECT 'batz%'
UNION ALL SELECT 'abc'
) AS Patterns
ON SomeTable.SomeColumn LIKE Patterns.Pattern
One approach would be to store the conditions in a temp table (or table variable in SQL Server) and join to that like this:
SELECT t.SomeField
FROM YourTable t
JOIN #TempTableWithConditions c ON t.something LIKE c.ConditionValue
I have a simple solution that works in PostgreSQL at least, using like any followed by a list of patterns. Here is an example, looking at identifying some antibiotics in a list:
select *
from database.table
where lower(drug_name) like any ('{%cillin%,%cyclin%,%xacin%,%mycine%,%cephal%}')
You can even try this:
Function
CREATE FUNCTION [dbo].[fn_Split](@text varchar(8000), @delimiter varchar(20))
RETURNS @Strings TABLE
(
position int IDENTITY PRIMARY KEY,
value varchar(8000)
)
AS
BEGIN
DECLARE @index int
SET @index = -1
WHILE (LEN(@text) > 0)
BEGIN
SET @index = CHARINDEX(@delimiter , @text)
IF (@index = 0) AND (LEN(@text) > 0)
BEGIN
INSERT INTO @Strings VALUES (@text)
BREAK
END
IF (@index > 1)
BEGIN
INSERT INTO @Strings VALUES (LEFT(@text, @index - 1))
SET @text = RIGHT(@text, (LEN(@text) - @index))
END
ELSE
SET @text = RIGHT(@text, (LEN(@text) - @index))
END
RETURN
END
Query
select * from my_table inner join (select value from dbo.fn_Split('ABC,MOP',','))
as split_table on my_table.column_name like '%'+split_table.value+'%';
Starting with 2016, SQL Server includes a STRING_SPLIT function. I'm using SQL Server v17.4 and I got this to work for me:
DECLARE @dashboard nvarchar(50)
SET @dashboard = 'P1%,P7%'
SELECT * from Project p
JOIN STRING_SPLIT(@dashboard, ',') AS sp ON p.ProjectNumber LIKE sp.value
Maybe you can think of a combination like this:
SELECT *
FROM table t INNER JOIN
(
SELECT * FROM (VALUES('bla'),('foo'),('batz')) AS list(col)
) l ON t.column LIKE '%'+l.Col+'%'
If you have defined full text index for your target table then you may use this alternative:
SELECT *
FROM table t
WHERE CONTAINS(t.column, '"bla*" OR "foo*" OR "batz*"')
I was also wondering about something like that. I just tested using a combination of SUBSTRING and IN and it is an effective solution for this kind of problem. Try the query below:
Select * from TB_YOUR T1 Where SUBSTRING(T1.Something, 1,3) IN ('bla', 'foo', 'batz')
In Oracle you can use a collection in the following way:
WHERE EXISTS (SELECT 1
FROM TABLE(ku$_vcnt('bla%', '%foo%', 'batz%'))
WHERE something LIKE column_value)
Here I have used a predefined collection type ku$_vcnt, but you can declare your own one like this:
CREATE TYPE my_collection AS TABLE OF VARCHAR2(4000);
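The query then stays the same, just with your own collection in place of the predefined one (a sketch, assuming the type above has been created):
WHERE EXISTS (SELECT 1
FROM TABLE(my_collection('bla%', '%foo%', 'batz%'))
WHERE something LIKE column_value)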
I may have a solution for this, although it will only work in SQL Server 2008 as far as I know. I discovered that you can use the row constructor described in https://stackoverflow.com/a/7285095/894974 to join a 'fictional' table using a like clause.
It sounds more complex than it is, look:
SELECT [name]
,[userID]
,[name]
,[town]
,[email]
FROM usr
join (values ('hotmail'),('gmail'),('live')) as myTable(myColumn) on email like '%'+myTable.myColumn+'%'
This will result in all users with an email address like the ones provided in the list.
Hope it's of use to anyone. The problem had been bothering me a while.
For Sql Server you can resort to Dynamic SQL.
Most of the time in such situations you have the parameter of IN clause based on some data from database.
The example below is a little "forced", but this can match various real cases found in legacy databases.
Suppose you have table Persons where person names are stored in a single field PersonName as FirstName + ' ' + LastName.
You need to select all persons from a list of first names, stored in field NameToSelect in table NamesToSelect, plus some additional criteria (like filtered on gender, birth date, etc)
You can do it as follows
-- @gender is nchar(1), @birthDate is date
declare
@sql nvarchar(MAX),
@subWhere nvarchar(MAX),
@params nvarchar(MAX)
-- prepare the where sub-clause to cover LIKE IN (...)
-- it will actually generate where clause PersonName Like 'param1%' or PersonName Like 'param2%' or ...
set @subWhere = STUFF(
(
SELECT ' OR PersonName like ''' + [NameToSelect] + '%'''
FROM [NamesToSelect] t FOR XML PATH('')
), 1, 4, '')
-- create the dynamic SQL
set @sql ='select
PersonName
,Gender
,BirthDate -- and other fields here
from [Persons]
where
Gender = @gender
AND BirthDate = @birthDate
AND (' + @subWhere + ')'
set @params = ' @gender nchar(1),
@birthDate Date'
EXECUTE sp_executesql @sql, @params,
@gender,
@birthDate
If you are using MySQL the closest you can get is full-text search:
Full-Text Search, MySQL Documentation
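A minimal sketch of what that could look like in MySQL, assuming a FULLTEXT index already exists on the column (names are placeholders, and note that full-text search matches whole words rather than arbitrary substrings):
-- Requires something like: ALTER TABLE t ADD FULLTEXT INDEX ft_something (something);
SELECT *
FROM t
WHERE MATCH(something) AGAINST('bla foo batz' IN BOOLEAN MODE);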
This works for comma separated values
DECLARE @ARC_CHECKNUM VARCHAR(MAX)
SET @ARC_CHECKNUM = 'ABC,135,MED,ASFSDFSF,AXX'
SELECT ' AND (a.arc_checknum LIKE ''%' + REPLACE(@ARC_CHECKNUM,',','%'' OR a.arc_checknum LIKE ''%') + '%'')'
Evaluates to:
AND (a.arc_checknum LIKE '%ABC%' OR a.arc_checknum LIKE '%135%' OR a.arc_checknum LIKE '%MED%' OR a.arc_checknum LIKE '%ASFSDFSF%' OR a.arc_checknum LIKE '%AXX%')
If you want it to use indexes, you must omit the first '%' character.
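For example, a prefix-only version of the generated conditions can use an index on the column, at the cost of no longer matching keywords in the middle of the string:
AND (a.arc_checknum LIKE 'ABC%' OR a.arc_checknum LIKE '135%' OR a.arc_checknum LIKE 'MED%')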
In the Oracle RDBMS you can achieve this behavior using the REGEXP_LIKE function.
The following code will test if the string three is present in the list expression one|two|three|four|five (in which the pipe "|" symbol means OR logic operation).
SELECT 'Success !!!' result
FROM dual
WHERE REGEXP_LIKE('three', 'one|two|three|four|five');
RESULT
---------------------------------
Success !!!
1 row selected.
Preceding expression is equivalent to:
three=one OR three=two OR three=three OR three=four OR three=five
So it will succeed.
On the other hand, the following test will fail.
SELECT 'Success !!!' result
FROM dual
WHERE REGEXP_LIKE('ten', 'one|two|three|four|five');
no rows selected
There are several functions related to regular expressions (REGEXP_*) available in Oracle since the 10g version. If you are an Oracle developer and interested in this topic, this should be a good beginning: Using Regular Expressions with Oracle Database.
How about an answer like this:
SELECT * FROM table WHERE something LIKE ('bla% %foo% batz%')
In Oracle, no problem.
In Teradata you can use LIKE ANY ('%ABC%','%PQR%','%XYZ%'). Below is an example which has produced the same results for me
--===========
-- CHECK ONE
--===========
SELECT *
FROM Random_Table A
WHERE (Lower(A.TRAN_1_DSC) LIKE ('%american%express%centurion%bank%')
OR Lower(A.TRAN_1_DSC) LIKE ('%bofi%federal%bank%')
OR Lower(A.TRAN_1_DSC) LIKE ('%american%express%bank%fsb%'))
;
--===========
-- CHECK TWO
--===========
SELECT *
FROM Random_Table A
WHERE Lower(A.TRAN_1_DSC) LIKE ANY
('%american%express%centurion%bank%',
'%bofi%federal%bank%',
'%american%express%bank%fsb%')
Sorry for dredging up an old post, but it has a lot of views. I faced a similar problem this week and came up with this pattern:
declare @example table ( sampletext varchar( 50 ) );
insert @example values
( 'The quick brown fox jumped over the lazy dog.' ),
( 'Ask not what your country can do for you.' ),
( 'Cupcakes are the new hotness.' );
declare @filter table ( searchtext varchar( 50 ) );
insert @filter values
( 'lazy' ),
( 'hotness' ),
( 'cupcakes' );
-- Expect to get rows 1 and 3, but no duplication from Cupcakes and Hotness
select *
from @example e
where exists ( select * from @filter f where e.sampletext like '%' + f.searchtext + '%' )
Exists() works a little better than join, IMO, because it just tests each record in the set, but doesn't cause duplication if there are multiple matches.
This is possible in Postgres using like or ilike and any or all with array. This is an example that worked for me using Postgres 9:
select id, name from tb_organisation where name ilike any (array['%wembley%', '%south%']);
And this prints out:
id | name
-----+------------------------
433 | South Tampa Center
613 | South Pole
365 | Bromley South
796 | Wembley Special Events
202 | Southall
111 | Wembley Inner Space
In T-SQL, this option works but it is not very fast:
CREATE FUNCTION FN_LIKE_IN (@PROC NVARCHAR(MAX), @ITENS NVARCHAR(MAX)) RETURNS NVARCHAR(MAX) AS BEGIN
--Search an item with LIKE inside a list delimited by "," Vathaire 11/06/2019
DECLARE @ITEM NVARCHAR(MAX)
WHILE CHARINDEX(',', @ITENS) > 0 BEGIN
SET @ITEM = LEFT(@ITENS, CHARINDEX(',', @ITENS) - 1)
--IF @ITEM LIKE @PROC
IF @PROC LIKE @ITEM
RETURN @PROC --@ITEM --1
ELSE
SET @ITENS = STUFF(@ITENS, 1, LEN(@ITEM) + 1, '')
END
IF @PROC LIKE @ITENS RETURN @PROC --@ITEM --1
RETURN NULL --0
END
Query:
SELECT * FROM SYS.PROCEDURES
WHERE DBO.FN_LIKE_IN(NAME, 'PRC%,SP%') IS NOT NULL
You can do this dynamically for a large number of elements, at the expense of performance, but it works.
DECLARE @val nvarchar(256),
@list nvarchar(max) = 'one,two,three,ten,five';
CREATE table #table (FIRST_NAME nvarchar(512), LAST_NAME nvarchar(512));
CREATE table #student (FIRST_NAME nvarchar(512), LAST_NAME nvarchar(512), EMAIL
nvarchar(512));
INSERT INTO #student (FIRST_NAME, LAST_NAME, EMAIL)
SELECT 'TEST', ' redOne' ,'test.redOne@toto.com' UNION ALL
SELECT 'student', ' student' ,'student@toto.com' UNION ALL
SELECT 'student', ' two' ,'student.two@toto.com' UNION ALL
SELECT 'hello', ' ONE TWO THREE' ,'student.two@toto.com'
DECLARE check_cursor CURSOR FOR select value from STRING_SPLIT(@list,',')
OPEN check_cursor
FETCH NEXT FROM check_cursor INTO @val
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT @val
IF EXISTS (select * from #student where REPLACE(FIRST_NAME, ' ','')
like '%' + @val + '%' OR REPLACE(LAST_NAME, ' ','') like '%' + @val + '%')
BEGIN
INSERT INTO #table (FIRST_NAME, LAST_NAME )
SELECT TOP 1 FIRST_NAME, LAST_NAME from #student where
REPLACE(FIRST_NAME, ' ','') like '%' + @val + '%' OR REPLACE(LAST_NAME, ' ','')
like '%' + @val + '%'
END;
FETCH NEXT FROM check_cursor INTO @val
END
CLOSE check_cursor;
DEALLOCATE check_cursor;
SELECT * FROM #table;
DROP TABLE #table;
DROP TABLE #student;
Use a cursor in SQL Server and execute the update for every value.
Sample table:
create table Gastos_ConciliacionExcluida(IdRegistro int identity(1,1), MascaraTexto nvarchar(50), Activa bit default 1, Primary key (IDRegistro))
insert into Gastos_ConciliacionExcluida(MascaraTexto) Values ('%Reembolso%')
alter procedure SP_Gastos_ConciliacionExcluidaProcesar
as
declare cur cursor for select MascaraTexto From Gastos_ConciliacionExcluida where Activa=1
declare @Txt nvarchar(50)
open cur
fetch next from cur into @Txt
while @@Fetch_Status = 0
begin
update Gastos_BancoRegistro set PresumibleNoConciliable = 1
where Concepto like @Txt
fetch next from cur into @Txt
end
close cur
deallocate cur
do this
WHERE something + '%' in ('bla', 'foo', 'batz')
OR '%' + something + '%' in ('tra', 'la', 'la')
or
WHERE something + '%' in (select col from table where ....)

SQL testing for data in column using LIKE

I have a query I am writing to test for permission access. One of the columns I have to look through is in the format of ABC|DEF|GHI|JKL and I need to see if there is a value that exists in that column.
The problem is, there are multiple things I need to test for which is causing my query to throw an error due to the subquery returning multiple values.
-- Default permission
DECLARE @hasAccess BIT = 0;
-- Define our temp data
DECLARE @managers AS TABLE(personnelID VARCHAR(10))
-- Insert our data for the manager test logic
INSERT INTO @managers( personnelID ) VALUES ( 'ABC' )
INSERT INTO @managers( personnelID ) VALUES ( 'XYZ' )
SELECT *
FROM Employees AS e
WHERE e.QID = @QID
AND e.PersonnelIDList LIKE '%' + (SELECT personnelID FROM @managers) + '%'
How can I test whether any of the values in @managers exists in the column value (example column value: ABC|DEF|GHI|JKL)?
In a LIKE clause, you're only allowed to have the subquery return a single result.
Join to your @managers table and use the LIKE within the join clause.
-- Default permission
DECLARE @hasAccess BIT = 0;
-- Define our temp data
DECLARE @managers AS TABLE(personnelID VARCHAR(10))
-- Insert our data for the manager test logic
INSERT INTO @managers( personnelID ) VALUES ( 'ABC' )
INSERT INTO @managers( personnelID ) VALUES ( 'XYZ' )
SELECT
e.*
FROM
Employees AS e
inner join @managers as m
on e.PersonnelIDList LIKE '%' + m.personnelID + '%'
WHERE
e.QID = @QID
You should not be storing lists as strings. This is just a bad idea. There should be a separate table instead.
Sometimes, we are stuck with other people's really bad design decisions. In this case, you can use an exists subquery:
SELECT e.*
FROM Employees e
WHERE e.QID = @QID AND
EXISTS (SELECT 1
FROM @managers m
WHERE '|' + e.PersonnelIDList + '|' LIKE '%|' + m.personnelID + '|%'
);

SQL query for finding all tables ROWS with two columns in them

I have a database with around 100+ tables; about half of the tables have column A and column B.
My question is: can I query all tables that have these columns with specific values, e.g.
SELECT * FROM DATABASE
WHERE
EACHTABLE HAS COLUMN A = 21 //only if table has columns and then values
AND
COLUMN B = 13
I am not sure exactly how I would do it; nothing is coming up on Google either.
You can use the undocumented MS stored procedure sp_MSforeachtable, if you fancy living life recklessly:
create table T1 (
ColumnA int not null,
ColumnB int not null
)
go
create table T2 (
ColumnA int not null,
Column2 int not null
)
go
create table T3 (
Column1 int not null,
ColumnB int not null
)
go
create table T4 (
ColumnA int not null,
ColumnB int not null
)
go
insert into T1 values (1,2);
insert into T2 values (3,4);
insert into T3 values (5,6);
insert into T4 values (7,8);
go
create table #Results (TableName sysname,ColumnA int,ColumnB int)
exec sp_MSforeachtable 'insert into #Results select ''?'',ColumnA,ColumnB from ?',
@whereand = ' and syso.object_id in (select object_id from sys.columns where name=''ColumnA'') and syso.object_id in (select object_id from sys.columns where name=''ColumnB'')'
select * from #Results
drop table #Results
Result:
TableName ColumnA ColumnB
------------------------------------- ----------- -----------
[dbo].[T1] 1 2
[dbo].[T4] 7 8
By default, sp_MSforeachtable will, as its name implies, perform the same task for each table in the database. However, one optional parameter to this procedure, called @whereand, can be used to modify the WHERE clause of the internal query that enumerates the tables in the database. It helps to know that this internal query has already established two aliases to some of the system views. o is an alias for sysobjects (the legacy system view). syso is an alias for sys.all_objects (a more modern system view).
Once sp_MSforeachtable has decided which tables to run against, it will execute the query given to it as its first parameter. But, it will replace ? with the schema and table name (? is the default replacement character. This can be changed as needed)
In this case, I chose to create a temp table, then have each selected table store its results into this temp table, and after sp_MSforeachtable has finished running, to select the combined results out with no further processing.
There is a similar (and similarly undocumented) procedure called sp_MSforeachdb which will access each user database on the server. These can even be combined (although you have to be careful about doubling up ' quote characters at times). However, there's no equivalent sp_MSforeachcolumn.
Try this:
select t.name from sys.objects t inner join sys.columns c
on t.name=OBJECT_NAME(c.object_id)
where t.type='U'
and c.name in('col1','col2')
group by t.name
having COUNT(*) = 2
order by 1
Then you just loop through all the tables and find the values for these columns.
Like this:
Declare @out TABLE(tblname varchar(100))
if exists(select * from tbl1 where col1='21' and col2='22')
BEGIN
INSERT INTO @out
select 'tbl1'
END
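A fuller sketch of that loop, building dynamic SQL from the metadata query (the column names and the results table below are assumptions for illustration):
CREATE TABLE #found (tblname sysname);
DECLARE @sql nvarchar(max) = N'';
-- one INSERT statement per table that has both columns
SELECT @sql = @sql + N'INSERT INTO #found SELECT DISTINCT ''' + t.name + N''' FROM ' + QUOTENAME(t.name) + N' WHERE col1 = 21 AND col2 = 13;'
FROM sys.objects t
JOIN sys.columns c ON c.object_id = t.object_id
WHERE t.type = 'U' AND c.name IN ('col1','col2')
GROUP BY t.name
HAVING COUNT(*) = 2;
EXEC sp_executesql @sql;
SELECT * FROM #found;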
You can try something like this using a dynamic query:
select 'select * from '+table_name+' where '+column_name+' = 21'
from information_schema.columns where column_name = 'A'
I suggest using two steps:
First, find out all tables in your database that have these two columns and use that as a temporary derived table. Since I am not an expert in SQL Server 2008, I recommend having a look at the documentation.
The expression might look like this:
SELECT table_name
FROM information_schema.tables sdbt
WHERE 'column a' IN
(SELECT column_name
FROM information_schema.columns col
WHERE col.table_name = sdbt.table_name)
Second, use an expression to filter the results according to your demanded values.
This command should do the trick in one go, only for column A, amend accordingly to include any other columns you need:
exec sp_MSforeachtable
@command1=N'SELECT * FROM ? WHERE A = 21',
@whereand=' and o.name IN (SELECT TABLE_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE COLUMN_NAME = ''A'') '

SQL - Joining tables where one of the columns is a list

I'm trying to join two tables. The problem I'm having is that one of the columns I'm trying to join on is a list.
So is it possible to join two tables using "IN" rather than "="? Along the lines of
SELECT ID
FROM tableA INNER JOIN
tableB ON tableB.misc IN tableA.misc
WHERE tableB.miscTitle = 'help me please'
tableB.misc = 1
tableA.misc = 1,2,3
Thanks in advance
No, what you want is not possible without a major workaround. DO NOT STORE ITEMS YOU WANT TO JOIN TO IN A LIST! In fact, a comma delimited list should almost never be stored in a database. It is only acceptable if this is note-type information that will never need to be used in a query where clause or join.
If you are stuck with this horrible design, then you will have to parse out the list to a temp table or table variable and then join through that.
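On SQL Server 2016+ one way to do that parsing inline is STRING_SPLIT; a sketch using the table and column names from the question (adjust types as needed):
SELECT a.ID
FROM tableA a
CROSS APPLY STRING_SPLIT(a.misc, ',') s
INNER JOIN tableB b ON b.misc = s.value
WHERE b.miscTitle = 'help me please'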
Try this:
SELECT ID
FROM tableA INNER JOIN
tableB ON ',' + TableA.misc + ',' like '%,' + cast(tableB.misc as varchar) + ',%'
WHERE tableB.miscTitle = 'help me please'
A string parsing function like the one found here together with a CROSS APPLY should do the trick.
CREATE FUNCTION [dbo].[fnParseStringTSQL] (@string NVARCHAR(MAX),@separator NCHAR(1))
RETURNS @parsedString TABLE (string NVARCHAR(MAX))
AS
BEGIN
DECLARE @position int
SET @position = 1
SET @string = @string + @separator
WHILE charindex(@separator,@string,@position) <> 0
BEGIN
INSERT into @parsedString
SELECT substring(@string, @position, charindex(@separator,@string,@position) - @position)
SET @position = charindex(@separator,@string,@position) + 1
END
RETURN
END
go
declare @tableA table (
id int,
misc char(1)
)
declare @tableB table (
misc varchar(10),
miscTitle varchar(20)
)
insert into @tableA
(id, misc)
values
(1, '1')
insert into @tableB
(misc, miscTitle)
values
('1,2,3','help me please')
select id
from @tableB b
cross apply dbo.fnParseStringTSQL(b.misc,',') p
inner join @tableA a
on a.misc = p.string
where b.miscTitle = 'help me please'
drop function dbo.fnParseStringTSQL
Is ID also in tableB? If so, you can reverse the tables, and run the IN backwards, in the WHERE section, like so:
SELECT ID
FROM tableB
WHERE tableB.miscTitle = 'help me please'
AND tableB.misc IN (SELECT tableA.misc FROM tableA)
If it's not, you could use a cross join to get all combinations of rows between the tables, then remove the rows that don't obey the IN. WARNING: This will become a huge join if the tables are large. Example:
SELECT ID
FROM tableA
CROSS JOIN tableB
WHERE tableB.miscTitle = 'help me please'
AND tableB.misc IN tableA.misc
EDIT: didn't realize "in a list" meant a comma-delimited VARCHAR. SQL's IN won't work for that, nor should you ever store joinable data that way in a database.

Define variable to use with IN operator (T-SQL)

I have a Transact-SQL query that uses the IN operator. Something like this:
select * from myTable where myColumn in (1,2,3,4)
Is there a way to define a variable to hold the entire list "(1,2,3,4)"? How should I define it?
declare @myList {data type}
set @myList = (1,2,3,4)
select * from myTable where myColumn in @myList
DECLARE @MyList TABLE (Value INT)
INSERT INTO @MyList VALUES (1)
INSERT INTO @MyList VALUES (2)
INSERT INTO @MyList VALUES (3)
INSERT INTO @MyList VALUES (4)
SELECT *
FROM MyTable
WHERE MyColumn IN (SELECT Value FROM @MyList)
DECLARE @mylist TABLE (Id int)
INSERT INTO @mylist
SELECT id FROM (VALUES (1),(2),(3),(4),(5)) AS tbl(id)
SELECT * FROM Mytable WHERE theColumn IN (select id from @mylist)
There are a few ways to tackle dynamic CSV lists for TSQL queries:
1) Using an inner select
SELECT * FROM myTable WHERE myColumn in (SELECT id FROM myIdTable WHERE id > 10)
2) Using dynamically concatenated TSQL
DECLARE @sql nvarchar(max)
declare @list varchar(256)
select @list = '1,2,3'
SELECT @sql = 'SELECT * FROM myTable WHERE myColumn in (' + @list + ')'
exec sp_executeSQL @sql
3) A possible third option is table variables. If you have SQL Server 2005 you can use a table variable. If you're on SQL Server 2008 you can even pass whole table variables in as a parameter to stored procedures and use them in a join or as a subselect in the IN clause.
DECLARE @list TABLE (Id INT)
INSERT INTO @list(Id)
SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
SELECT
*
FROM
myTable
JOIN @list l ON myTable.myColumn = l.Id
SELECT
*
FROM
myTable
WHERE
myColumn IN (SELECT Id FROM @list)
Use a function like this:
CREATE function [dbo].[list_to_table] (@list varchar(4000))
returns @tab table (item varchar(100))
begin
if CHARINDEX(',',@list) = 0 or CHARINDEX(',',@list) is null
begin
insert into @tab (item) values (@list);
return;
end
declare @c_pos int;
declare @n_pos int;
declare @l_pos int;
set @c_pos = 0;
set @n_pos = CHARINDEX(',',@list,@c_pos);
while @n_pos > 0
begin
insert into @tab (item) values (SUBSTRING(@list,@c_pos+1,@n_pos - @c_pos-1));
set @c_pos = @n_pos;
set @l_pos = @n_pos;
set @n_pos = CHARINDEX(',',@list,@c_pos+1);
end;
insert into @tab (item) values (SUBSTRING(@list,@l_pos+1,4000));
return;
end;
Instead of using IN, you make an inner join with the table returned by the function:
select * from table_1 where id in ('a','b','c')
becomes
select * from table_1 a inner join [dbo].[list_to_table] ('a,b,c') b on (a.id = b.item)
In an unindexed 1M record table the second version took about half the time...
I know this is old now, but with TSQL 2016+ you can use STRING_SPLIT:
DECLARE @InList varchar(255) = 'This;Is;My;List';
WITH InList (Item) AS (
SELECT value FROM STRING_SPLIT(@InList, ';')
)
SELECT *
FROM [Table]
WHERE [Item] IN (SELECT Item FROM InList)
Starting with SQL Server 2016 you can use STRING_SPLIT and do this:
declare @myList nvarchar(MAX)
set @myList = '1,2,3,4'
select * from myTable where myColumn in (select value from STRING_SPLIT(@myList,','))
DECLARE @myList TABLE (Id BIGINT); INSERT INTO @myList(Id) VALUES (1),(2),(3),(4);
select * from myTable where myColumn in (select Id from @myList)
Please note that for long lists or production systems it's not recommended to use this approach, as it may be much slower than a simple IN operator like someColumnName in (1,2,3,4) (tested using an 8000+ item list)
A slight improvement on @LukeH's answer: there is no need to repeat the "INSERT INTO";
and on @realPT's answer: no need to have the SELECT:
DECLARE @MyList TABLE (Value INT)
INSERT INTO @MyList VALUES (1),(2),(3),(4)
SELECT * FROM MyTable
WHERE MyColumn IN (SELECT Value FROM @MyList)
No, there is no such type. But there are some choices:
Dynamically generated queries (sp_executesql)
Temporary tables
Table-type variables (closest thing that there is to a list)
Create an XML string and then convert it to a table with the XML functions (really awkward and roundabout, unless you have an XML to start with)
None of these are really elegant, but that's the best there is.
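For completeness, the XML route mentioned above might look something like this sketch (table and column names are placeholders):
DECLARE @xml xml = '<i>1</i><i>2</i><i>3</i><i>4</i>';
SELECT *
FROM myTable
WHERE myColumn IN (SELECT x.i.value('.', 'int') FROM @xml.nodes('/i') AS x(i));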
If you want to do this without using a second table, you can do a LIKE comparison with a CAST:
DECLARE @myList varchar(15)
SET @myList = ',1,2,3,4,'
SELECT *
FROM myTable
WHERE @myList LIKE '%,' + CAST(myColumn AS varchar(15)) + ',%'
If the field you're comparing is already a string then you won't need to CAST.
Surrounding both the column match and each unique value in commas will ensure an exact match. Otherwise, a value of 1 would be found in a list containing ',4,2,15,'
As no one mentioned it before, starting from SQL Server 2016 you can also use JSON arrays and OPENJSON (Transact-SQL):
declare @filter nvarchar(max) = '[1,2]'
select *
from dbo.Test as t
where
exists (select * from openjson(@filter) as tt where tt.[value] = t.id)
You can test it in
sql fiddle demo
You can also cover more complicated cases with json easier - see Search list of values and range in SQL using WHERE IN clause with SQL variable?
This one uses PATINDEX to match ids from a table to a non-digit delimited integer list.
-- Given a string @myList containing character delimited integers
-- (supports any non digit delimiter)
DECLARE @myList VARCHAR(MAX) = '1,2,3,4,42'
SELECT * FROM [MyTable]
WHERE
-- When the Id is at the leftmost position
-- (nothing to its left and anything to its right after a non digit char)
PATINDEX(CAST([Id] AS VARCHAR)+'[^0-9]%', @myList)>0
OR
-- When the Id is at the rightmost position
-- (anything to its left before a non digit char and nothing to its right)
PATINDEX('%[^0-9]'+CAST([Id] AS VARCHAR), @myList)>0
OR
-- When the Id is between two delimiters
-- (anything to its left and right after two non digit chars)
PATINDEX('%[^0-9]'+CAST([Id] AS VARCHAR)+'[^0-9]%', @myList)>0
OR
-- When the Id is equal to the list
-- (if there is only one Id in the list)
CAST([Id] AS VARCHAR)=@myList
Notes:
when casting as varchar and not specifying byte size in parentheses the default length is 30
% (wildcard) will match any string of zero or more characters
^ (wildcard) not to match
[^0-9] will match any non digit character
PATINDEX is a T-SQL function that returns the position of a pattern in a string
DECLARE @StatusList varchar(MAX);
SET @StatusList='1,2,3,4';
DECLARE @Status SYS_INTEGERS;
INSERT INTO @Status
SELECT Value
FROM dbo.SYS_SPLITTOINTEGERS_FN(@StatusList, ',');
SELECT Value From @Status;
Most of these seem to focus on separating-out each INT into its own parenthetical, for example:
(1),(2),(3), and so on...
That isn't always convenient. Especially since, many times, you already start with a comma-separated list, for example:
(1,2,3,...) and so on...
In these situations, you may care to do something more like this:
DECLARE @ListOfIds TABLE (DocumentId INT);
INSERT INTO @ListOfIds
SELECT Id FROM [dbo].[Document] WHERE Id IN (206,235,255,257,267,365)
SELECT * FROM @ListOfIds
I like this method because, more often than not, I am trying to work with IDs that should already exist in a table.
My experience with a commonly proposed technique offered here,
SELECT * FROM Mytable WHERE myColumn IN (select id from @mylist)
is that it induces a major performance degradation if the primary data table (Mytable) includes a very large number of records. Presumably, that is because the IN operator’s list-subquery is re-executed for every record in the data table.
I’m not seeing any offered solution here that provides the same functional result by avoiding the IN operator entirely. The general problem isn’t a need for a parameterized IN operation, it’s a need for a parameterized inclusion constraint. My favored technique for that is to implement it using an (inner) join:
DECLARE @myList varchar(50) /* BEWARE: if too small, no error, just missing data! */
SET @myList = '1,2,3,4'
SELECT *
FROM myTable
JOIN STRING_SPLIT(@myList,',') MyList_Tbl
ON myColumn = MyList_Tbl.Value
It is so much faster because the generation of the constraint-list table (MyList_Tbl) is executed only once for the entire query execution. Typically, for large data sets, this technique executes at least five times faster than the functionally equivalent parameterized IN operator solutions, like those offered here.
I think you'll have to declare a string and then execute that SQL string.
Have a look at sp_executeSQL
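A minimal sketch of that route with sp_executesql (assuming the list is trusted, e.g. numeric IDs only, since concatenating user input invites SQL injection):
DECLARE @list varchar(100) = '1,2,3,4';
DECLARE @sql nvarchar(max) = N'SELECT * FROM myTable WHERE myColumn IN (' + @list + N')';
EXEC sp_executesql @sql;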