Use function argument to declare which column to use from table - sql

I want to have an argument in a function that determines which column from a table the function uses. The table looks like the one below, but could in the future have more than two sets/columns of limits.
Is this possible? Could I use the varchar argument like LimitsTable.@MyVarcharArg? (I know that syntax wouldn't work.)

Here is something like what I was alluding to.
As I mentioned in the comments, this can get expensive.
Example
Declare @YourTable Table ([ID] int,[Col1] decimal(10,2),[Col2] decimal(10,2),[Col3] decimal(10,2))
Insert Into @YourTable Values
(1,1.0,null,null)
,(2,0.05,0.10,.15)
,(3,0.05,0.10,.15)
Select A.ID
,B.*
From @YourTable A
Cross Apply [dbo].[tvf-JSON-GetCol](concat('Col',ID),(Select A.* For JSON Path,Without_Array_Wrapper ) ) B
** Not Clear on the Column Selection Criteria, so I simply concat 'Col' and ID
Results
ID Value
1 1.00
2 0.10
3 0.15
The TVF, if desired:
CREATE FUNCTION [dbo].[tvf-JSON-GetCol](@col varchar(150),@json varchar(max))
Returns Table
As Return
Select Value = JSON_VALUE(@json,'$.'+@col)
EDIT - Just for Fun, it could be as simple as:
Select A.ID
,Value = JSON_VALUE((Select A.* For JSON Path,Without_Array_Wrapper ),'$.'+concat('Col',ID))
From #YourTable A
Note: SQL Server 2016 requires a literal key name in the JSON path, while 2017+ allows it to be dynamic.
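If the column choice maps to a small, known set of columns (as in the sample above), a plain CHOOSE (2012+) or CASE expression avoids JSON entirely. A minimal sketch against the same @YourTable, assuming the same ID-to-column mapping used above:
Select A.ID
      ,Value = choose(A.ID, A.Col1, A.Col2, A.Col3) -- picks the Nth column based on ID
 From  @YourTable A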


Filter records using CAST() and sub query in SQL Server

I am stuck with a scenario where I need to cast a particular column as BIGINT and check whether or not the number is greater than X, but the column will not always have numeric values.
I tried the following approach, but it is throwing an error.
DECLARE @RowType TABLE
(
RowTypeID INT IDENTITY,
RowType VARCHAR(10)
);
INSERT @RowType VALUES('Numeric');
INSERT @RowType VALUES('NonNumeric');
DECLARE @TempTable TABLE
(
ID INT IDENTITY,
RowTypeID INT,
Value VARCHAR(10)
);
INSERT @TempTable VALUES(1, '10');
INSERT @TempTable VALUES(1, '20');
INSERT @TempTable VALUES(2, '$10'); -- Non-numeric value
-- This select throws an error; however, I feel the behaviour
-- is odd, since the inner select should return only records of type 'Numeric'
SELECT *
FROM (SELECT T.*
FROM @TempTable T
JOIN @RowType RT
ON RT.RowTypeID = T.RowTypeID
WHERE RT.RowType = 'Numeric') A -- With this sub query I expect only records of type 'Numeric' to be returned to the outer select
WHERE CAST(A.Value AS BIGINT) > 10
-- Alternate approach which I cannot use since
-- there are already a lot of temp tables involved in the procedure
--SELECT T.*
--INTO #Temp
--FROM @TempTable T
-- JOIN @RowType RT
-- ON RT.RowTypeID = T.RowTypeID
--WHERE RT.RowType = 'Numeric';
--SELECT *
--FROM #Temp
--WHERE CAST(Value AS BIGINT) > 10
--DROP TABLE #Temp;
Is this the default behaviour? Or am I missing something over here?
Since you are on 2012+, I would suggest try_convert() or even try_cast():
try_convert(BIGINT, A.Value) > 10
If the conversion fails, a NULL value is returned.
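Applied to the query in the question, a sketch would look like this (rows where the conversion fails become NULL and simply fail the > 10 comparison):
SELECT *
FROM (SELECT T.*
      FROM @TempTable T
      JOIN @RowType RT
        ON RT.RowTypeID = T.RowTypeID
      WHERE RT.RowType = 'Numeric') A
WHERE TRY_CONVERT(BIGINT, A.Value) > 10 -- '$10' converts to NULL instead of raising an error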
Is this the default behaviour?
SQL Server is free to rearrange expressions if the final result stays the same, so you should not rely on the evaluation order you are expecting.
In the execution plan for the query (screenshot not reproduced here), the cast of a.Value is applied before the join, so the CAST fails.
This has been described by Itzik Ben-Gan here: Logical Query Processing Part 6: The WHERE Clause.
Below is some code which has the same issue as yours and fails with the same error:
WITH C AS
(
SELECT name, datatype, val
FROM dbo.Properties
WHERE datatype IN ('TINYINT', 'SMALLINT', 'INT', 'BIGINT')
)
SELECT *
FROM C
WHERE CAST(val AS BIGINT) > 10;
Below is an explanation from Itzik:
From a logical query processing perspective, such code should not fail. However, for performance reasons, the SQL Server parser unnests, or inlines, the inner query’s code in the outer query, resulting in code that is equivalent to the original query without the table expression. Consequently, the code fails with the same error.
You could use TRY_CAST/TRY_CONVERT to overcome the conversion errors.
Below is a good series for more internals:
Query Optimizer Deep Dive - Part 1

Passing space delimited string to stored procedure to search entire database

How can I pass a space-delimited string to a stored procedure and filter the result?
I'm trying to do this:
parameter   value
__________________
@query      key1 key2 key3
Then in the stored procedure, I want to first
find all the results with key1,
then filter step 1 with key2,
then filter step 2 with key3.
Another example:
col1 | col2 | col3
------------+-----------------------+-----------------------------------
hello xyz | abc is my last name | and I'm a developer
hello xyz | null | and I'm a developer
If I search for any of the following, it should return:
"xyz developer" returns 2 rows
"xyz abc" returns 1 row
"abc developer"returns 1 row
"hello" returns 2 rows
"hello developer" returns 2 rows
"xyz" returns 2 rows
I'm using SQL Server 2016. I tried to use STRING_SPLIT to split the query string, but I don't know how to pass this to the stored procedure.
Thanks in advance
Full Text Index is the way to go, but this will return your results.
One caveat (that I can think of): if your search expression/pattern contains a column name, that will generate a false positive.
Declare @YourTable table (col1 varchar(50),col2 varchar(50),col3 varchar(50))
Insert Into @YourTable values
('hello xyz','abc is my last name','and I''m a developer'),
('hello xyz', null ,'and I''m a developer')
Declare @Search varchar(max) = 'xyz abc'
Select A.*
From @YourTable A
Cross Apply (Select FullString=(Select A.* FOR XML Raw)) B
Where FullString like '%'+replace(@Search,' ','%')+'%'
Returns
col1 col2 col3
hello xyz abc is my last name and I'm a developer
EDIT - Multi-Word / Any-Order Search
Try this; it's not fully tested, and I can't imagine it being very efficient, especially with larger tables and numerous keywords.
Declare @YourTable table (col1 varchar(50),col2 varchar(50),col3 varchar(50))
Insert Into @YourTable values
('hello xyz','abc is my last name','and I''m a developer'),
('hello xyz', null ,'and I''m a developer')
Declare @Search varchar(max) = 'developer xyz'
Select *
From (
Select Distinct A.*
,Hits = sum(sign(charindex(C.Value,B.FullString))) over (partition by B.FullString)
,Req = C.Req
From @YourTable A
Cross Apply (Select FullString=(Select A.* FOR XML Raw)) B
Join (Select *,Req=sum(1) over () From String_Split(@Search,' ') ) C on charindex(C.Value,B.FullString)>0
) A
Where Hits=Req
http://dbfiddle.uk/?rdbms=sqlserver_2016&fiddle=c77123a71c810716b36d73a92ac714eb
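As for how to pass the string to a stored procedure (the part the question was stuck on), it is just a varchar parameter. A minimal sketch reusing the any-order search above; the procedure name and the permanent table dbo.YourTable are made up:
CREATE PROCEDURE dbo.SearchMyTable -- hypothetical name
    @Search varchar(max)
AS
BEGIN
    Select *
     From (
            Select Distinct A.*
                  ,Hits = sum(sign(charindex(C.Value,B.FullString))) over (partition by B.FullString)
                  ,Req  = C.Req
             From  dbo.YourTable A -- hypothetical permanent table with col1..col3
             Cross Apply (Select FullString=(Select A.* FOR XML Raw)) B
             Join  (Select *,Req=sum(1) over () From String_Split(@Search,' ') ) C
               on  charindex(C.Value,B.FullString)>0
          ) A
     Where Hits = Req
END
GO
EXEC dbo.SearchMyTable @Search = 'xyz developer'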

How to tell T-SQL query not to escape the given string/query result

This is a sample query
select *
from tablename
where tableid in (<sub query>);
The <sub query> here returns null or a string of pattern 'id1','id2','id3'
My <sub query> is something like:
select xml_data.value('(/Node/SubNode)[1]', 'varchar(max)')
from tablename
where tableid = '9944169f-95a6-4570-89d7-b57a3fe1b693'
The problem :
My sub query returns proper data ('id1','id2','id3'), but the parent query considers the complete result as one single string and hence always returns 0 rows.
How can I tell SQL Server not to escape single quotes present in the result of my sub-query?
It's not clear if your subquery applies to the same table as the first query, but this should show you the general direction:
declare @x xml = '<Node><SubNode>t1</SubNode><SubNode>t2</SubNode></Node>'
declare @t table (v varchar(20))
insert @t values ('t1'),('t3')
select table1.*
from
@t table1
inner join
@x.nodes('/Node/SubNode') t(x)
on table1.v = t.x.value('.','varchar(100)')
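If xml_data is an xml-typed column on the same table (an assumption), the same nodes()/value() shredding can replace the string-building subquery entirely, so the outer query compares against rows rather than one quoted string:
select *
from tablename
where tableid in (
    select s.n.value('.', 'varchar(100)')
    from tablename t
    cross apply t.xml_data.nodes('/Node/SubNode') s(n)
    where t.tableid = '9944169f-95a6-4570-89d7-b57a3fe1b693'
)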

Define variable to use with IN operator (T-SQL)

I have a Transact-SQL query that uses the IN operator. Something like this:
select * from myTable where myColumn in (1,2,3,4)
Is there a way to define a variable to hold the entire list "(1,2,3,4)"? How should I define it?
declare @myList {data type}
set @myList = (1,2,3,4)
select * from myTable where myColumn in @myList
DECLARE @MyList TABLE (Value INT)
INSERT INTO @MyList VALUES (1)
INSERT INTO @MyList VALUES (2)
INSERT INTO @MyList VALUES (3)
INSERT INTO @MyList VALUES (4)
SELECT *
FROM MyTable
WHERE MyColumn IN (SELECT Value FROM @MyList)
DECLARE @mylist TABLE (Id int)
INSERT INTO @mylist
SELECT id FROM (VALUES (1),(2),(3),(4),(5)) AS tbl(id)
SELECT * FROM Mytable WHERE theColumn IN (select id from @mylist)
There are a few ways to tackle dynamic CSV lists for T-SQL queries:
1) Using an inner select
SELECT * FROM myTable WHERE myColumn in (SELECT id FROM myIdTable WHERE id > 10)
2) Using dynamically concatenated T-SQL
DECLARE @sql nvarchar(max)
declare @list varchar(256)
select @list = '1,2,3'
SELECT @sql = 'SELECT * FROM myTable WHERE myColumn in (' + @list + ')'
exec sp_executesql @sql
3) A possible third option is table variables. If you have SQL Server 2005 you can use a table variable. If you're on SQL Server 2008 you can even pass whole table variables in as a parameter to stored procedures and use them in a join or as a subselect in the IN clause.
DECLARE @list TABLE (Id INT)
INSERT INTO @list(Id)
SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
SELECT
*
FROM
myTable
JOIN @list l ON myTable.myColumn = l.Id
SELECT
*
FROM
myTable
WHERE
myColumn IN (SELECT Id FROM @list)
Use a function like this:
CREATE function [dbo].[list_to_table] (@list varchar(4000))
returns @tab table (item varchar(100))
begin
if CHARINDEX(',',@list) = 0 or CHARINDEX(',',@list) is null
begin
insert into @tab (item) values (@list);
return;
end
declare @c_pos int;
declare @n_pos int;
declare @l_pos int;
set @c_pos = 0;
set @n_pos = CHARINDEX(',',@list,@c_pos);
while @n_pos > 0
begin
insert into @tab (item) values (SUBSTRING(@list,@c_pos+1,@n_pos - @c_pos-1));
set @c_pos = @n_pos;
set @l_pos = @n_pos;
set @n_pos = CHARINDEX(',',@list,@c_pos+1);
end;
insert into @tab (item) values (SUBSTRING(@list,@l_pos+1,4000));
return;
end;
Instead of using IN, you make an inner join with the table returned by the function:
select * from table_1 where id in ('a','b','c')
becomes
select * from table_1 a inner join [dbo].[list_to_table] ('a,b,c') b on (a.id = b.item)
In an unindexed 1M record table the second version took about half the time...
I know this is old now, but for T-SQL 2016+ you can use STRING_SPLIT:
DECLARE @InList varchar(255) = 'This;Is;My;List';
WITH InList (Item) AS (
SELECT value FROM STRING_SPLIT(@InList, ';')
)
SELECT *
FROM [Table]
WHERE [Item] IN (SELECT Item FROM InList)
Starting with SQL Server 2016 you can use STRING_SPLIT and do this:
declare @myList nvarchar(MAX)
set @myList = '1,2,3,4'
select * from myTable where myColumn in (select value from STRING_SPLIT(@myList,','))
DECLARE @myList TABLE (Id BIGINT); INSERT INTO @myList(Id) VALUES (1),(2),(3),(4);
select * from myTable where myColumn in (select Id from @myList)
Please note that for long lists or production systems it's not recommended to use this approach, as it may be much slower than a simple IN operator like someColumnName in (1,2,3,4) (tested using an 8000+ item list).
A slight improvement on @LukeH's answer (no need to repeat the "INSERT INTO") and @realPT's answer (no need for the SELECT):
DECLARE @MyList TABLE (Value INT)
INSERT INTO @MyList VALUES (1),(2),(3),(4)
SELECT * FROM MyTable
WHERE MyColumn IN (SELECT Value FROM @MyList)
No, there is no such type. But there are some choices:
Dynamically generated queries (sp_executesql)
Temporary tables
Table-type variables (the closest thing there is to a list)
Create an XML string and then convert it to a table with the XML functions (really awkward and roundabout, unless you have an XML to start with)
None of these are really elegant, but that's the best there is.
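For completeness, the XML route mentioned above could look roughly like this (just a sketch; the element name i is arbitrary):
DECLARE @xmlList xml = '<i>1</i><i>2</i><i>3</i><i>4</i>'
SELECT *
FROM myTable
WHERE myColumn IN (SELECT i.v.value('.', 'int')
                   FROM @xmlList.nodes('/i') i(v))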
If you want to do this without using a second table, you can do a LIKE comparison with a CAST:
DECLARE @myList varchar(15)
SET @myList = ',1,2,3,4,'
SELECT *
FROM myTable
WHERE @myList LIKE '%,' + CAST(myColumn AS varchar(15)) + ',%'
If the field you're comparing is already a string then you won't need to CAST.
Surrounding both the column match and each unique value in commas will ensure an exact match. Otherwise, a value of 1 would be found in a list containing ',4,2,15,'
As no one has mentioned it before, starting from SQL Server 2016 you can also use JSON arrays and OPENJSON (Transact-SQL):
declare @filter nvarchar(max) = '[1,2]'
select *
from dbo.Test as t
where
exists (select * from openjson(@filter) as tt where tt.[value] = t.id)
You can test it in the sql fiddle demo.
You can also cover more complicated cases with json easier - see Search list of values and range in SQL using WHERE IN clause with SQL variable?
This one uses PATINDEX to match ids from a table to a non-digit delimited integer list.
-- Given a string #myList containing character delimited integers
-- (supports any non digit delimiter)
DECLARE @myList VARCHAR(MAX) = '1,2,3,4,42'
SELECT * FROM [MyTable]
WHERE
-- When the Id is at the leftmost position
-- (nothing to its left and anything to its right after a non digit char)
PATINDEX(CAST([Id] AS VARCHAR)+'[^0-9]%', @myList)>0
OR
-- When the Id is at the rightmost position
-- (anything to its left before a non digit char and nothing to its right)
PATINDEX('%[^0-9]'+CAST([Id] AS VARCHAR), @myList)>0
OR
-- When the Id is between two delimiters
-- (anything to its left and right after two non digit chars)
PATINDEX('%[^0-9]'+CAST([Id] AS VARCHAR)+'[^0-9]%', @myList)>0
OR
-- When the Id is equal to the list
-- (if there is only one Id in the list)
CAST([Id] AS VARCHAR)=@myList
Notes:
when casting as varchar and not specifying byte size in parentheses the default length is 30
% (wildcard) will match any string of zero or more characters
^ (wildcard) not to match
[^0-9] will match any non digit character
PATINDEX is a T-SQL function that returns the starting position of a pattern in a string
DECLARE @StatusList varchar(MAX);
SET @StatusList='1,2,3,4';
DECLARE @Status SYS_INTEGERS;
INSERT INTO @Status
SELECT Value
FROM dbo.SYS_SPLITTOINTEGERS_FN(@StatusList, ',');
SELECT Value From @Status;
Most of these seem to focus on separating-out each INT into its own parenthetical, for example:
(1),(2),(3), and so on...
That isn't always convenient. Especially since, many times, you already start with a comma-separated list, for example:
(1,2,3,...) and so on...
In these situations, you may care to do something more like this:
DECLARE @ListOfIds TABLE (DocumentId INT);
INSERT INTO @ListOfIds
SELECT Id FROM [dbo].[Document] WHERE Id IN (206,235,255,257,267,365)
SELECT * FROM @ListOfIds
I like this method because, more often than not, I am trying to work with IDs that should already exist in a table.
My experience with a commonly proposed technique offered here,
SELECT * FROM Mytable WHERE myColumn IN (select id from @mylist)
is that it induces a major performance degradation if the primary data table (Mytable) includes a very large number of records. Presumably, that is because the IN operator’s list-subquery is re-executed for every record in the data table.
I’m not seeing any offered solution here that provides the same functional result by avoiding the IN operator entirely. The general problem isn’t a need for a parameterized IN operation, it’s a need for a parameterized inclusion constraint. My favored technique for that is to implement it using an (inner) join:
DECLARE @myList varchar(50) /* BEWARE: if too small, no error, just missing data! */
SET @myList = '1,2,3,4'
SELECT *
FROM myTable
JOIN STRING_SPLIT(@myList,',') MyList_Tbl
ON myColumn = MyList_Tbl.Value
It is so much faster because the generation of the constraint-list table (MyList_Tbl) is executed only once for the entire query execution. Typically, for large data sets, this technique executes at least five times faster than the functionally equivalent parameterized IN operator solutions, like those offered here.
I think you'll have to declare a string and then execute that SQL string.
Have a look at sp_executesql.

sql like operator to get the numbers only

This is, I think, a simple problem, but I have not found a solution yet. I would like to get only the valid numbers from a column, as explained below.
Let's say we have a varchar column with the following values:
ABC
Italy
Apple
234.62
2:234:43:22
France
6435.23
2
Lions
Here the problem is to select only the numbers.
select * from tbl where answer like '%[0-9]%' would have done it, but it returns
234.62
2:234:43:22
6435.23
2
Here, obviously, 2:234:43:22 is not desired, as it is not a valid number.
The desired result is
234.62
6435.23
2
Is there a way to do this?
You can use the following to only include valid characters:
SQL
SELECT * FROM @Table
WHERE Col NOT LIKE '%[^0-9.]%'
Results
Col
---------
234.62
6435.23
2
You can try this:
ISNUMERIC (Transact-SQL)
ISNUMERIC returns 1 when the input expression evaluates to a valid numeric data type; otherwise it returns 0.
DECLARE @Table TABLE(
Col VARCHAR(50)
)
INSERT INTO @Table SELECT 'ABC'
INSERT INTO @Table SELECT 'Italy'
INSERT INTO @Table SELECT 'Apple'
INSERT INTO @Table SELECT '234.62'
INSERT INTO @Table SELECT '2:234:43:22'
INSERT INTO @Table SELECT 'France'
INSERT INTO @Table SELECT '6435.23'
INSERT INTO @Table SELECT '2'
INSERT INTO @Table SELECT 'Lions'
SELECT *
FROM @Table
WHERE ISNUMERIC(Col) = 1
Try something like this - it works for the cases you have mentioned.
select * from tbl
where answer like '%[0-9]%'
and answer not like '%[:]%'
and answer not like '%[A-Z]%'
With SQL 2012 and later, you could use TRY_CAST/TRY_CONVERT to try converting to a numeric type, e.g. TRY_CAST(answer AS float) IS NOT NULL -- note, though, that this will match scientific notation too (1E+34). (If you use decimal, scientific notation won't match.)
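Applied to the question's table, a minimal sketch of that approach:
select *
from tbl
where try_cast(answer as float) is not null -- '2:234:43:22' fails the cast and is excluded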
What might get you where you want in plain SQL-92:
select * from tbl where lower(answer) = upper(answer)
or, if you also want to be robust for leading/trailing spaces:
select * from tbl where lower(answer) = trim(upper(answer))