Use column if it exists, another if it doesn't, in SQL Server - sql

I have a number of SQL Server databases (different versions from 2012 to 2019). The schema in each one is very similar but not exactly the same. For example, there's a table ORDERS, which has about 50 columns - and one column is named differently in different databases:
in DB1: select p_user from orders
in DB2: select userpk from orders
Note that I showed two databases above, but there are actually more than 20 - some are DB1 type, the others are DB2 type.
I can't do much about these differences - they are historic - and changing the schema to match is not an option.
I want to be able to run the same SQL statement against all of these databases at once. I'd like to write the query in such a way that it would use one column if it exists and another if it doesn't. For example:
select
    case
        when COL_LENGTH('orders', 'p_user') IS NOT NULL
            then orders.p_user
        else orders.userpk
    end
from orders
This unfortunately doesn't work, as SQL Server seems to try to evaluate both result expressions regardless of whether the condition is true or false. The same thing happens if I use the IIF function.
If I simply run
select
    case
        when COL_LENGTH('orders', 'p_user') IS NOT NULL
            then 'orders.p_user'
        else 'orders.userpk'
    end
then I do get the correct string, which means my condition is correct.
How can I formulate the SQL statement to use one or the other column based on whether the first one exists?

If you can't change anything then your best (and maybe only) option is to use dynamic SQL. A query will only compile if all parts can be resolved at compile time (before anything runs) - which is why e.g. this will not compile:
IF COL_LENGTH('orders', 'p_user') IS NOT NULL
    select p_user from orders
ELSE
    select userpk as p_user from orders
But this will work:
DECLARE @SQL NVARCHAR(MAX)
IF COL_LENGTH('orders', 'p_user') IS NOT NULL
    SET @SQL = 'select p_user from orders'
ELSE
    SET @SQL = 'select userpk as p_user from orders'
EXEC (@SQL)
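Since the statement ultimately has to run against 20+ databases, the same dynamic SQL can be driven from sys.databases. Below is only a rough sketch: the LIKE filter on the database names is a placeholder, and it assumes every relevant database has a dbo.orders table with one of the two column names. The database-qualified call to sp_executesql makes COL_LENGTH run in the right database context:
DECLARE @db sysname, @proc nvarchar(400), @sql nvarchar(max);

SET @sql = N'
DECLARE @q nvarchar(max) =
    N''select '' + CASE WHEN COL_LENGTH(''orders'', ''p_user'') IS NOT NULL
                        THEN ''p_user'' ELSE ''userpk'' END
    + N'' as p_user from orders'';
EXEC sp_executesql @q;';

DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases
    WHERE name LIKE N'DB%';               -- placeholder: pick out your 20+ databases here

OPEN dbs;
FETCH NEXT FROM dbs INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- [db].sys.sp_executesql runs the batch in that database's context
    SET @proc = QUOTENAME(@db) + N'.sys.sp_executesql';
    EXEC @proc @sql;
    FETCH NEXT FROM dbs INTO @db;
END
CLOSE dbs;
DEALLOCATE dbs;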

Fix your tables by adding a computed column:
alter table db1..orders
    add userpk as (p_user);
(Or choose the other name.)
Then your queries will just work, without adding unnecessary complication to every query.
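A guarded version of that ALTER can be run against every database and simply adds whichever name is missing as a computed alias of the one that exists (a sketch using the question's p_user/userpk pair and dbo.orders; the EXEC wrapper keeps the batch compiling even where a column is absent):
IF COL_LENGTH('dbo.orders', 'userpk') IS NULL AND COL_LENGTH('dbo.orders', 'p_user') IS NOT NULL
    EXEC ('alter table dbo.orders add userpk as (p_user)');

IF COL_LENGTH('dbo.orders', 'p_user') IS NULL AND COL_LENGTH('dbo.orders', 'userpk') IS NOT NULL
    EXEC ('alter table dbo.orders add p_user as (userpk)');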

Another option, if the schema really cannot be touched, is to let XQuery pick whichever of the candidate columns is present at runtime. A demo with three tables that share colA and colB but each have a differently named third column:
create table orders1(colA int, colB int, colABC int);
insert into orders1 values(1, 2, 3);
go
create table orders2(colA int, colB int, colKLM int);
insert into orders2 values(5, 6, 7);
go
create table orders3(colA int, colB int, colXYZ int);
insert into orders3 values(10, 11, 12);
go
select colA, colB, vcolname as [ABC_KLM_XYZ]
from
(
select *,
(select o.* for xml path(''), elements, type).query('
/*[local-name() = ("colABC", "colKLM", "colXYZ")][1]
').value('.', 'int') as vcolname
from orders1 as o
) as src;
select colA, colB, vcolname as [ABC_KLM_XYZ]
from
(
select *,
(select o.* for xml path(''), elements, type).query('
/*[local-name() = ("colABC", "colKLM", "colXYZ")][1]
').value('.', 'int') as vcolname
from orders2 as o
) as src;
select colA, colB, vcolname as [ABC_KLM_XYZ]
from
(
select *,
(select o.* for xml path(''), elements, type).query('
/*[local-name() = ("colABC", "colKLM", "colXYZ")][1]
').value('.', 'int') as vcolname
from orders3 as o
) as src;
go
drop table orders1
drop table orders2
drop table orders3
go

I ended up using dynamic SQL like so:
declare @query nvarchar(1000)
set @query = concat(
    'select count(distinct ', (case when COL_LENGTH('orders', 'p_user') IS NOT NULL then 'orders.p_user' else 'orders.userpk' end), ')
from orders'
);
execute sp_executesql @query
This solved my immediate issue.
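For reference, if the count is needed in a variable rather than as a result set, sp_executesql can also hand it back through an output parameter; this is just a small variation on the same statement:
declare @query nvarchar(1000), @cnt int;
set @query = concat(
    'select @cnt = count(distinct ',
    (case when COL_LENGTH('orders', 'p_user') IS NOT NULL then 'orders.p_user' else 'orders.userpk' end),
    ') from orders');
execute sp_executesql @query, N'@cnt int OUTPUT', @cnt = @cnt OUTPUT;
select @cnt as distinct_users;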

Related

How to interrogate multiple tables with different structure?

I am using Sql-Server 2016 in a C# application.
Let's say I have two tables:
CREATE TABLE Table_A
(
UserID NVARCHAR2(15),
FullName NVARCHAR2(25),
Available NUMBER(1),
MachineID NVARCHAR2(20),
myDate date
);
and
CREATE TABLE Table_B
(
UserID NVARCHAR2(15),
FullName NVARCHAR2(25),
Team NVARCHAR2(15),
MachineID NVARCHAR2(20),
Stuff NUMBER(2)
);
I want to perform a global select that returns data from both tables, concatenated: when a column does not exist in one of the tables, that column should automatically be populated with NULL, and when a column exists in both tables the results must be merged into a single column.
The first solution that pops up is a UNION with NULL aliases for the missing columns, sure. The problem is that at runtime I will not know in advance which tables are being interrogated, so I cannot anticipate the column names. I need a more general solution.
The expected result from the two tables must look like this:
user_Table_A; fullName_Table_A; 1; machineID_Table_A; 12-JUN-18; NULL; 10;
user_Table_B; fullName_Table_B; NULL; machineID_Table_B; NULL; team_Table_B; 20;
The data for the two tables is inserted with the following commands:
INSERT INTO Table_A VALUES ('user_Table_A', 'fullName_Table_A', 1, 'machineID_Table_A', TO_DATE('12-06-2018', 'DD-MM-YYYY'));
INSERT INTO Table_B VALUES ('user_Table_B', 'fullName_Table_B', 'team_Table_B', 'machineID_Table_B', 20);
You can do something like this. I haven't had time to completely tweak it, so the column order may be slightly off, but perhaps it can get you started.
You also write that you use Oracle - I'm not sure which you wanted, but this is a pure SQL Server version.
SQL:
IF OBJECT_ID('tempdb..#temp') IS NOT NULL
/*Then it exists*/
DROP TABLE #temp;
GO
DECLARE @SQLList nvarchar(max)
DECLARE @SQLList2 nvarchar(max)
DECLARE @SQL nvarchar(max)
with table_a as (
select column_name as Table_aColumnName,ORDINAL_POSITION from INFORMATION_SCHEMA.COLUMNS
where TABLE_NAME = 'table_a'
)
,
table_b as (
select column_name as Table_bColumnName,ORDINAL_POSITION from INFORMATION_SCHEMA.COLUMNS
where TABLE_NAME = 'table_b'
)
,preresult as (
select case when Table_aColumnName is null then 'NULL as ' + Table_bColumnName else Table_aColumnName end as Table_a_ColumnName,
       case when Table_bColumnName is null then 'NULL as ' + Table_aColumnName else Table_bColumnName end as Table_b_ColumnName,
       a.ORDINAL_POSITION,
       b.ORDINAL_POSITION as Table_b_Ordinal
from table_a a
full join Table_B b on a.Table_aColumnName = b.Table_bColumnName
)
select * into #temp from preresult
SET @SQLList = (
select distinct display = STUFF((select ','+table_a_columnName from #temp b order by table_b_ordinal FOR XML PATH('')),1,1,'') from #temp a
)
SET @SQLList2 = (
select distinct display = STUFF((select ','+table_b_columnName from #temp b order by Table_b_Ordinal FOR XML PATH('')),1,1,'') from #temp a
)
SET @SQL = 'select ' + @SQLList + ' from dbo.Table_a union all select ' + @SQLList2 + ' from dbo.table_b'
exec(@SQL)
Result: (screenshot of the combined result set omitted)

Filter records using CAST() and sub query in SQL Server

I am stuck with a scenario where I need to cast a particular column as BIGINT and check whether the number is not greater than X, but the column will not always have numeric values.
I tried the following approach but it is throwing an error.
DECLARE @RowType TABLE
(
RowTypeID INT IDENTITY,
RowType VARCHAR(10)
);
INSERT @RowType VALUES('Numeric');
INSERT @RowType VALUES('NonNumeric');
DECLARE @TempTable TABLE
(
ID INT IDENTITY,
RowTypeID INT,
Value VARCHAR(10)
);
INSERT @TempTable VALUES(1, '10');
INSERT @TempTable VALUES(1, '20');
INSERT @TempTable VALUES(2, '$10'); -- Non Numeric value
-- This select throws an error; however, I find the behaviour odd,
-- since the inner select should return only records of type 'Numeric'
SELECT *
FROM (SELECT T.*
FROM @TempTable T
JOIN @RowType RT
ON RT.RowTypeID = T.RowTypeID
WHERE RT.RowType = 'Numeric') A -- With this sub query I expect only records of type 'Numeric' to be returned to the outer select
WHERE CAST(A.Value AS BIGINT) > 10
-- Alternate approach which I can not use since
-- there are already lot of temp tables involved in procedure
--SELECT T.*
--INTO #Temp
--FROM @TempTable T
-- JOIN @RowType RT
-- ON RT.RowTypeID = T.RowTypeID
--WHERE RT.RowType = 'Numeric';
--SELECT *
--FROM #Temp
--WHERE CAST(Value AS BIGINT) > 10
--DROP TABLE #Temp;
Is this the default behaviour? Or am I missing something over here?
Since you are on 2012+, I would suggest try_convert() or even try_cast().
try_convert(BIGINT, A.Value) > 10
If the conversion fails, a NULL value is returned.
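Applied to the query from the question, the filter would look like this (non-numeric values such as '$10' simply become NULL instead of raising an error):
SELECT *
FROM (SELECT T.*
      FROM @TempTable T
      JOIN @RowType RT
        ON RT.RowTypeID = T.RowTypeID
      WHERE RT.RowType = 'Numeric') A
WHERE TRY_CONVERT(BIGINT, A.Value) > 10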
Is this the default behaviour?
SQL Server is free to rearrange expressions as long as the final result stays the same, so you should not rely on the evaluation order you are expecting.
In the execution plan for this query, the predicate on A.Value is applied before the join, so the cast fails; the plan screenshot (omitted here) confirmed the same.
This has been described by Itzik Ben-Gan here: Logical Query Processing Part 6: The WHERE Clause.
Below is some code which has the same issue as yours and fails with the same error:
WITH C AS
(
SELECT name, datatype, val
FROM dbo.Properties
WHERE datatype IN ('TINYINT', 'SMALLINT', 'INT', 'BIGINT')
)
SELECT *
FROM C
WHERE CAST(val AS BIGINT) > 10;
Below is the explanation from Itzik:
From a logical query processing perspective, such code should not fail. However, for performance reasons, the SQL Server parser unnests, or inlines, the inner query’s code in the outer query, resulting in code that is equivalent to the original query without the table expression. Consequently, the code fails with the same error.
You could use TRY_CAST / TRY_CONVERT to overcome the conversion errors.
Below is a good series for more internals:
Query Optimizer Deep Dive - Part 1

Local variable with multiple value list

I use an Excel connection to SQL Server to query data from SQL Server into Excel.
I have the WHERE clause below in the Excel connection a couple of times, and I need to replace the multi-value list in it from time to time. To simplify the replacement, I want to use a local parameter, @Trans. With the local parameter, I can change just that one value and all the SQL will use it.
WHERE Type in ('R','D','C')
For a single option, the code below works.
DECLARE @Trans CHAR(200)= 'R';
SELECT .....
WHERE Type in (@Trans)
For multiple options, neither of the variants below works.
DECLARE @Trans CHAR(200)= 'R,D,C';
SELECT .....
WHERE Type in (@Trans)
DECLARE @Trans CHAR(200)= '''R'''+','+'''D'''+','+'''C''';
SELECT .....
WHERE Type in (@Trans)
How do I declare @Trans for a multiple value list, for example ('R','D','C')? Thank you.
You can use dynamic SQL:
DECLARE @Trans VARCHAR(200)= '''R'',''D'',''C''';
DECLARE @sql VARCHAR(MAX) = '';
SET @sql = 'SELECT * FROM table WHERE Type in (' + @Trans + ');'
EXEC (@sql)
Take note of the quotes around the values in @Trans, since these are character values.
If you want to see the constructed SQL statement before running it, replace EXEC (@sql) with PRINT @sql.
Result of @sql:
SELECT * FROM table WHERE Type in ('R','D','C');
As you can see by now, SQL Server does NOT support macro substitution. This leaves a couple of options. One is to split the string.
If you are not on 2016, here is a quick inline approach which does not require a table-valued function.
Example
Declare @Trans varchar(max)='R,D,C' -- Notice no single quotes
Select ...
Where Type in (
Select RetVal = LTrim(RTrim(B.i.value('(./text())[1]', 'varchar(max)')))
From (Select x = Cast('<x>' + replace(@Trans,',','</x><x>')+'</x>' as xml).query('.')) as A
Cross Apply x.nodes('x') AS B(i)
)
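On SQL Server 2016 and later the built-in STRING_SPLIT function can replace the XML trick; a sketch in the same placeholder style as above:
Declare @Trans varchar(max)='R,D,C' -- again, no single quotes
Select ...
Where Type in (Select LTrim(RTrim(value)) From STRING_SPLIT(@Trans, ','))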
You can create a table named LocalParameter and keep the values there. Then you only need to update the LocalParameter table, without changing the queries.
CREATE TABLE LocalParameter (Trans VARCHAR(MAX))
INSERT INTO LocalParameter
VALUES
(
',R,'
)
With LIKE you can use it like this:
SELECT .....
WHERE (SELECT TOP 1 A.Trans FROM LocalParameter A) LIKE ',' + Type + ','
To change WHERE clause:
UPDATE LocalParameter
SET Trans = ',R,D,C,'
Queries:
SELECT .....
WHERE (SELECT TOP 1 A.Trans FROM LocalParameter A) LIKE ',' + Type + ','
A comma is added to the beginning and end of the stored value so that the LIKE comparison matches whole values only.
You can use a split method to split the csv values as shown below:
DECLARE @delimiter VARCHAR(10)=','
DECLARE @input_string VARCHAR(200)='R,D,C'
;WITH CTE AS
(
SELECT
SUBSTRING(@input_string,0,CHARINDEX(@delimiter,@input_string)) AS ExtractedString,
SUBSTRING(@input_string,CHARINDEX(@delimiter,@input_string) + 1,LEN(@input_string)) AS PartString
WHERE CHARINDEX(@delimiter,@input_string)>0
UNION ALL
SELECT
SUBSTRING(PartString,0,CHARINDEX(@delimiter,PartString)) AS ExtractedString,
SUBSTRING(PartString,CHARINDEX(@delimiter,PartString)+1,LEN(PartString)) AS PartString
FROM CTE WHERE CHARINDEX(@delimiter,PartString)>0
)
SELECT ExtractedString FROM CTE
UNION ALL
SELECT
CASE WHEN CHARINDEX(@delimiter,REVERSE(@input_string))>0
THEN REVERSE(SUBSTRING(REVERSE(@input_string),0,CHARINDEX(@delimiter,REVERSE(@input_string))))
ELSE @input_string END
OPTION (MAXRECURSION 0)
This split method doesn't have any loops, so it will be fast. Then you can integrate it with your query as shown below:
DECLARE @delimiter VARCHAR(10)=','
DECLARE @input_string VARCHAR(200)='R,D,C'
;WITH CTE AS
(
SELECT
SUBSTRING(@input_string,0,CHARINDEX(@delimiter,@input_string)) AS ExtractedString,
SUBSTRING(@input_string,CHARINDEX(@delimiter,@input_string) + 1,LEN(@input_string)) AS PartString
WHERE CHARINDEX(@delimiter,@input_string)>0
UNION ALL
SELECT
SUBSTRING(PartString,0,CHARINDEX(@delimiter,PartString)) AS ExtractedString,
SUBSTRING(PartString,CHARINDEX(@delimiter,PartString)+1,LEN(PartString)) AS PartString
FROM CTE WHERE CHARINDEX(@delimiter,PartString)>0
)
SELECT * FROM [YourTableName] WHERE Type IN
(SELECT ExtractedString FROM CTE
UNION ALL
SELECT
CASE WHEN CHARINDEX(@delimiter,REVERSE(@input_string))>0
THEN REVERSE(SUBSTRING(REVERSE(@input_string),0,CHARINDEX(@delimiter,REVERSE(@input_string))))
ELSE @input_string END
) OPTION (MAXRECURSION 0)
If possible add a new table and then join to it in all your queries:
CREATE TABLE SelectedType
(
[Type] CHAR(1) PRIMARY KEY
)
INSERT INTO SelectedType
VALUES ('R'), ('D'), ('C')
Then your queries become:
SELECT *
FROM MyTable MT
INNER JOIN SelectedType [ST]
ON ST.[Type] = MT.[Type]
If you need to add, update or delete types, just update the rows in the SelectedType table.
This has the benefit of using set-based queries, and it is easy to understand and to maintain the required types.
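For example, narrowing the filter to only 'R' and 'D' is just a data change, with no query edits:
DELETE FROM SelectedType;
INSERT INTO SelectedType ([Type]) VALUES ('R'), ('D');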

Dynamic SP returning values in reverse order

I am using MS SQL and created a dynamic stored procedure:
ALTER Procedure [dbo].[sp_MTracking]
(
@OList varchar(MAX)
)
As
BEGIN TRY
SET NOCOUNT ON
DECLARE @SQL varchar(600)
SET @SQL = 'select os.X,os.Y from Table1 as os join Table2 as s on os.sID=s.sID where s.SCode IN ('+ @OList +')'
exec (@SQL)
END TRY
BEGIN CATCH
Execute sp_DB_ErrorInfo
Select -1 Result
END CATCH
GO
It is working properly, but I am getting the x,y values in reverse order.
For example, if I pass 'scode1,scode2' as the parameter, I get the x,y values for scode1 in the second row and the x,y values for scode2 in the first row.
How can I fix this issue?
Thanks
This is a bit long for a comment.
SQL tables and result sets represent unordered sets. There is no ordering, unless you explicitly use an ORDER BY clause.
Your query does not have an ORDER BY. Hence, you have no reason to expect the results in any particular order. In addition, the ordering may be different on different runs of the query. If you want the results in a particular order, add ORDER BY.
Probably the easiest way is to use charindex():
order by charindex(',' + s.SCode + ',' , ',''' + @OList + ''',')
This is a bit more cumbersome in dynamic sql:
SET @SQL = '
select os.X,os.Y
from Table1 os join
Table2 s
on os.sID = s.sID
where s.SCode IN (' + @OList + ')
order by charindex('','' + s.SCode + '','', '',''' + @OList + ''', '')
';
Well, there are a couple of things here.
The first thing is what Gordon wrote - to ensure the order of the result set you must use the order by clause.
Second, as Devart demonstrated in his answer, you don't need dynamic SQL for this kind of procedure.
Third, if you want your results ordered by the order of the parameters in the list, you should use a slightly different approach than the one Devart wrote.
Therefore, here are my 2 cents:
If you can change the stored procedure to accept a table valued parameter instead of VARCHAR(MAX), that would be your best option IMHO.
If not, you must use a split function to create a table from that varchar and then use that table in your select.
Note that you will have to choose a split function that returns a table with two columns - one for the value and one for its position in the original string.
Whatever the case may be, the rest of the sql should be something like this:
SELECT os.X, os.Y
FROM Table1 os
INNER JOIN Table2 s ON os.[sID] = s.[sID]
INNER JOIN @TVP t ON s.SCode = t.Value
ORDER BY t.Sort
That's assuming @TVP to be a table containing a Value column of the same data type as SCode in Table2, and a Sort column (an int, naturally).
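A sketch of that table-valued-parameter variant (the type name, the varchar(50) length and the column names are illustrative, not taken from the original procedure):
CREATE TYPE dbo.SCodeList AS TABLE (Sort int NOT NULL, Value varchar(50) NOT NULL);
GO
ALTER PROCEDURE [dbo].[sp_MTracking]
(
    @OList dbo.SCodeList READONLY
)
AS
BEGIN
    SET NOCOUNT ON
    SELECT os.X, os.Y
    FROM Table1 os
    INNER JOIN Table2 s ON os.[sID] = s.[sID]
    INNER JOIN @OList t ON s.SCode = t.Value
    ORDER BY t.Sort
END
GO
The caller fills Sort with the position of each code, so the ORDER BY reproduces the parameter order.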
Without dynamic SQL:
ALTER PROCEDURE [dbo].[sp_MTracking]
(
@OList VARCHAR(MAX)
)
AS BEGIN
SET NOCOUNT ON
DECLARE @t TABLE (val VARCHAR(50) PRIMARY KEY WITH(IGNORE_DUP_KEY=ON))
INSERT INTO @t
SELECT item = t.c.value('.', 'VARCHAR(50)')
FROM (
SELECT txml = CAST('<r>' + REPLACE(@OList, ',', '</r><r>') + '</r>' AS XML)
) r
CROSS APPLY txml.nodes('/r') t(c)
SELECT os.X, os.Y
FROM Table1 os
JOIN Table2 s ON os.[sID] = s.[sID]
WHERE s.SCode IN (SELECT * FROM @t)
--OPTION(RECOMPILE)
END
GO

Select with IN and Like

I have a very interesting problem. I have an SSRS report with a multiple select drop down.
The drop down allows selecting more than one value, or all values.
All values is not the problem.
The problem is one option, or a combination of more than one option.
When I select 'AAA' in the drop down, it should return 3 values: 'AAA', 'AAA 1', 'AAA 2'.
Right now it is only returning 1 value.
QUESTION:
How can I make the IN statement work like a LIKE?
The Drop down select
SELECT '(All)' AS team, '(All)' AS Descr
UNION ALL
SELECT 'AAA' , 'AAA'
UNION ALL
SELECT 'BBB' , 'BBB'
Table Mytable
ColumnA Varchar(5)
Values for ColumnA
'AAA'
'AAA 1'
'AAA 2'
'BBB'
'BBB 1'
'BBB 2'
SELECT * FROM Mytable
WHERE ColumnA IN (SELECT * FROM SplitListString(@Team, ','))
Split function
CREATE FUNCTION [dbo].[SplitListString]
(@InputString NVARCHAR(max), @SplitChar CHAR(1))
RETURNS @ValuesList TABLE
(
param NVARCHAR(MAX)
)
AS
BEGIN
DECLARE @ListValue NVARCHAR(max)
DECLARE @TmpString NVARCHAR(max)
DECLARE @PosSeparator INT
DECLARE @EndValues BIT
SET @TmpString = LTRIM(RTRIM(@InputString));
SET @EndValues = 0
WHILE (@EndValues = 0) BEGIN
SET @PosSeparator = CHARINDEX(@SplitChar, @TmpString)
IF (@PosSeparator) > 1 BEGIN
SELECT @ListValue = LTRIM(RTRIM(SUBSTRING(@TmpString, 1, @PosSeparator - 1)))
END
ELSE BEGIN
SELECT @ListValue = LTRIM(RTRIM(@TmpString))
SET @EndValues = 1
END
IF LEN(@ListValue) > 0 BEGIN
INSERT INTO @ValuesList
SELECT @ListValue
END
SET @TmpString = LTRIM(RTRIM(SUBSTRING(@TmpString, @PosSeparator + 1, LEN(@TmpString) - @PosSeparator)))
END
RETURN
END
You can't. But you can make the IN work like a LIKE:
select *
from mytable t join
SplitListString(@Team, ',') s
on t.ColumnA like '%'+s.param+'%'
That is, move the split list to an explicit join. Replace param with the actual column name returned by your function, and use LIKE for the comparison.
Or, if you prefer:
select *
from mytable t cross join
SplitListString(@Team, ',') s
where t.ColumnA like '%'+s.param+'%'
The two versions are equivalent and should produce the same execution plan.
A better approach would be to have a TeamsTable (teamID, teamName, ...) and a teamMembersTable (teamMemberID, teamID, teamMemberDetails, ...).
Then you can build your dropdown list as
SELECT ... FROM TeamsTable ...;
and
SELECT ... FROM teamMembersTable WHERE teamID IN (valueFromYourDropDown);
Or you can just store your teamID or teamName (or both) in your (equivalent of) teamMembersTable
You're not going to get IN to work the same as LIKE without a lot of work. You could do something like this, though (it would be nice to see some of your actual data so we could give better solutions):
SELECT *
FROM table
WHERE LEFT(field,3) IN (@Parameter)
If you'd like better performance, create a code field on your table and update it like this:
UPDATE table
SET codeField = LEFT(field,3)
Then just add an index on that field and run this query to get your results:
SELECT *
FROM table
WHERE codeField IN (@Parameter)
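The supporting index mentioned above would be something along these lines (names are placeholders matching the pseudo-table in the answer):
CREATE INDEX IX_table_codeField ON [table] (codeField);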