Display Columns To Rows - sql

I have a table that looks like this:
AccountNumber, Warning01, Warning01ExpirationDate, Warning02, Warning02ExpirationDate, .....
1234, 3,'2017-09-06',0, null
78976, 1,'2015-04-03',2,null
I would like to show the result as follow:
AccountNumber,Warning,ExpirationDate
1234,3,'2017-09-06'
78976,1,'2015-04-03'
78976,2,null
If the warning is 0 or null, I want to ignore it.
Any ideas?

In T-SQL, just use cross apply with a values() constructor to unpivot your dataset:
select x.*
from mytable t
cross apply (values
    (t.accountNumber, t.warning01, t.warning01expirationDate),
    (t.accountNumber, t.warning02, t.warning02expirationDate)
) as x(accountNumber, warning, expirationDate)
where x.warning <> 0
The where clause also drops rows whose warning is null, because null <> 0 does not evaluate to true, so both the 0 and null cases are ignored as requested.
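For reference, a minimal setup to test this against the sample data from the question (the column types are assumptions):
create table mytable (
    accountNumber int,
    warning01 int,
    warning01expirationDate date,
    warning02 int,
    warning02expirationDate date
);

insert into mytable values
    (1234,  3, '2017-09-06', 0, null),
    (78976, 1, '2015-04-03', 2, null);
Against these rows the query returns (1234, 3, 2017-09-06), (78976, 1, 2015-04-03) and (78976, 2, NULL).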

Related

Concatenate all columns, with the column names included, into one string for every row

CREATE TABLE myTable
(
COL1 int,
COL2 varchar(10),
COL3 float
)
INSERT INTO myTable
VALUES (1, 'c2r1', NULL), (2, 'c2r2', 2.335)
I want an output with for every row of a table one string with all columns and the names in it.
Something like:
COL1=1|COL2=c2r1|COL3=NULL
COL1=2|COL2=c2r2|COL3=2.335
I have a table with a lot of columns, so it has to be dynamic (I would use it on different tables also). Is there an easy solution where I can do this and choose the separator and so on? (It has to deal with NULL values and numeric values as well.)
I am using SQL Server 2019.
Since you are on 2019, you can use string_agg() with a bit of JSON.
Example
Select NewVal
From MyTable A
Cross Apply ( Select NewVal = string_agg([key]+'='+isnull(value,'null'),'|')
From OpenJson((Select A.* For JSON Path,Without_Array_Wrapper,INCLUDE_NULL_VALUES ))
) B
Results
NewVal
COL1=1|COL2=c2r1|COL3=null
COL1=2|COL2=c2r2|COL3=2.335000000000000e+000 -- Don't like the float
EDIT to Trap FLOATs
Select NewVal
From MyTable A
Cross Apply ( Select NewVal = string_agg([key]+'='+isnull(case when value like '%0e+0%' then concat('',convert(decimal(15,3),convert(float,value))) else value end,'null'),'|')
From OpenJson((Select A.* For JSON Path,Without_Array_Wrapper,INCLUDE_NULL_VALUES ))
) B
Results
NewVal
COL1=1|COL2=c2r1|COL3=null
COL1=2|COL2=c2r2|COL3=2.335
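If the separator (or the text used for NULLs) needs to be configurable, string_agg() accepts a variable as its separator, so the same idea can be parameterised; a sketch, with @Separator and @NullText as assumed names:
Declare @Separator varchar(10) = '|'   -- assumed parameter: field separator
Declare @NullText  varchar(10) = 'null' -- assumed parameter: text shown for NULLs

Select NewVal
 From MyTable A
 Cross Apply ( Select NewVal = string_agg([key]+'='+isnull(value,@NullText),@Separator)
                From OpenJson((Select A.* For JSON Path,Without_Array_Wrapper,INCLUDE_NULL_VALUES ))
             ) B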
Would one dare to abuse json for this?
SELECT REPLACE (REPLACE (REPLACE (REPLACE (REPLACE (ca.js,'":','='), ',"','|'), '"',''), '[{','') ,'}]','') AS data
FROM (SELECT col1 as id FROM myTable) AS list
CROSS APPLY
(
SELECT t.col1
, t.col2
, cast(t.col3 as decimal(16,3)) as col3
FROM myTable t
WHERE t.col1 = list.id
FOR JSON AUTO, INCLUDE_NULL_VALUES
) ca(js)
It'll work with a simple SELECT t.* in the cross apply, but the floats tend to be a bit too long then.

Given a specific column value, merge two columns in T-SQL

I have a table with the following content (simplified):
And this is the desired result:
In short, the first column has hundreds of values, sometimes repeated. For a given value of IDPRODUCTFIRST I want a RESULT column with the given value plus the values of IDPRODUCTSECOND.
SELECT IDPRODUCTSECOND AS RESULT
FROM [SCIOHIST].[dbo].[RELATIONPRODUCTMATCHES]
WHERE IDPRODUCTFIRST = 228697
With the query above I can only get the values from the second column. How could I add the given value (e.g. 228697) from the first column to the result column?
One method is to unpivot and select distinct values:
SELECT DISTINCT v.RESULT
FROM [SCIOHIST].[dbo].[RELATIONPRODUCTMATCHES] RPM
CROSS APPLY (VALUES (IDPRODUCTFIRST), (IDPRODUCTSECOND)) V(RESULT)
WHERE IDPRODUCTFIRST = 228697;
SELECT DISTINCT IDPRODUCTFIRST AS RESULT
FROM [SCIOHIST].[dbo].[RELATIONPRODUCTMATCHES]
--WHERE IDPRODUCTFIRST = 228697
UNION
SELECT DISTINCT IDPRODUCTSECOND AS RESULT
FROM [SCIOHIST].[dbo].[RELATIONPRODUCTMATCHES]
--WHERE IDPRODUCTFIRST = 228697
The WHERE clauses are optional, depending on whether you want to filter on a specific IDPRODUCTFIRST.
If you want duplicate values from both columns in your result, you can use UNION ALL instead of UNION.
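For instance, with two hypothetical rows in which 228699 appears in both columns, the difference is visible:
Declare @T Table (IDPRODUCTFIRST int, IDPRODUCTSECOND int)
Insert Into @T Values (228697, 228699), (228699, 228701)  -- hypothetical sample rows

Select IDPRODUCTFIRST AS RESULT From @T
UNION
Select IDPRODUCTSECOND From @T
-- returns 228697, 228699, 228701

Select IDPRODUCTFIRST AS RESULT From @T
UNION ALL
Select IDPRODUCTSECOND From @T
-- returns 228697, 228699, 228699, 228701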
You can use Union
; With cteProd
as
(
SELECT IDPRODUCTFIRST, IDPRODUCTSECOND
FROM [SCIOHIST].[dbo].[RELATIONPRODUCTMATCHES]
)
Select RESULT from
(
SELECT IDPRODUCTFIRST, IDPRODUCTFIRST AS RESULT
FROM cteProd
Union
SELECT IDPRODUCTFIRST, IDPRODUCTSECOND AS RESULT
FROM cteProd
) Q
WHERE IDPRODUCTFIRST = 228697
Yet another option is UNPIVOT
Example
Declare @YourTable Table ([IDPRODUCTFIRST] varchar(50),[IDPRODUCTSECOND] varchar(50))
Insert Into @YourTable Values
(228697,228699)
,(228697,228701)
Select Distinct Result
From (Select [IDPRODUCTFIRST],[IDPRODUCTSECOND]
From @YourTable
Where [IDPRODUCTFIRST] = 228697
) a
Unpivot ( Result for Item in ([IDPRODUCTFIRST],[IDPRODUCTSECOND]) ) unp
Returns
Result
228697
228699
228701

Can you set an environment variable to round numbers in a SQL Server query?

I have a bunch of calculations in a SQL Server 2012 query, kind of like:
select T1_month
,a.some_value
,b.value_to_compare
,(select (some_value - value_to_compare)/value_to_compare*100 where value_to_compare != 0) percent_diff
from
(select T1_month
,sum(some_value) some_value
from T1
group by T1_month) a
join
(select T2_month
,sum(value_to_compare) value_to_compare
from T2
group by T2_month) b
on a.T1_month = b.T2_month
order by T1_month;
I used a round function, like the line below, but I need to add a lot more similar lines. Is there any way to just set a global variable to round all columns in one shot? Otherwise it's just a lot of legwork.
round((some_value - value_to_compare)/value_to_compare*100, 2)
I'll be pasting to Excel but it would be nice to round it in the source without having to use the round function so many times.
Here is a workaround with no need to type the Round() function on every line: inserting into a column declared as NUMERIC(18,2) rounds the value to two decimal places automatically.
SELECT 9.0 / 7 * 100 AS Direct_Query
CREATE TABLE #Table_1 (
[Column_1] [NUMERIC](18, 2) NULL
)
INSERT INTO #Table_1
SELECT 9.0/7 * 100
SELECT Column_1 AS Temp_Table_Formatted FROM #Table_1
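With the sample expression 9.0/7 * 100, the direct query returns the full-precision result while the temp table column rounds on insert; roughly (the exact scale of the unrounded value depends on SQL Server's decimal arithmetic rules):
Direct_Query            Temp_Table_Formatted
128.571428571400        128.57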
Edit:
If you cannot use a temp table, you can wrap your query in a CTE and then Round() the results of the CTE, which is pretty easy to do with the help of a multiline editor like Sublime Text or Visual Studio Code:
WITH CTE_Result_To_Format
AS (
SELECT
T1_month
,a.some_value
,b.value_to_compare
,(
SELECT
(some_value - value_to_compare) / value_to_compare * 100
WHERE value_to_compare != 0
)
percent_diff
FROM (
SELECT
T1_month
,SUM(some_value) some_value
FROM T1
GROUP BY
T1_month
) a
JOIN (
SELECT
T2_month
,SUM(value_to_compare) value_to_compare
FROM T2
GROUP BY
T2_month
) b
ON a.T1_month = b.T2_month
)
SELECT
r.T1_month
,ROUND(r.some_value, 2) AS some_value
,ROUND(r.value_to_compare, 2) AS value_to_compare
,ROUND(r.percent_diff, 2) AS percent_diff
FROM CTE_Result_To_Format r
ORDER BY
r.T1_month

Convert and sum variable a, grouping by variable b

I would like to convert the variable ar66 from nvarchar to numeric and sum it, grouped by the variable ar5.
I created the following code, but it does not work:
select top(10) ar5, (
select
case
when isnumeric(q1.ar66) = 1 then
cast(q1.ar66 AS numeric)
else
NULL
end
AS 'ar66_numeric'
from rmb_loan q1)
from rmb_loan q2
group by q2.ar5
Do you have any suggestion to solve the problem?
Does this do what you want?
select top (10) ar5, sum(try_convert(numeric(38, 6), q2.ar66))
from rmb_loan q2
group by q2.ar5;
When using select top you should normally have an order by clause.
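try_convert() is also safer than the isnumeric() check in the original attempt, because isnumeric() returns 1 for strings that still fail a cast to numeric; a quick illustration:
select isnumeric('$')                     as isnumeric_dollar,   -- 1, although '$' cannot be cast to numeric
       try_convert(numeric(38, 6), '$')   as tryconvert_dollar,  -- NULL instead of a conversion error
       isnumeric('1e4')                   as isnumeric_sci,      -- 1, scientific notation passes isnumeric
       try_convert(numeric(38, 6), '1e4') as tryconvert_sci;     -- NULL, '1e4' does not convert to numeric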

Two or more results of one CASE statement in SQL

Is it possible to SELECT the values of two or more columns with one shot of a CASE statement? I mean, instead of:
select
ColumnA = case when CheckColumn='condition' then 'result1' end
,ColumnB = case when CheckColumn='condition' then 'result2' end
Something like:
select case when CheckColumn='condition' then ColumnA='result1', ColumnB='result2' end
UPDATE
Just the same as we can do with the UPDATE statement:
update CTE
set ColumnA='result1', ColumnB='result2'
where CheckColumn='condition'
It is not possible with a CASE expression.
You need a new CASE for every column.
It is not possible, but as a workaround you could use a table value constructor to store each value for ColumnA and ColumnB against your check column:
SELECT t.CheckColumn,
v.ColumnA,
v.ColumnB
FROM dbo.YourTable AS t
LEFT JOIN
(VALUES
('Condition1', 'Result1', 'Result2'),
('Condition2', 'Result3', 'Result4'),
('Condition3', 'Result5', 'Result6')
) AS v (CheckColumn, ColumnA, ColumnB)
ON v.CheckColumn = t.CheckColumn;
If you have more complex conditions, then you can still apply this logic, but just use a pseudo-result for the join:
SELECT t.CheckColumn,
v.ColumnA,
v.ColumnB
FROM dbo.YourTable AS t
LEFT JOIN
(VALUES
(1, 'Result1', 'Result2'),
(2, 'Result3', 'Result4'),
(3, 'Result5', 'Result6')
) AS v (ConditionID, ColumnA, ColumnB)
ON v.ConditionID = CASE WHEN <some long expression> THEN 1
WHEN <some other long expression> THEN 2
ELSE 3
END;
The equivalent select to the update is:
select 'result1', 'result2'
. . .
where CheckColumn = 'condition';
Your select is different because it produces NULL values. There is an arcane way you can essentially do this with outer apply:
select t2.*
from . . . outer apply
(select t.*
from (select 'result1' as col1, 'result2' as col2) t
where CheckColumn = 'condition'
) t2;
This will return NULL values when there is no match. And, you can have as many columns as you would like.
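A concrete sketch of the same outer apply idea, reusing the dbo.YourTable and CheckColumn names assumed in the earlier answer:
select t.CheckColumn, t2.col1 as ColumnA, t2.col2 as ColumnB
from dbo.YourTable as t outer apply
     (select x.*
      from (select 'result1' as col1, 'result2' as col2) x  -- the pair of results for this condition
      where t.CheckColumn = 'condition'                     -- correlated condition, evaluated per row
     ) t2;
Rows where CheckColumn does not match get NULL in both columns, which matches the behaviour of the original pair of CASE expressions.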
What I understood from your question is that you want to update multiple columns if a certain condition is true.
For such a situation you can use a MERGE statement.
An example of using MERGE is given in the MSDN documentation.
Code example:
-- MERGE statement for update.
USE [Database Name];
GO
MERGE Inventory ity
USING [Order] ord
ON ity.ProductID = ord.ProductID
WHEN MATCHED THEN
UPDATE
SET ity.Quantity = ity.Quantity - ord.Quantity;
You could maybe solve this with a CTE or a CROSS APPLY, something like:
DECLARE #tbl2 TABLE(inx INT, val1 VARCHAR(10),val2 VARCHAR(10));
INSERT INTO #tbl2 VALUES(1,'value1a','value1b'),(2,'value2a','value2b'),(3,'value2a','value2b');
UPDATE yourTable SET col1=subTable.val1,col2=subTable.val2
FROM yourTable
CROSS APPLY(
SELECT val1,val2
FROM #tbl2
WHERE inx=1 --YourCondition
) AS subTable