Dynamically add column names and their values as rows (SQL Server)

I have a table table1 with columns a, b, c, d, e, f.
Now the task is to get the value of each column (which will definitely be a single-row value) and insert it into another table, table2, with columns (x, y, z). So my query would be like:
insert into table2 (x, y, z)
select a, '', '' from table1
union all
select b, '', '' from table1
union all
select c, '', '' from table1
union all
select d, '', '' from table1
union all
select e, '', '' from table1
...
union all
select f, '', '' from table1
Now if a new column is added to table1, I have to add another select statement to this. I want to avoid that: how can I write a dynamic query that automatically considers all the columns and is shorter?

Seems like you're looking for a dynamic EAV (Entity Attribute Value) structure. Now the cool part is that @YourTable could be any query:
Declare @YourTable table (ID int, Col1 varchar(25), Col2 varchar(25), Col3 varchar(25))
Insert Into @YourTable values
 (1,'a','z','k')
,(2,'g','b','p')
,(3,'k','d','a')

Select A.ID
      ,C.*
 From  @YourTable A
 Cross Apply (Select XMLData = cast((Select A.* for XML Raw) as xml)) B
 Cross Apply (
              Select Attribute = attr.value('local-name(.)', 'varchar(100)')
                    ,Value     = attr.value('.', 'varchar(max)')  -- change datatype if necessary
               From  B.XMLData.nodes('/row') as C1(n)
               Cross Apply C1.n.nodes('./@*') AS C2(attr)
               Where attr.value('local-name(.)', 'varchar(100)') not in ('ID','OtherFieldsToExclude')  -- field names are case sensitive
             ) C
Returns

ID  Attribute  Value
1   Col1       a
1   Col2       z
1   Col3       k
2   Col1       g
2   Col2       b
2   Col3       p
3   Col1       k
3   Col2       d
3   Col3       a
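Applied back to the original question, the same pattern could drive the insert directly. A minimal sketch, reusing the table1/table2 names from the question:

Insert Into table2 (x, y, z)
Select C.Value, '', ''
 From  table1 A
 Cross Apply (Select XMLData = cast((Select A.* for XML Raw) as xml)) B
 Cross Apply (
              -- one output row per column/attribute of table1
              Select Value = attr.value('.', 'varchar(max)')
               From  B.XMLData.nodes('/row') as C1(n)
               Cross Apply C1.n.nodes('./@*') AS C2(attr)
             ) C

New columns in table1 are picked up automatically, since the attribute list comes from Select A.* at run time.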

A simpler way to do this uses cross apply:
insert into table2 (x, y, z)
select v.x, '', ''
from table1 t1 cross apply
(values (t1.a), (t1.b), (t1.c), (t1.d), (t1.e), (t1.f)
) v(x);
If you want to insert new values when new columns are added to the table, then you would want a DDL and probably a DML trigger. DML triggers are the "standard" triggers.
You can read about DDL triggers in the documentation.
That said, I am highly suspicious of database systems that encourage new columns and new tables to be added. There is probably a better way to design the application, for instance, using an EAV data model that provides greater flexibility with attributes.
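That said, if the requirement stands, the column list can also be rebuilt from the catalog views on each run, so nothing needs editing when table1 grows. A minimal sketch, assuming the table names from the question and SQL Server 2017+ for STRING_AGG:

-- Build the UNION ALL insert dynamically from sys.columns.
DECLARE @sql nvarchar(max);

SELECT @sql = N'insert into table2 (x, y, z) '
    + STRING_AGG(
          CAST(N'select ' + QUOTENAME(c.name) + N', '''', '''' from table1' AS nvarchar(max)),
          N' union all ')
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID(N'dbo.table1');

EXEC sys.sp_executesql @sql;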

try this:

insert into table2
select Tmp.id, tb1.*
from table1 tb1,
     ((SELECT B.id
       FROM (SELECT [value] = CONVERT(XML, '<v>' + REPLACE('a,b,c,d,e,f', ',', '</v><v>') + '</v>')) A
       OUTER APPLY
            (SELECT id = N.v.value('.', 'varchar(100)')
             FROM A.[value].nodes('/v') N(v)) B
     )) Tmp

This, if I am reading it correctly, looks like a perfect time to use PIVOT.
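Strictly speaking, the operation described (columns to rows) is closer to UNPIVOT than PIVOT. A sketch against the question's tables; note the column list is still hard-coded, all listed columns must share a data type, and UNPIVOT silently drops NULLs:

-- Turn columns a..f of each row into separate rows.
insert into table2 (x, y, z)
select u.val, '', ''
from table1
unpivot (val for colname in (a, b, c, d, e, f)) as u;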


Concatenate all columns, with their column names, into one string per row

CREATE TABLE myTable
(
    COL1 int,
    COL2 varchar(10),
    COL3 float
)
INSERT INTO myTable
VALUES (1, 'c2r1', NULL), (2, 'c2r2', 2.335)
I want an output with, for every row of the table, one string containing all the column names and values.
Something like:
COL1=1|COL2=c2r1|COL3=NULL
COL1=2|COL2=c2r2|COL3=2.335
I have a table with a lot of columns, so it has to be dynamic (I would use it on different tables as well). Is there an easy solution where I can choose the separator and things like that? (It has to deal with NULL values and numeric values also.)
I am using SQL Server 2019.
Since you are on 2019, string_agg() with a bit of JSON:
Example
Select NewVal
 From  MyTable A
 Cross Apply ( Select NewVal = string_agg([key] + '=' + isnull(value, 'null'), '|')
                From  OpenJson((Select A.* For JSON Path, Without_Array_Wrapper, INCLUDE_NULL_VALUES))
             ) B
Results
NewVal
COL1=1|COL2=c2r1|COL3=null
COL1=2|COL2=c2r2|COL3=2.335000000000000e+000 -- Don't like the float
EDIT to Trap FLOATs
Select NewVal
 From  MyTable A
 Cross Apply ( Select NewVal = string_agg([key] + '=' + isnull(case when value like '%0e+0%'
                                                                    then concat('', convert(decimal(15,3), convert(float, value)))
                                                                    else value
                                                               end, 'null'), '|')
                From  OpenJson((Select A.* For JSON Path, Without_Array_Wrapper, INCLUDE_NULL_VALUES))
             ) B
Results
NewVal
COL1=1|COL2=c2r1|COL3=null
COL1=2|COL2=c2r2|COL3=2.335
Would one dare to abuse json for this?
SELECT REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(ca.js, '":', '='), ',"', '|'), '"', ''), '[{', ''), '}]', '') AS data
FROM (SELECT col1 AS id FROM myTable) AS list
CROSS APPLY
(
    SELECT t.col1
         , t.col2
         , cast(t.col3 as decimal(16,3)) as col3
    FROM myTable t
    WHERE t.col1 = list.id
    FOR JSON AUTO, INCLUDE_NULL_VALUES
) ca(js)
It'll work with a simple SELECT t.* in the cross apply.
But the floats tend to be a bit too long then.

Two or more results of one CASE statement in SQL

Is it possible to SELECT the values of two or more columns with a single CASE expression? I mean, instead of:
select
ColumnA = case when CheckColumn='condition' then 'result1' end
,ColumnB = case when CheckColumn='condition' then 'result2' end
Something like:
select case when CheckColumn='condition' then ColumnA='result1', ColumnB='result2' end
UPDATE
Just the same as we can do with the UPDATE statement:
update CTE
set ColumnA='result1', ColumnB='result2'
where CheckColumn='condition'
It is not possible with a CASE expression; every column needs its own CASE.
It is not possible, but you could use a table value constructor as a workaround, storing the values for ColumnA and ColumnB against each value of your check column:
SELECT t.CheckColumn,
       v.ColumnA,
       v.ColumnB
FROM dbo.YourTable AS t
LEFT JOIN
     (VALUES
         ('Condition1', 'Result1', 'Result2'),
         ('Condition2', 'Result3', 'Result4'),
         ('Condition3', 'Result5', 'Result6')
     ) AS v (CheckColumn, ColumnA, ColumnB)
     ON v.CheckColumn = t.CheckColumn;
If you have more complex conditions, then you can still apply this logic, but just use a pseudo-result for the join:
SELECT t.CheckColumn,
       v.ColumnA,
       v.ColumnB
FROM dbo.YourTable AS t
LEFT JOIN
     (VALUES
         (1, 'Result1', 'Result2'),
         (2, 'Result3', 'Result4'),
         (3, 'Result5', 'Result6')
     ) AS v (ConditionID, ColumnA, ColumnB)
     ON v.ConditionID = CASE WHEN <some long expression> THEN 1
                             WHEN <some other long expression> THEN 2
                             ELSE 3
                        END;
The equivalent select to the update is:
select 'result1', 'result2'
. . .
where CheckColumn = 'condition';
Your select is different because it produces NULL values. There is an arcane way you can essentially do this with outer apply:
select t2.*
from . . . outer apply
(select t.*
from (select 'result1' as col1, 'result2' as col2) t
where CheckColumn = 'condition'
) t2;
This will return NULL values when there is no match. And, you can have as many columns as you would like.
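Spelled out against a placeholder table (dbo.SomeTable and its CheckColumn are illustrative names, not from the original question):

-- Concrete version of the outer apply pattern above.
select s.CheckColumn, t2.*
from dbo.SomeTable s outer apply
     (select t.*
      from (select 'result1' as col1, 'result2' as col2) t
      where s.CheckColumn = 'condition'
     ) t2;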
What I understand from your question is that you want to update multiple columns if a certain condition is true.
For such situations you can use a MERGE statement.
An example of using MERGE is given in the MSDN documentation.
Code example:
-- MERGE statement for update.
USE [Database Name];
GO

MERGE Inventory ity
USING [Order] ord
    ON ity.ProductID = ord.ProductID
WHEN MATCHED THEN
    UPDATE
    SET ity.Quantity = ity.Quantity - ord.Quantity;
More MERGE statement examples are available in the documentation.
You could maybe solve this with a CTE or a CROSS APPLY, something like:
DECLARE @tbl2 TABLE(inx INT, val1 VARCHAR(10), val2 VARCHAR(10));
INSERT INTO @tbl2 VALUES(1,'value1a','value1b'),(2,'value2a','value2b'),(3,'value2a','value2b');

UPDATE yourTable SET col1 = subTable.val1, col2 = subTable.val2
FROM yourTable
CROSS APPLY
(
    SELECT val1, val2
    FROM @tbl2
    WHERE inx = 1 --YourCondition
) AS subTable

SQL Querying on tuple values

I need to write a SQL query that selects values from a table based on several tuples of selection criteria. It could be done using a where clause like this:
where (a = 1 and b='a') or (a=5 and b='s')
Is the best way to select:
select a, pk from x where a in (1,5)
select b, pk from x where b in ('a','s')
and join the result of the two queries using the primary key?
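For what it's worth, joining those two results on the primary key is not equivalent to the tuple condition: a row with a = 1 and b = 's' passes both filters independently, so it would be returned too. A sketch of that join, for comparison:

-- PK join of the two filtered queries; also returns (1, 's').
select q1.pk, q1.a, q2.b
from (select a, pk from x where a in (1, 5)) q1
join (select b, pk from x where b in ('a', 's')) q2
  on q1.pk = q2.pk;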
Do you mean something (a self join) like this:
select x.a, x.pk
from x
join x x2 on x.pk=x2.pk
where x.a in (1,5)
and x2.b in ('a','s')
?
You can join to a table expression built from VALUES; add as many rows to the VALUES list as you want. This works in MSSQL:
DECLARE #x TABLE ( a INT, b CHAR(1) )
INSERT INTO #x
VALUES ( 1, 'a' ),
( 1, 'b' ),
( 1, 'c' ),
( 2, 'd' ),
( 2, 'e' ),
( 5, 'f' ),
( 5, 's' )
SELECT x.*
FROM #x x
JOIN (
VALUES ( 1, 'a'),
( 5, 's')
) AS v( a, b ) ON x.a = v.a AND x.b = v.b
Output:
a b
1 a
5 s
Based on my understanding, you want to write SQL that uses a combination of two filters. Here is a simple solution that will work in any database.
Create a new column, say COLUMN_NEW, in the same table, or build a temp table or a view with the new column (plus the existing columns from the original table).
Insert the concatenated values of column a and column b into COLUMN_NEW. Based on your example, the values in COLUMN_NEW will be '1a' and '5s'.
The syntax for concat may differ between databases, e.g. concat(a, b) in SQL Server.
The SQL to select records from the table will then be: select * from table where COLUMN_NEW in ('1a', '5s');
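One way to realize this is a computed column (a sketch; the table and column names are the ones from the question, and a delimiter is added to avoid ambiguous concatenations):

-- Sketch: persisted computed column; the '|' delimiter prevents
-- collisions such as (1, '1a') and (11, 'a') both becoming '11a'.
ALTER TABLE x
    ADD column_new AS CONCAT(a, '|', b) PERSISTED;

SELECT a, pk
FROM x
WHERE column_new IN ('1|a', '5|s');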

Double IN Statements in SQL

Just curious about the IN statement in SQL.
I know I can search multiple columns with one value by doing
'val1' IN (col1,col2)
And can search a column for multiple values
col1 IN ('val1','val2')
But is there a way to do both of these simultaneously, without resorting to repeating AND/OR in the SQL? I am looking to do this in the most scalable way, independent of how many values/columns I need to search.
So essentially:
('val1','val2') IN (col1,col2)
but valid.
You could do something like this (which I've also put on SQLFiddle):
-- Test data:
WITH t(col1, col2) AS (
SELECT 'val1', 'valX' UNION ALL
SELECT 'valY', 'valZ'
)
-- Solution:
SELECT *
FROM t
WHERE EXISTS (
SELECT 1
-- Join all columns with all values to see if any column matches any value
FROM (VALUES(t.col1),(t.col2)) t1(col)
JOIN (VALUES('val1'),('val2')) t2(val)
ON col = val
)
Of course, one could argue about which version is more concise.
Yes, for example you can do this in Oracle:
select x, y from (select 1 as x, 2 as y from dual)
where (x,y) in (select 1 as p, 2 as q from dual)
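SQL Server does not support row-value constructors in IN, but the same tuple check can be written with EXISTS over a VALUES list (a sketch, reusing the table x from the question):

-- Tuple membership test in T-SQL via EXISTS + VALUES.
SELECT x.a, x.b, x.pk
FROM x
WHERE EXISTS (
    SELECT 1
    FROM (VALUES (1, 'a'), (5, 's')) AS v(a, b)
    WHERE v.a = x.a
      AND v.b = x.b
);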

Easiest way to eliminate NULLs in SELECT DISTINCT?

I am working on a query that is fairly similar to the following:
CREATE TABLE #test (a char(1), b char(1))
INSERT INTO #test(a,b) VALUES
('A',NULL),
('A','B'),
('B',NULL),
('B',NULL)
SELECT DISTINCT a,b FROM #test
DROP TABLE #test
The result is, unsurprisingly,
a b
-------
A NULL
A B
B NULL
The output I would like to see in actuality is:
a b
-------
A B
B NULL
That is, if a column has a value in some records but not in others, I want to throw out the row with NULL for that column. However, if a column has a NULL value for all records, I want to preserve that NULL.
What's the simplest/most elegant way to do this in a single query?
I have a feeling that this would be simple if I weren't exhausted on a Friday afternoon.
Try this:
select distinct * from test
where b is not null or a in (
select a from test
group by a
having max(b) is null)
You can get the fiddle here.
Note if you can only have one non-null value in b, this can be simplified to:
select a, max(b) from test
group by a
Try this:
create table test(
x char(1),
y char(1)
);
insert into test(x,y) values
('a',null),
('a','b'),
('b', null),
('b', null)
Query:
with has_all_y_null as
(
select x
from test
group by x
having sum(case when y is null then 1 end) = count(x)
)
select distinct x,y from test
where
(
-- if a column has a value in some records but not in others,
x not in (select x from has_all_y_null)
-- I want to throw out the row with NULL
and y is not null
)
or
-- However, if a column has a NULL value for all records,
-- I want to preserve that NULL
(x in (select x from has_all_y_null))
order by x,y
Output:
X Y
A B
B NULL
Live test: http://sqlfiddle.com/#!3/259d6/16
EDIT
Seeing Mosty's answer, I simplified my code:
with has_all_y_null as
(
select x
from test
group by x
-- having sum(case when y is null then 1 end) = count(x)
-- should have thought of this instead of the code above. Mosty's logic is good:
having max(y) is null
)
select distinct x,y from test
where
y is not null
or
(x in (select x from has_all_y_null))
order by x,y
I just prefer the CTE approach; it has more self-documenting logic :-)
You can also document the non-CTE approach, if you are conscious about doing so:
select distinct * from test
where b is not null or a in
( -- has all b null
select a from test
group by a
having max(b) is null)
;WITH CTE AS
(
    SELECT DISTINCT * FROM #test
)
SELECT a, b
FROM CTE
ORDER BY CASE WHEN b IS NULL THEN 1 ELSE 0 END, b;
SELECT DISTINCT t.a, t.b
FROM #test t
WHERE b IS NOT NULL
OR NOT EXISTS (SELECT 1 FROM #test u WHERE t.a = u.a AND u.b IS NOT NULL)
ORDER BY t.a, t.b
This is a really weird requirement. I wonder why you need it.
SELECT DISTINCT a, b
FROM test t
WHERE NOT ( b IS NULL
            AND EXISTS ( SELECT *
                         FROM test ta
                         WHERE ta.a = t.a
                           AND ta.b IS NOT NULL ) )
  AND NOT ( a IS NULL
            AND EXISTS ( SELECT *
                         FROM test tb
                         WHERE tb.b = t.b
                           AND tb.a IS NOT NULL ) )
Well, I don't particularly like this solution, but it seems the most appropriate to me. Note that your description of what you want sounds exactly like what you get with a LEFT JOIN, so:
SELECT DISTINCT a.a, b.b
FROM #test a
LEFT JOIN #test b ON a.a = b.a
AND b.b IS NOT NULL
SELECT a,b FROM #test t where b is not null
union
SELECT a,b FROM #test t where b is null
and not exists(select 1 from #test where a=t.a and b is not null)
Result:
a b
---- ----
A B
B NULL
I'll just put here a mix of two answers that solved my issue, because my view was more complex:
-- Columns of VwVanBanco:
--   IdCompe int, Nome varchar(30), IdVanBanco int,
--   IdVan int, FlagAtivo bit, FlagPrincipal bit
select IdCompe
     , Nome
     , max(IdVanBanco)
     , max(IdVan)
     , CAST(MAX(CAST(FlagAtivo as INT)) AS BIT) FlagAtivo
     , CAST(MAX(CAST(FlagPrincipal as INT)) AS BIT) FlagPrincipal
from VwVanBanco
where IdVan = {IdVan} or IdVan is null
group by IdCompe, Nome
order by IdCompe asc
Thanks to mosty mostacho and kenwarner.