Concatenate all columns, with their column names included, into one string per row - SQL

CREATE TABLE myTable
(
COL1 int,
COL2 varchar(10),
COL3 float
)
INSERT INTO myTable
VALUES (1, 'c2r1', NULL), (2, 'c2r2', 2.335)
I want an output with for every row of a table one string with all columns and the names in it.
Something like:
COL1=1|COL2=c2r1|COL3=NULL
COL1=2|COL2=c2r2|COL3=2.335
I have a table with a lot of columns, so it has to be dynamic (I would use it on different tables as well). Is there an easy solution where I can choose the separator and things like that? (It also has to handle NULL values and numeric values.)
I am using SQL Server 2019.

Since you are on 2019, string_agg() with a bit of JSON will do it.
Example
Select NewVal
 From MyTable A
 Cross Apply ( Select NewVal = string_agg([key]+'='+isnull(value,'null'),'|')
                 From OpenJson((Select A.* For JSON Path,Without_Array_Wrapper,INCLUDE_NULL_VALUES ))
             ) B
Results
NewVal
COL1=1|COL2=c2r1|COL3=null
COL1=2|COL2=c2r2|COL3=2.335000000000000e+000 -- Don't like the float
EDIT to Trap FLOATs
Select NewVal
 From MyTable A
 Cross Apply ( Select NewVal = string_agg([key]+'='+isnull(case when value like '%0e+0%'
                                                                then concat('',convert(decimal(15,3),convert(float,value)))
                                                                else value
                                                           end,'null'),'|')
                 From OpenJson((Select A.* For JSON Path,Without_Array_Wrapper,INCLUDE_NULL_VALUES ))
             ) B
Results
NewVal
COL1=1|COL2=c2r1|COL3=null
COL1=2|COL2=c2r2|COL3=2.335
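Since the question asks to choose the separator, here is a minimal tweak of the above (assuming a @Sep variable; string_agg() accepts a variable as the separator since SQL Server 2017):
Declare @Sep varchar(5) = '|'  -- hypothetical separator variable

Select NewVal
 From MyTable A
 Cross Apply ( Select NewVal = string_agg([key]+'='+isnull(value,'null'), @Sep)
                 From OpenJson((Select A.* For JSON Path,Without_Array_Wrapper,INCLUDE_NULL_VALUES ))
             ) B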

Would one dare to abuse json for this?
SELECT REPLACE (REPLACE (REPLACE (REPLACE (REPLACE (ca.js,'":','='), ',"','|'), '"',''), '[{','') ,'}]','') AS data
FROM (SELECT col1 as id FROM myTable) AS list
CROSS APPLY
(
SELECT t.col1
, t.col2
, cast(t.col3 as decimal(16,3)) as col3
FROM myTable t
WHERE t.col1 = list.id
FOR JSON AUTO, INCLUDE_NULL_VALUES
) ca(js)
It'll work with a simple SELECT t.* in the cross apply, but the floats tend to be a bit too long then.


Postgresql subtract comma separated string in one column from another column

The format is like:
col1                col2
V1,V2,V3,V4,V5,V6   V4,V1,V6
V1,V2,V3            V2,V3
I want to create another column called col3 which contains the subtraction of two columns.
What I have tried:
UPDATE myTable
SET col3=(replace(col1,col2,''))
It works well for rows like row 2, where col2 appears as a contiguous substring of col1, but since the order of the replaced patterns matters it fails for rows like row 1.
I was wondering if there's a perfect way to achieve the same goal for rows like row1.
So the desired output would be:
col1                col2       col3
V1,V2,V3,V4,V5,V6   V4,V1,V6   V2,V3,V5
V1,V2,V3            V2,V3      V1
Any suggestions would be appreciated!
Split the values into tables, subtract the sets, and then assemble the result back. All of this is possible as a single expression defining a new query column.
with t (col1,col2) as (values
('V1,V2,V3,V4,V5,V6','V4,V1,V6'),
('V1,V2,V3','V2,V3')
)
select col1,col2
, (
select string_agg(v,',')
from (
select v from unnest(string_to_array(t.col1,',')) as a1(v)
except
select v from unnest(string_to_array(t.col2,',')) as a2(v)
) x
)
from t
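Note that without an order by inside string_agg() the element order in the result is unspecified; if a stable order matters, a hedged tweak:
select string_agg(v, ',' order by v)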
You will have to unnest the elements then apply an EXCEPT clause on the "unnested" rows and aggregate back:
select col1,
col2,
(select string_agg(item,',' order by item)
from (
select *
from string_to_table(col1, ',') as c1(item)
except
select *
from string_to_table(col2, ',') as c2(item)
) t)
from the_table;
I wouldn't store that result in a separate column, but if you really need to introduce even more problems by storing yet another comma-separated list:
update the_table
set col3 = (select string_agg(item,',' order by item)
from (
select *
from string_to_table(col1, ',') as c1(item)
except
select *
from string_to_table(col2, ',') as c2(item)
) t)
;
string_to_table() requires Postgres 14 or newer. If you are using an older version, you need to use unnest(string_to_array(col1, ',')) instead.
If you need that a lot, consider creating a function:
create function remove_items(p_one text, p_other text)
returns text
as
$$
select string_agg(item,',' order by item)
from (
select *
from string_to_table(p_one, ',') as c1(item)
except
select *
from string_to_table(p_other, ',') as c2(item)
) t;
$$
language sql
immutable;
Then the above can be simplified to:
select col1, col2, remove_items(col1, col2)
from the_table;
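A quick sanity check of the function against the sample data (expected results shown as comments):
select remove_items('V1,V2,V3,V4,V5,V6', 'V4,V1,V6');  -- V2,V3,V5
select remove_items('V1,V2,V3', 'V2,V3');              -- V1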
Note: PostgreSQL is not my forte, but I thought I'd have a go at it. Try:
SELECT col1, col2, RTRIM(REGEXP_REPLACE(Col1,CONCAT('\m(?:', REPLACE(Col2,',','|'),')\M,?'),'','g'), ',') as col3 FROM myTable
See an online fiddle.
The idea is to use a regular expression to replace all values, based on the following pattern:
\m - Word-boundary at start of word;
(?:V4|V1|V6) - A non-capture group that holds the alternatives from col2;
\M - Word-boundary at end of word;
,? - Optional comma.
When the matches are replaced with nothing, we need to clean up a possible trailing comma with RTRIM(). See an online demo where I had to replace the word-boundaries with the \b word-boundary to showcase the outcome.
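To verify the pattern on row 1 in isolation, a standalone check (plain literals only, so it can be run anywhere):
SELECT RTRIM(REGEXP_REPLACE('V1,V2,V3,V4,V5,V6', '\m(?:V4|V1|V6)\M,?', '', 'g'), ',');
-- expected: V2,V3,V5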

Presto: insert values into a column of type array<struct<pos:int, date:string>>

I have a column 'col2' which is of type
array<struct<pos:int, date:string>>
I need to check if the column is empty, insert values into it if so, and then unnest the values in the column:
case WHEN CARDINALITY(col2) = 0 THEN ARRAY[(0,'value1'),(0,'value2')] else col2 end as col2
Below is the SQL:
WITH CTE AS
(SELECT
col1,
case
WHEN CARDINALITY(col2) = 0 THEN ARRAY[(0,'value1'),(0,'value2')]
else col2
end as col2
FROM table1
)
SELECT
col1,
column2.value1 AS pos,
column2.value2 AS date
FROM CTE
CROSS JOIN UNNEST(col2) AS t(column2)
Because the case expression returns [{field1=1,field2=2020-03-01},{field1=1,field2=2020-01-09}], I am not able to unpack it as value1 and value2, and the above expression throws an error.
Can anyone help me to fix this?
When the elements of an array are of type row, UNNEST expands them into separate columns. You need to adjust the UNNEST clause to reflect this.
Here's an example (tested with Trino 351, formerly known as Presto SQL):
WITH
data(entries) AS (VALUES
ARRAY[],
ARRAY[(1,'x'),(2,'y')]
),
cte(entries) AS (
SELECT if(cardinality(entries) = 0, ARRAY[(0,'value1'),(0,'value2')], entries)
FROM data
)
SELECT pos, date
FROM cte
CROSS JOIN UNNEST(entries) AS t(pos, date)
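For reference, the query above should yield something like this (row order may vary; the empty array picks up the fallback values):
pos  date
0    value1
0    value2
1    x
2    y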

dynamically add column names and add their values as a row sql server

I have a table table1 with columns a, b, c, d, e, f.
Now the task is to get the value of each column (which will definitely be a single-row value) and insert them into another table, table2, with columns (x, y, z). So my query would be like:
insert into table2 (x, y, z)
select a, '', '' from table1
union all
select b, '', '' from table1
union all
select c, '', '' from table1
union all
select d, '', '' from table1
union all
select e, '', '' from table1
.
.
.
union all
select f, '', '' from table1
Now if a new column is added to table1, I have to add another select statement to this. I just want to avoid this: how can I write a dynamic query which automatically considers all the columns and makes it shorter?
Seems like you're looking for a dynamic EAV structure (Entity Attribute Value). Now the cool part is that @YourTable could be any query.
Declare @YourTable table (ID int,Col1 varchar(25),Col2 varchar(25),Col3 varchar(25))
Insert Into @YourTable values
(1,'a','z','k')
,(2,'g','b','p')
,(3,'k','d','a')

Select A.ID
,C.*
From @YourTable A
Cross Apply (Select XMLData=cast((Select A.* for XML Raw) as xml)) B
Cross Apply (
Select Attribute = attr.value('local-name(.)','varchar(100)')
,Value = attr.value('.','varchar(max)') -- change datatype if necessary
From B.XMLData.nodes('/row') as A(r)
Cross Apply A.r.nodes('./@*') AS B(attr)
Where attr.value('local-name(.)','varchar(100)') not in ('ID','OtherFieldsToExclude') -- field names are case sensitive
) C
Returns
ID Attribute Value
1 Col1 a
1 Col2 z
1 Col3 k
2 Col1 g
2 Col2 b
2 Col3 p
3 Col1 k
3 Col2 d
3 Col3 a
A simpler way to do this uses cross apply:
insert into table2 (x, y, z)
select v.x, '', ''
from table1 t1 cross apply
(values (t1.a), (t1.b), (t1.c), (t1.d), (t1.e), (t1.f)
) v(x);
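If the goal is to avoid editing the query whenever columns change, here is a hedged sketch that builds the same cross apply dynamically from sys.columns (it assumes SQL Server 2017+ for string_agg, and that every column converts to nvarchar):
declare @sql nvarchar(max);

select @sql = N'insert into table2 (x, y, z) '
            + N'select v.x, '''', '''' from table1 t1 cross apply (values '
            + string_agg(N'(convert(nvarchar(max), t1.' + quotename(name) + N'))', N',')
                  within group (order by column_id)
            + N') v(x);'
from sys.columns
where object_id = object_id(N'dbo.table1');

exec sp_executesql @sql;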
If you want to insert new values when new columns are added to the table, then you would want a DDL and probably a DML trigger. DML triggers are the "standard" triggers.
You can read about DDL triggers in the documentation.
That said, I am highly suspicious of database systems that encourage new columns and new tables to be added. There is probably a better way to design the application, for instance, using an EAV data model that provides greater flexibility with attributes.
try this
insert into table2
select Tmp.id, tb1.*
from table1 tb1,
((SELECT B.id
  FROM (SELECT [value] = CONVERT(XML, '<v>' + REPLACE('a,b,c,d,e,f', ',', '</v><v>') + '</v>')) A
  OUTER APPLY
  (SELECT id = N.v.value('.', 'varchar(100)') FROM A.[value].nodes('/v') N(v)) B)) Tmp
This, if I am reading it correctly, looks like a perfect time to use UNPIVOT, which rotates columns into rows.
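A hedged sketch of what that could look like (UNPIVOT requires the listed columns to share a compatible type, it skips NULLs, and the column list is still static):
insert into table2 (x, y, z)
select up.val, '', ''
from table1
unpivot (val for colname in (a, b, c, d, e, f)) as up;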

T-SQL function to split string with two delimiters as column separators into table

I'm looking for a t-sql function to get a string like:
a:b,c:d,e:f
and convert it to a table like
ID Value
a b
c d
e f
Everything I found on the Internet used single-column parsing (e.g. XMLSplit function variations), but none of them let me describe my string with two delimiters, one for column separation and the other for row separation.
Can you please guide me on this issue? I have very limited T-SQL knowledge and cannot fork those ready-made functions into a two-column solution.
You can find a split() function on the web. Then, you can do string logic:
select left(val, charindex(':', val) - 1) as col1,
       substring(val, charindex(':', val) + 1, len(val)) as col2
from dbo.split(@str, ',') s(val);
You can use a custom SQL Split function in order to separate the data-value columns. Here is a SQL split function that you can use on a development system. It returns an ID value that can be helpful to keep id and value together.
You need to split twice: first using ",", then a second split using the ":" character.
declare @str nvarchar(100) = 'a:b,c:d,e:f'
select
id = max(id),
value = max(value)
from (
select
rowid,
id = case when id = 1 then val else null end,
value = case when id = 2 then val else null end
from (
select
s.id rowid, t.id, t.val
from (
select * from dbo.Split(@str, ',')
) s
cross apply dbo.Split(s.val, ':') t
) k
) m group by rowid
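On SQL Server 2016 or newer, a hedged alternative that avoids a custom Split function entirely is the built-in string_split() plus charindex():
declare @str nvarchar(100) = 'a:b,c:d,e:f'

select ID    = left(value, charindex(':', value) - 1),
       Value = substring(value, charindex(':', value) + 1, len(value))
from string_split(@str, ',')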

Two or more results of one CASE statement in SQL

Is it possible to SELECT the values of two or more columns with one shot of a CASE expression? I mean, instead of:
select
ColumnA = case when CheckColumn='condition' then 'result1' end
,ColumnB = case when CheckColumn='condition' then 'result2' end
Something like:
select case when CheckColumn='condition' then ColumnA='result1', ColumnB='result2' end
UPDATE
Just the same as we can do with the UPDATE statement:
update CTE
set ColumnA='result1', ColumnB='result2'
where CheckColumn='condition'
It is not possible with a CASE expression.
You need a new CASE for every column.
It is not possible, but you could use a table value constructor as a workaround, storing each value for ColumnA and ColumnB against your check column:
SELECT t.CheckColumn,
v.ColumnA,
v.ColumnB
FROM dbo.YourTable AS t
LEFT JOIN
(VALUES
('Condition1', 'Result1', 'Result2'),
('Condition2', 'Result3', 'Result4'),
('Condition3', 'Result5', 'Result6')
) AS v (CheckColumn, ColumnA, ColumnB)
ON v.CheckColumn = t.CheckColumn;
If you have more complex conditions, then you can still apply this logic, but just use a pseudo-result for the join:
SELECT t.CheckColumn,
v.ColumnA,
v.ColumnB
FROM dbo.YourTable AS t
LEFT JOIN
(VALUES
(1, 'Result1', 'Result2'),
(2, 'Result3', 'Result4'),
(3, 'Result5', 'Result6')
) AS v (ConditionID, ColumnA, ColumnB)
ON v.ConditionID = CASE WHEN <some long expression> THEN 1
WHEN <some other long expression> THEN 2
ELSE 3
END;
The equivalent select to the update is:
select 'result1', 'result2'
. . .
where CheckColumn = 'condition';
Your select is different because it produces NULL values. There is an arcane way you can essentially do this with outer apply:
select t2.*
from . . . outer apply
(select t.*
from (select 'result1' as col1, 'result2' as col2) t
where CheckColumn = 'condition'
) t2;
This will return NULL values when there is no match. And, you can have as many columns as you would like.
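A self-contained illustration of that pattern (the table variable and sample values are made up for the demo):
declare @t table (CheckColumn varchar(20));
insert into @t values ('condition'), ('other');

select t2.*
from @t t outer apply
     (select x.col1, x.col2
      from (select 'result1' as col1, 'result2' as col2) x
      where t.CheckColumn = 'condition'
     ) t2;
-- returns (result1, result2) for the matching row and (NULL, NULL) otherwise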
What I understood from your question is that you want to update multiple columns if a certain condition is true.
For such situations you can use the MERGE statement.
An example of using MERGE is given on MSDN.
Code example:
-- MERGE statement for update.
USE [Database Name];
GO
MERGE Inventory ity
USING [Order] ord
ON ity.ProductID = ord.ProductID
WHEN MATCHED THEN
UPDATE
SET ity.Quantity = ity.Quantity - ord.Quantity;
More MERGE statement examples can be found in the documentation.
You could maybe solve this with a CTE or a CROSS APPLY, something like:
DECLARE #tbl2 TABLE(inx INT, val1 VARCHAR(10),val2 VARCHAR(10));
INSERT INTO #tbl2 VALUES(1,'value1a','value1b'),(2,'value2a','value2b'),(3,'value2a','value2b');
UPDATE yourTable SET col1=subTable.val1,col2=subTable.val2
FROM yourTable
CROSS APPLY(
SELECT val1,val2
FROM #tbl2
WHERE inx=1 --YourCondition
) AS subTable