SQL unpivot table type conflict

I am running the UNPIVOT code below, but it errors with:
"The type of column "TransDate" conflicts with the type of other columns specified in the UNPIVOT list."
Can someone advise what I need to convert? It seems not to like the datetime column TransDate; everything else in the table is nvarchar.
select DataLoadSysId, DataLoadBatchSysId, RowId, ColumnName, ColumnValue
from (
select ExtractSource, RecordTypeNo, RecordLevel1Code, RecordLevel2Code, TransDate,
MainAccount, Amount, PeriodCode, DataAreaId, SourceFile, DataLoadBatchSysId, LoadDate, ValidationErrors, DataLoadSysId, RowId
from [Staging].[FactFinancialsCoded_Abbas_InitialValidationTest]
) x
UNPIVOT
(
ColumnValue
FOR ColumnName
IN ([ExtractSource], [RecordTypeNo], [RecordLevel1Code], [RecordLevel2Code], [TransDate], [MainAccount], [Amount], [PeriodCode], [DataAreaId])
)
As UnpivotExample

I'm not a fan of the unpivot keyword. I find it easier to just use apply:
select ivt.DataLoadSysId, ivt.DataLoadBatchSysId, ivt.Rowid,
v.ColumnName, v.ColumnValue
from [Staging].[FactFinancialsCoded_Abbas_InitialValidationTest] ivt CROSS APPLY
(VALUES ('ExtractSource', ExtractSource),
('RecordTypeNo', RecordTypeNo),
('RecordLevel1Code', RecordLevel1Code),
('RecordLevel2Code', RecordLevel2Code),
('TransDate', TransDate),
('MainAccount', MainAccount),
('Amount', Amount),
('PeriodCode', PeriodCode),
('DataAreaId', DataAreaId)
) v(ColumnName, ColumnValue);
This doesn't fix the problem by itself. I prefer this form because apply is very powerful, and unpivoting is one convenient application for learning the syntax (technically it implements "lateral joins").
Your problem is competing types: every value in the unpivoted column must share a single type, so you need to convert everything to a string. I can only guess what some of the non-string columns are, but something like:
select ivt.DataLoadSysId, ivt.DataLoadBatchSysId, ivt.Rowid,
v.ColumnName, v.ColumnValue
from [Staging].[FactFinancialsCoded_Abbas_InitialValidationTest] ivt CROSS APPLY
(VALUES ('ExtractSource', ExtractSource),
('RecordTypeNo', RecordTypeNo),
('RecordLevel1Code', RecordLevel1Code),
('RecordLevel2Code', RecordLevel2Code),
('TransDate', convert(varchar(255), TransDate)),
('MainAccount', MainAccount),
('Amount', convert(varchar(255), Amount)),
('PeriodCode', PeriodCode),
('DataAreaId', DataAreaId)
) v(ColumnName, ColumnValue);
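If you want to keep the UNPIVOT syntax from the question, the same fix applies there: cast the offending columns inside the derived table so every column in the IN list has one common type. A sketch, assuming the remaining columns are already nvarchar (nvarchar(255) is a guess; note that UNPIVOT also treats differing lengths such as nvarchar(100) vs nvarchar(250) as conflicting types, so you may need to cast every listed column to one common length):

```sql
select DataLoadSysId, DataLoadBatchSysId, RowId, ColumnName, ColumnValue
from (
    select ExtractSource, RecordTypeNo, RecordLevel1Code, RecordLevel2Code,
           -- cast the datetime (and any numeric) columns to the shared string type
           convert(nvarchar(255), TransDate) as TransDate,
           MainAccount, convert(nvarchar(255), Amount) as Amount,
           PeriodCode, DataAreaId, SourceFile, DataLoadBatchSysId, LoadDate,
           ValidationErrors, DataLoadSysId, RowId
    from [Staging].[FactFinancialsCoded_Abbas_InitialValidationTest]
) x
UNPIVOT
(
    ColumnValue FOR ColumnName
    IN ([ExtractSource], [RecordTypeNo], [RecordLevel1Code], [RecordLevel2Code],
        [TransDate], [MainAccount], [Amount], [PeriodCode], [DataAreaId])
) AS UnpivotExample;
```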

Related

How to convert fields to JSON in Postgresql

I have a table with the following schema (PostgreSQL 14):
message     sentiment   classification
any text    positive    mobile, communication
message is just a string (phrases).
sentiment is a string, only one word.
classification is a string, but can have 1 to many comma-separated words.
I would like to create a JSON field from these columns, like this:
{"msg":"any text", "sentiment":"positive", "classification":["mobile","communication"]}
Also, if possible, is there a way to present the classification this way:
{"msg":"any text", "sentiment":"positive", "classification 1":"mobile", "classification 2":"communication"}
The first part of the question is easy; Postgres provides functions for splitting strings and converting to JSON:
with t(message, sentiment, classification) as (values
('any text','positive','mobile, communication')
)
select row_to_json(x.*)
from (
select t.message
, t.sentiment
, array_to_json(string_to_array(t.classification, ', ')) as classification
from t
) x
The second part is harder: you want the JSON to have a variable number of attributes, a mix of grouped and non-grouped data. I suggest unwinding all attributes and then assembling them back (note the numbered CTE is not actually needed if your real table has an id; I just needed some column to group by):
with t(message, sentiment, classification) as (values
('any text','positive','mobile, communication')
)
, numbered (id, message, sentiment, classification) as (
select row_number() over (order by null)
, t.*
from t
)
, extracted (id,message,sentiment,classification,index) as (
select n.id
, n.message
, n.sentiment
, l.c
, l.i
from numbered n
join lateral unnest(string_to_array(n.classification, ', ')) with ordinality l(c,i) on true
), unioned (id, attribute, value) as (
select id, concat('classification ', index::text), classification
from extracted
union all
select id, 'message', message
from numbered
union all
select id, 'sentiment', sentiment
from numbered
)
select json_object_agg(attribute, value)
from unioned
group by id;
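If the numbered classification keys are all you need, the unioned CTE can also be collapsed: build the fixed keys with jsonb_build_object and merge in the aggregated classification keys with the jsonb || operator. A compact sketch under the same single-row test data (assumes Postgres 9.5+ for ||; note that jsonb does not preserve key insertion order):

```sql
with t(message, sentiment, classification) as (values
  ('any text', 'positive', 'mobile, communication')
)
select jsonb_build_object('msg', t.message, 'sentiment', t.sentiment)
       -- merge in one numbered key per comma-separated classification value
       || (select jsonb_object_agg('classification ' || u.i, u.c)
           from unnest(string_to_array(t.classification, ', '))
                with ordinality u(c, i))
from t;
```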
DB fiddle
Use jsonb_build_object and concatenate the columns you want
SELECT
jsonb_build_object(
'msg',message,
'sentiment',sentiment,
'classification',
string_to_array(classification,','))
FROM mytable;
Demo: db<>fiddle
The second output is definitely not trivial. The SQL code would be much larger and harder to maintain, not to mention that parsing such a file also requires a little more effort.
You can use a CTE to handle the flattening of the classification attributes and then perform the necessary grouping in the main query for each part of the problem:
with cte(r, m, s, k) as (
select row_number() over (order by t.message), t.message, t.sentiment, v.* from tbl t
cross join json_array_elements(array_to_json(string_to_array(t.classification, ', '))) v
)
-- first part --
select json_build_object('msg', t1.message, 'sentiment', t1.sentiment, 'classification', string_to_array(t1.classification, ', ')) from tbl t1
-- second part --
select jsonb_build_object('msg', t1.m, 'sentiment', t1.s)||('{'||t1.g||'}')::jsonb
from (select c.m, c.s, array_to_string(array_agg('"classification '||c.r||'":'||c.k), ', ') g
from cte c group by c.m, c.s) t1

Concatenate all columns, with the column names also in it, into one string for every row

CREATE TABLE myTable
(
COL1 int,
COL2 varchar(10),
COL3 float
)
INSERT INTO myTable
VALUES (1, 'c2r1', NULL), (2, 'c2r2', 2.335)
I want an output with for every row of a table one string with all columns and the names in it.
Something like:
COL1=1|COL2=c2r1|COL3=NULL
COL1=2|COL2=c2r2|COL3=2.335
I have a table with a lot of columns, so it has to be dynamic (I would use it on different tables as well). Is there an easy solution where I can do this and choose the separator and things like that? (It has to deal with NULL values and numeric values too.)
I am using SQL Server 2019.
Since you are on 2019, string_agg() with a bit of JSON.
Example
Select NewVal
From MyTable A
Cross Apply ( Select NewVal = string_agg([key]+'='+isnull(value,'null'),'|')
From OpenJson((Select A.* For JSON Path,Without_Array_Wrapper,INCLUDE_NULL_VALUES ))
) B
Results
NewVal
COL1=1|COL2=c2r1|COL3=null
COL1=2|COL2=c2r2|COL3=2.335000000000000e+000 -- Don't like the float
EDIT to Trap FLOATs
Select NewVal
From MyTable A
Cross Apply ( Select NewVal = string_agg([key]+'='+isnull(case when value like '%0e+0%' then concat('',convert(decimal(15,3),convert(float,value))) else value end,'null'),'|')
From OpenJson((Select A.* For JSON Path,Without_Array_Wrapper,INCLUDE_NULL_VALUES ))
) B
Results
NewVal
COL1=1|COL2=c2r1|COL3=null
COL1=2|COL2=c2r2|COL3=2.335
Would one dare to abuse json for this?
SELECT REPLACE (REPLACE (REPLACE (REPLACE (REPLACE (ca.js,'":','='), ',"','|'), '"',''), '[{','') ,'}]','') AS data
FROM (SELECT col1 as id FROM myTable) AS list
CROSS APPLY
(
SELECT t.col1
, t.col2
, cast(t.col3 as decimal(16,3)) as col3
FROM myTable t
WHERE t.col1 = list.id
FOR JSON AUTO, INCLUDE_NULL_VALUES
) ca(js)
It'll work with a simple SELECT t.* in the cross apply, but the floats tend to be a bit too long then.

Display Columns To Rows

I have a table that looks like this:
AccountNumber, Warning01, Warning01ExpirationDate, Warning02, Warning02ExpirationDate, .....
1234, 3,'2017-09-06',0, null
78976, 1,'2015-04-03',2,null
I would like to show the result as follow:
AccountNumber,Warning,ExpirationDate
1234,3,'2017-09-06'
78976,1,'2015-04-03'
78976,2,null
if the warning is 0 or null, I want to ignore it.
any ideas?
In T-SQL, just use cross apply and values() to unpivot your dataset:
select x.*
from mytable t
cross apply (values
(t.accountNumber, t.warning01, t.warning01expirationDate),
(t.accountNumber, t.warning02, t.warning02expirationDate)
) as x(accountNumber, warning, expirationDate)
where x.warning <> 0
Note that this predicate also removes the NULL warnings: NULL <> 0 evaluates to UNKNOWN, so those rows are filtered out as requested.

How to resolve datatype length difference errors while using unpivot in SQL Server?

I am running the SQL statements below in SQL Server, which error because of the difference in length of the column types (name = nvarchar(100), address = nvarchar(250)).
select distinct
Id, Label, [Value]
from
(select distinct
coalesce([Value], 'unknown') as Id,
coalesce([Value], 'unknown') + ':' + I as label,
coalesce([Value], 'unknown') as [Value]
from
[dummyDB].[test].[test]
unpivot
([Value] for I in (name, address)) as dataTable
) as t
Error:
Msg 8167, Level 16, State 1, Line 7
The type of column "address" conflicts with the type of other columns specified in the UNPIVOT list.
How to get this resolved?
If you use APPLY and VALUES to unpivot the data instead, you don't get this error. Using these tools is more versatile than the UNPIVOT operator anyway, so I personally prefer them:
SELECT T.ID,
V.Label,
V.[Value]
FROM dbo.Test T
CROSS APPLY (VALUES('Name',T.Name),
('Address',T.Address))V(Label,Value);
If you have non string-type columns, you'll need to explicitly convert them (possibly with a style code):
SELECT T.ID,
V.Label,
V.[Value]
FROM dbo.Test T
CROSS APPLY (VALUES('Name',T.Name),
('Address',T.Address),
('SomeDate',CONVERT(nvarchar(10),T.SomeDate,112)),
('SomeInt',CONVERT(nvarchar(5),T.SomeInt)))V(Label,Value);

sql server string split last but one

Table has a column with values
ColA
------
a.b.c.d.e (car.make.model, car.la, kg)
ab.cd.ef (car.make.model)
a1.b2.c3.d4.e5(car.make.model, car.la, kg, av.vc.de)
I want to write a SQL query to split ColA by the delimiter "." and pick the last-but-one part.
Expected output
Result
------
d
cd
d4
I have tried PARSENAME but don't see an option to pick the last-but-one part.
Thank you
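For what it's worth, PARSENAME can pick the last-but-one part (its second argument counts from the right, so PARSENAME(x, 2) is what you want), but it only handles up to four dot-separated parts and returns NULL beyond that, which is presumably why it fails on this data. A quick illustration:

```sql
SELECT PARSENAME('ab.cd.ef', 2);   -- returns cd   (3 parts: within the limit)
SELECT PARSENAME('a.b.c.d.e', 2);  -- returns NULL (5 parts: over PARSENAME's 4-part limit)
```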
Using Jeff Moden's DelimitedSplit8K:
USE Sandbox;
GO
CREATE TABLE #Sample (ColA varchar(500));
GO
INSERT INTO #Sample
VALUES ('a.b.c.d.e'),
('ab.cd.ef'),
('a1.b2.c3.d4.e5');
GO
SELECT *
FROM #Sample;
WITH Split AS(
SELECT S.ColA,
DS.*,
MAX(DS.ItemNumber) OVER (PARTITION BY S.ColA) AS Items
FROM #Sample S
CROSS APPLY DelimitedSplit8K(S.ColA,'.') DS)
SELECT Item
FROM Split
WHERE ItemNumber = Items - 1;
GO
DROP TABLE #Sample
Ideally, though, don't store your data in a delimited format. :)
Just to play around using STRING_SPLIT:
SELECT ColA, t.value
FROM table1
CROSS APPLY(SELECT value,
COUNT(*) OVER () as cnt,
ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
FROM STRING_SPLIT(ColA, '.')) AS t
WHERE t.rn = t.cnt - 1
Note: The STRING_SPLIT function is available from SQL Server 2016.
Note 2: The query works provided that the function returns the values in the same order as they appear inside the string, which the documentation does not guarantee.
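From SQL Server 2022 that ordering caveat can be avoided entirely: STRING_SPLIT accepts an enable_ordinal argument and returns each fragment's 1-based position in an ordinal column, so the query no longer has to rely on ROW_NUMBER over an undefined order. A sketch:

```sql
SELECT ColA, t.value
FROM table1
CROSS APPLY (SELECT value, ordinal,
                    COUNT(*) OVER () AS cnt
             FROM STRING_SPLIT(ColA, '.', 1)) AS t  -- third argument 1 = enable_ordinal
WHERE t.ordinal = t.cnt - 1;
```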
Why not simply use SUBSTRING?
DECLARE #ColA NVARCHAR(100) = 'a1.b2.c3.d4.e5(car.make.model, car.la, kg, av.vc.de)';
SELECT REVERSE(LEFT(RIGHT(REVERSE(LEFT(#ColA, CHARINDEX('(', #ColA)-1)), LEN(LEFT(#ColA, CHARINDEX('(', #ColA)-1))-CHARINDEX('.',REVERSE(LEFT(#ColA, CHARINDEX('(', #ColA)-1)))), CHARINDEX('.',RIGHT(REVERSE(LEFT(#ColA, CHARINDEX('(', #ColA)-1)), LEN(LEFT(#ColA, CHARINDEX('(', #ColA)-1))-CHARINDEX('.',REVERSE(LEFT(#ColA, CHARINDEX('(', #ColA)-1)))))-1))
However, this last edit does NOT handle the case when there is no . or no ( in the string - feel free to extend the query accordingly.
Try This
;WITH CTE(ColA)
AS
(
SELECT 'a.b.c.d.e' UNION ALL
SELECT 'ab.cd.ef' UNION ALL
SELECT 'a1.b2.c3.d4.e5'
)
SELECT ColA,REVERSE(SUBSTRING(ReqColA,0,CHARINDEX('.',(ColA)))) AS ReqColA
FROM
(
SELECT ColA ,SUBSTRING(REVERSE(ColA),CHARINDEX('.',REVERSE(ColA))+1,LEN(REVERSE(ColA))) AS ReqColA FROM CTE
)dt
Result
ColA ReqColA
-----------------------
a.b.c.d.e d
ab.cd.ef cd
a1.b2.c3.d4.e5 d4