How to convert multiple column data into a single row in SQL

I have a PostalCode table, and I want to move all the data from a single row into the format shown in the second table below.
PostalId  Country  StateId  DistrictId
--------  -------  -------  ----------
110051    1        110      10165

Second table:

RowNo  Value
-----  ------
1      110051
2      1
3      110
4      10165

You can do this with
SELECT V.RowNo,
       V.Value
FROM   PostalCode
CROSS APPLY (VALUES (1, PostalId),
                    (2, Country),
                    (3, StateId),
                    (4, DistrictId)) V (RowNo, Value);
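For reference, a minimal setup you could test the query above against (the column types are an assumption; the question only shows the values):
CREATE TABLE PostalCode (PostalId INT, Country INT, StateId INT, DistrictId INT);
INSERT INTO PostalCode (PostalId, Country, StateId, DistrictId)
VALUES (110051, 1, 110, 10165);
-- The CROSS APPLY (VALUES ...) query above then returns:
-- RowNo  Value
-- -----  ------
-- 1      110051
-- 2      1
-- 3      110
-- 4      10165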

Another way to achieve the above result is to use CROSS APPLY with the XML method:
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS RowNo,
       split.a.value('.', 'VARCHAR(MAX)') AS Value
FROM
(
    SELECT CAST('<X>' + REPLACE(CONVERT(VARCHAR(MAX), PostalId) + ','
                              + CONVERT(VARCHAR(MAX), Country) + ','
                              + CONVERT(VARCHAR(MAX), StateId) + ','
                              + CONVERT(VARCHAR(MAX), DistrictId), ',', '</X><X>') + '</X>' AS XML) AS String
    FROM PostalCode
) AS Z
CROSS APPLY String.nodes('/X') AS split(a);
Result:

RowNo  Value
-----  ------
1      110051
2      1
3      110
4      10165

You can use this.
SELECT ROW_NUMBER() OVER (ORDER BY RowNo) RowNo, Value
FROM   PostalCode
UNPIVOT (Value FOR RowNo IN ([PostalId], [Country], [StateId], [DistrictId])) UNPVT
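One thing to keep in mind (an assumption about your schema, since the question does not show the column types): UNPIVOT requires every column in the IN list to have the same data type and it silently drops NULL values, while the CROSS APPLY (VALUES ...) form keeps NULLs and lets you cast each column individually, for example:
SELECT V.RowNo, V.Value
FROM   PostalCode
CROSS APPLY (VALUES (1, CAST(PostalId   AS VARCHAR(20))),
                    (2, CAST(Country    AS VARCHAR(20))),
                    (3, CAST(StateId    AS VARCHAR(20))),
                    (4, CAST(DistrictId AS VARCHAR(20)))) V (RowNo, Value);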

Related

Display the difference between rows in the same table

I have a table named Employee_audit with the following schema:
emp_audit_id  eid  name    salary  ...
------------  ---  ------  ------  ---
1             1    Daniel  1000    ...
2             1    Dani    1000    ...
3             1    Danny   3000    ...
My goal is to write a SQL query that returns results in the following format, treating the values in the first row as changes from null as well.
columnName  oldValue  newValue
----------  --------  --------
name        null      Daniel
salary      null      1000
name        Daniel    Dani
name        Dani      Danny
salary      1000      3000
...         ...       ...
I eventually arrived at the solution below:
CREATE TABLE Employee_audit (
    emp_audit_id int,
    eid int,
    name varchar(50),
    salary int,
    department varchar(50)
)

insert into Employee_audit (emp_audit_id, eid, name, salary, department)
values
    (1, 1, 'Daniel', 1000, 'ROP'),
    (2, 1, 'Dani', 1000, 'ROP'),
    (3, 1, 'Danny', 3000, 'ROP');

with diffs as (
    select 'name' colName, emp_audit_id, eid,
           lag(name, 1, null) over (partition by eid order by emp_audit_id) oldValue,
           name newValue
    from Employee_audit
    union all
    select 'salary', emp_audit_id, eid,
           cast(lag(salary, 1, null) over (partition by eid order by emp_audit_id) as varchar),
           cast(salary as varchar) newValue
    from Employee_audit
    union all
    ...
)
select *
from diffs
where oldValue <> newValue or oldValue is null
order by emp_audit_id, eid
Returns:
emp_audit_id  columnName  oldValue  newValue
------------  ----------  --------  --------
1             name        NULL      Daniel
1             salary      NULL      1000.00
2             name        Daniel    Dani
3             name        Dani      Danny
3             salary      1000.00   3000.00
...           ...         ...       ...
But the problem is that the query is very slow, because tracking 10 fields means writing 10 UNION ALL branches.
How can I optimize the query further, ideally doing all the work in a single scan?
Here is an option that will dynamically unpivot your data without actually using Dynamic SQL.
Example
;with cte as (
    Select emp_audit_id
          ,eid
          ,[key]
          ,newValue = [value]
          ,oldValue = lag([value]) over (partition by eid, [key] order by emp_audit_id)
    From  Employee_audit A
    Cross Apply ( Select [key], [value]
                  From OpenJson( (Select A.* For JSON Path, Without_Array_Wrapper) ) ) B
    Where [key] not in ('emp_audit_id', 'eid')
)
Select emp_audit_id
      ,columnName = [key]
      ,oldValue
      ,newValue
From  cte
Where newValue <> oldValue or oldValue is null
Returns

emp_audit_id  columnName  oldValue  newValue
------------  ----------  --------  --------
1             department  NULL      ROP
1             name        NULL      Daniel
2             name        Daniel    Dani
3             name        Dani      Danny
1             salary      NULL      1000
3             salary      1000      3000
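To see what that dynamic unpivot is working with, here is roughly what the inner FOR JSON PATH / OPENJSON step produces for a single row (this assumes SQL Server 2016 or later, which OPENJSON and FOR JSON require anyway):
-- (Select A.* For JSON Path, Without_Array_Wrapper) turns one Employee_audit row
-- into a JSON object like the literal below; OPENJSON with its default schema then
-- returns one (key, value) pair per column, which is what the CTE unpivots.
SELECT [key], [value]
FROM OPENJSON(N'{"emp_audit_id":1,"eid":1,"name":"Daniel","salary":1000,"department":"ROP"}');
-- key           value
-- ------------  ------
-- emp_audit_id  1
-- eid           1
-- name          Daniel
-- salary        1000
-- department    ROP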
I would use apply:
select t.emp_audit_id, v.columnName, v.newValue,
       lag(v.newValue) over (partition by eid, columnName order by emp_audit_id) as oldValue
from some_table t cross apply
     (values ('name', t.name),
             ('salary', t.salary),
             . . .
     ) v (columnName, newValue);
If you need to cast the values so they are strings, that goes in the values clause:
select t.emp_audit_id, v.columnName, v.newValue,
       lag(v.newValue) over (partition by eid, columnName order by emp_audit_id) as oldValue
from some_table t cross apply
     (values ('name', t.name),
             ('salary', cast(t.salary as varchar(255))),
             . . .
     ) v (columnName, newValue);
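If you also want to keep only the changed values, as in the question, one option (a sketch that reuses the question's Employee_audit table and filter, with the column list abbreviated) is to wrap the query in a CTE:
with diffs as (
    select t.emp_audit_id, t.eid, v.columnName, v.newValue,
           lag(v.newValue) over (partition by t.eid, v.columnName
                                 order by t.emp_audit_id) as oldValue
    from Employee_audit t cross apply
         (values ('name', t.name),
                 ('salary', cast(t.salary as varchar(255)))
         ) v (columnName, newValue)
)
select emp_audit_id, columnName, oldValue, newValue
from diffs
where oldValue <> newValue or oldValue is null
order by emp_audit_id, eid;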

Pivot query is returning all the values in T-SQL

I need to build a pivot query for my table, which was shown as a screenshot in the original post.
I want to turn the row values into columns.
My try:
SELECT rno, NO, Description, CarNo, ID
FROM
(
    select row_number() over (partition by columnname order by rno) rno, columnname, value
    from mytable
    group by rno, columnname, value
) x
pivot
(
    max(value)
    for columnname in (NO, Description, CarNo, ID)
) p
Actual output: (shown as a screenshot in the original post)
Expected output:

NO    | Description | CradNo | ID
------+-------------+--------+---
Part1 | desc1       | Card1  | 1
Part2 | desc2       | Card1  | 1
You can use this.
DECLARE @mytable TABLE (columnname VARCHAR(20), value VARCHAR(10), rno INT)

INSERT INTO @mytable
VALUES
    ('ID', '1', 1),
    ('NO', 'Part1', 1),
    ('NO', 'Part2', 1),
    ('Description', 'desc1', 1),
    ('Description', 'desc2', 1),
    ('CarNo', 'car1', 1)

;WITH CTE AS
(
    SELECT * FROM
        (SELECT *, RN = ROW_NUMBER() OVER (PARTITION BY columnname ORDER BY rno, value)
         FROM @mytable) SRC
    PIVOT ( MAX(value) FOR columnname IN ([ID], [NO], [Description], [CarNo]) ) PVT
)
SELECT
      ISNULL(T1.NO, T2.NO) NO
    , ISNULL(T1.Description, T2.Description) Description
    , ISNULL(T1.CarNo, T2.CarNo) CarNo
    , ISNULL(T1.ID, T2.ID) ID
FROM CTE T1
OUTER APPLY ( SELECT TOP 1 * FROM CTE WHERE CTE.RN < T1.RN ) T2
Result:

NO     Description  CarNo  ID
-----  -----------  -----  ---
Part1  desc1        car1   1
Part2  desc2        car1   1
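One caveat worth mentioning (my note, not part of the original answer): SELECT TOP 1 without an ORDER BY does not guarantee which row comes back, so if T2 should reliably be the nearest preceding row you could add an explicit ordering to the final SELECT above, along these lines:
SELECT
      ISNULL(T1.NO, T2.NO) NO
    , ISNULL(T1.Description, T2.Description) Description
    , ISNULL(T1.CarNo, T2.CarNo) CarNo
    , ISNULL(T1.ID, T2.ID) ID
FROM CTE T1
OUTER APPLY ( SELECT TOP 1 *
              FROM CTE
              WHERE CTE.RN < T1.RN
              ORDER BY CTE.RN DESC ) T2   -- ORDER BY makes the TOP 1 deterministic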

How to get active records for each date in SQL

I have a table like the one below:

ID  FID  MDate       Active
--  ---  ----------  ------
1   1    2009-05-25  1
1   2    2009-05-25  1
1   1    2010-02-04  0
1   3    2010-02-04  1
1   1    2009-04-01  0
1   1    2009-03-01  1
How can I get the active FIDs for each date?
I tried the following:
SELECT DISTINCT
       ID, MDate,
       STUFF((SELECT DISTINCT ',' + CAST(FID AS VARCHAR)
              FROM (SELECT ID, MDate, FID
                    FROM Table1
                    WHERE IsActive = 1) t
              WHERE t.MDate = a.MDate
              FOR XML PATH('')), 1, 1, '') colb
FROM (SELECT ID, MDate, FID
      FROM Table1
      WHERE IsActive = 1) a
Somehow it gives only partial results. In the last row of the result of the above query only FID 3 is selected, but there should be two FIDs, 2 and 3, because FID 2 has been active from 2009-05-25 until today.
I want output like below:
ID  MDate       FID
--  ----------  ----
1   2009-03-01  1
1   2009-05-25  1,2
1   2010-02-04  2,3
How can I get this in SQL?
You need to specify the field names correctly, and there is no need for that many sub-queries:
;WITH Table1 AS (
    SELECT * FROM (VALUES
        (1, 1, '2009-05-25', 1),
        (1, 2, '2009-05-25', 1),
        (1, 1, '2010-02-04', 0),
        (1, 3, '2010-02-04', 1),
        (1, 1, '2009-04-01', 0),
        (1, 1, '2009-03-01', 1)
    ) AS t(ID, FID, ModifyDate, IsActive)
)
SELECT DISTINCT ID,
       ModifyDate,
       STUFF((SELECT DISTINCT ',' + CAST(FID AS VARCHAR)
              FROM Table1 t
              WHERE t.ModifyDate = a.ModifyDate
              FOR XML PATH('')), 1, 1, '') colb
FROM Table1 a
WHERE IsActive = 1
Results:

ID  ModifyDate  colb
--  ----------  ----------
1   2009-03-01  1
1   2009-05-25  1,2
1   2010-02-04  1,3

(3 row(s) affected)
I would use a CTE to filter down to the active rows first:
;WITH CTE AS
(
    SELECT *
    FROM   Table1
    WHERE  IsActive = 1
)
SELECT ID, ModifyDate,
       colb = STUFF((SELECT ',' + CAST(FID AS VARCHAR(10))
                     FROM CTE x
                     WHERE x.ModifyDate = c.ModifyDate
                     FOR XML PATH('')), 1, 1, '')
FROM CTE c
GROUP BY ID, ModifyDate
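If you are on SQL Server 2017 or later (an assumption, since the question does not mention a version), STRING_AGG is a simpler alternative to the STUFF ... FOR XML PATH pattern used above, using the same column names as the answers above:
SELECT ID,
       ModifyDate,
       STRING_AGG(CAST(FID AS VARCHAR(10)), ',') WITHIN GROUP (ORDER BY FID) AS colb
FROM   Table1
WHERE  IsActive = 1
GROUP BY ID, ModifyDate;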
SELECT DISTINCT [ID], [MDate],
       STUFF((SELECT ',' + CAST(FID AS VARCHAR)
              FROM [dbo].[tableName]
              WHERE [MDate] = a.[MDate] AND active = 1
              FOR XML PATH('')), 1, 1, '') AS FID
FROM [dbo].[tableName] AS a
WHERE active = 1
ORDER BY [MDate]
Sample output:

ID  MDate       FID
--  ----------  ---
1   2009-03-01  1
1   2009-05-25  1,2
1   2010-02-04  3
I think this is what you want.

SQL count ids in fields

I have a table that contains comma-separated IDs in a field. It looks like this:
FieldName
-------------------------
1,8,2,3,4,10,5,9,6,7
-------------------------
1,8
-------------------------
1,8
I need to count these IDs to get this result:
ID | Count
---|------
1 | 3
8 | 3
2 | 1
3 | 1
Any ideas?
Thanks!
Try this:

Declare @demo table(FieldName varchar(100))

insert into @demo values('1,8,2,3,4,10,5,9,6,7')
insert into @demo values('1,8')
insert into @demo values('1,8')

select ID, COUNT(id) ID_count
from
(
    SELECT CAST(Split.a.value('.', 'VARCHAR(100)') AS INT) AS ID
    FROM
    (
        SELECT CAST('<M>' + REPLACE(FieldName, ',', '</M><M>') + '</M>' AS XML) AS ID
        FROM @demo
    ) AS A
    CROSS APPLY ID.nodes('/M') AS Split(a)
) q1
group by ID
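If you are on SQL Server 2016 or later with compatibility level 130+ (an assumption), STRING_SPLIT avoids the XML conversion entirely; a sketch against the same @demo table variable declared above:
SELECT CAST(s.value AS INT) AS ID,
       COUNT(*)             AS ID_count
FROM   @demo d
CROSS APPLY STRING_SPLIT(d.FieldName, ',') s
GROUP BY CAST(s.value AS INT);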
I like Devart's answer because of the faster execution. Here is an earlier answer, modified to suit your need:
Declare @col varchar(200)

SELECT @col = (
    SELECT FieldName + ','
    FROM @demo c
    FOR XML PATH('')
);

;with demo as (
    select cast(substring(@col, 1, charindex(',', @col, 1) - 1) AS INT) cou,
           charindex(',', @col, 1) pos
    union all
    select cast(substring(@col, pos + 1, charindex(',', @col, pos + 1) - pos - 1) AS INT) cou,
           charindex(',', @col, pos + 1) pos
    from demo
    where pos < LEN(@col)
)
select cou ID, COUNT(cou) id_count
from demo
group by cou
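Note that a recursive CTE stops with an error after 100 recursion levels by default, and this one recurses once per ID in the concatenated string; with more data than the sample you would need to lift that limit on the final SELECT (shown here with the limit removed entirely):
select cou ID, COUNT(cou) id_count
from demo
group by cou
option (maxrecursion 0)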
Try this one -
Query:
SET NOCOUNT ON;

DECLARE @temp TABLE (txt VARCHAR(8000))

INSERT INTO @temp (txt)
VALUES ('1,8,2,3,4,10,5,9,6,7'), ('1,8'), ('1,8')

SELECT
      t.ID
    , [Count] = COUNT(1)
FROM (
    SELECT
        ID =
            SUBSTRING(
                  t.string
                , number + 1
                , ABS(CHARINDEX(',', t.string, number + 1) - number - 1)
            )
    FROM (
        SELECT string = (
            SELECT ',' + txt
            FROM @temp
            FOR XML PATH(N''), TYPE, ROOT).value(N'root[1]', N'NVARCHAR(MAX)')
    ) t
    CROSS JOIN [master].dbo.spt_values n
    WHERE [type] = 'p'
        AND number <= LEN(t.string) - 1
        AND SUBSTRING(t.string, number, 1) = ','
) t
GROUP BY t.ID
ORDER BY [Count] DESC
Output:
ID Count
----- -----------
1 3
8 3
9 1
10 1
2 1
3 1
4 1
5 1
6 1
7 1
Query cost: (the original answer compared the execution plans in a screenshot, which is not reproduced here)
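One limitation of this approach worth knowing (my note, not from the original answer): master.dbo.spt_values with type = 'P' only supplies the numbers 0 through 2047, so the concatenated string being split has to stay under roughly 2048 characters:
-- The number source behind the split only covers 0-2047:
SELECT MIN(number) AS min_n, MAX(number) AS max_n
FROM   [master].dbo.spt_values
WHERE  [type] = 'P';
-- min_n = 0, max_n = 2047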

Add more columns for each value of another column?

I have this kind of table:

Name  Date       Value
----  ---------  -----
Test  1/1/2001   10
Test  2/1/2001   17
Test  3/1/2001   52
Foo   5/4/2011   15
Foo   6/4/2011   321
My    15/5/2005  36
My    25/7/2005  75
And I would like to show the results like this:

Name  Date      Value   Name  Date      Value   Name  Date       Value
----  --------  -----   ----  --------  -----   ----  ---------  -----
Test  1/1/2001  10      Foo   5/4/2011  15      My    15/5/2005  36
Test  2/1/2001  17      Foo   6/4/2011  321     My    25/7/2005  75
Test  3/1/2001  52
I need to show as many column groups as there are distinct values in my Name column.
How could I do this in SQL?
In order to get the result that you want, you are going to have to unpivot the columns in your table and apply the pivot function.
The unpivot can be done using either the UNPIVOT function or CROSS APPLY with VALUES.
UNPIVOT:
select rn,
       col + '_' + cast(dr as varchar(10)) col,
       new_values
from
(
    select name,
           convert(varchar(10), date, 101) date,
           cast(value as varchar(10)) value,
           dense_rank() over (order by name) dr,
           row_number() over (partition by name order by date) rn
    from yourtable
) d
unpivot
(
    new_values
    for col in (name, date, value)
) un;
CROSS APPLY:
select rn,
       col + '_' + cast(dr as varchar(10)) col,
       c.value
from
(
    select name,
           convert(varchar(10), date, 101) date,
           cast(value as varchar(10)) value,
           dense_rank() over (order by name) dr,
           row_number() over (partition by name order by date) rn
    from yourtable
) d
cross apply
(
    values
        ('Name', name), ('Date', date), ('Value', Value)
) c (col, value);
See SQL Fiddle with Demo of both versions. This gives the result:
| RN | COL | NEW_VALUES |
-----------------------------
| 1 | name_1 | Foo |
| 1 | date_1 | 04/05/2011 |
| 1 | value_1 | 15 |
| 2 | name_1 | Foo |
| 2 | date_1 | 04/06/2011 |
| 2 | value_1 | 321 |
| 1 | name_2 | My |
| 1 | date_2 | 05/15/2005 |
| 1 | value_2 | 36 |
These queries take your existing column values and convert them to rows. Once they are in rows, you create the new column names using the windowing function dense_rank.
Once the data has been converted to rows, you take the new column names (created with the dense_rank value) and apply the PIVOT function.
PIVOT with UNPIVOT:
select name_1, date_1, value_1,
       name_2, date_2, value_2,
       name_3, date_3, value_3
from
(
    select rn,
           col + '_' + cast(dr as varchar(10)) col,
           new_values
    from
    (
        select name,
               convert(varchar(10), date, 101) date,
               cast(value as varchar(10)) value,
               dense_rank() over (order by name) dr,
               row_number() over (partition by name order by date) rn
        from yourtable
    ) d
    unpivot
    (
        new_values
        for col in (name, date, value)
    ) un
) src
pivot
(
    max(new_values)
    for col in (name_1, date_1, value_1,
                name_2, date_2, value_2,
                name_3, date_3, value_3)
) piv;
See SQL Fiddle with Demo
PIVOT with CROSS APPLY:
select name_1, date_1, value_1,
       name_2, date_2, value_2,
       name_3, date_3, value_3
from
(
    select rn,
           col + '_' + cast(dr as varchar(10)) col,
           c.value
    from
    (
        select name,
               convert(varchar(10), date, 101) date,
               cast(value as varchar(10)) value,
               dense_rank() over (order by name) dr,
               row_number() over (partition by name order by date) rn
        from yourtable
    ) d
    cross apply
    (
        values
            ('Name', name), ('Date', date), ('Value', Value)
    ) c (col, value)
) src
pivot
(
    max(value)
    for col in (name_1, date_1, value_1,
                name_2, date_2, value_2,
                name_3, date_3, value_3)
) piv;
See SQL Fiddle with Demo.
Dynamic PIVOT:
The above versions work great if you have a limited or known number of columns; if not, you will need to use dynamic SQL:
DECLARE @cols AS NVARCHAR(MAX),
        @query AS NVARCHAR(MAX)

select @cols = STUFF((SELECT ',' + QUOTENAME(col + '_' + cast(dr as varchar(10)))
                      from
                      (
                          select dense_rank() over (order by name) dr
                          from yourtable
                      ) t
                      cross apply
                      (
                          values (1, 'Name'), (2, 'Date'), (3, 'Value')
                      ) c (sort, col)
                      group by col, dr, sort
                      order by dr, sort
                      FOR XML PATH(''), TYPE
                      ).value('.', 'NVARCHAR(MAX)')
                , 1, 1, '')

set @query = 'SELECT ' + @cols + '
              from
              (
                  select rn,
                         col + ''_'' + cast(dr as varchar(10)) col,
                         c.value
                  from
                  (
                      select name,
                             convert(varchar(10), date, 101) date,
                             cast(value as varchar(10)) value,
                             dense_rank() over (order by name) dr,
                             row_number() over (partition by name order by date) rn
                      from yourtable
                  ) d
                  cross apply
                  (
                      values
                          (''Name'', name), (''Date'', date), (''Value'', Value)
                  ) c (col, value)
              ) x
              pivot
              (
                  max(value)
                  for col in (' + @cols + ')
              ) p'

execute(@query)
See SQL Fiddle with Demo.
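If you want to inspect the generated column list before running the dynamic statement, you could PRINT it; for the sample data (three distinct names) it should come out roughly as follows:
PRINT @cols;
-- [Name_1],[Date_1],[Value_1],[Name_2],[Date_2],[Value_2],[Name_3],[Date_3],[Value_3]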
The result for each of the queries is:
| NAME_1 | DATE_1 | VALUE_1 | NAME_2 | DATE_2 | VALUE_2 | NAME_3 | DATE_3 | VALUE_3 |
-------------------------------------------------------------------------------------------------
| Foo | 04/05/2011 | 15 | My | 05/15/2005 | 36 | Test | 01/01/2001 | 10 |
| Foo | 04/06/2011 | 321 | My | 07/25/2005 | 75 | Test | 01/02/2001 | 17 |
| (null) | (null) | (null) | (null) | (null) | (null) | Test | 01/03/2001 | 52 |
By hand or with a program. Most people are accustomed to showing output in a linear fashion (the default). If someone is asking you to do this, you can tell them that's not how the application works. You can export the result set to CSV and then import it into something like Excel and reformat it by hand, or use a server-side language like ASP.NET or PHP to format the results into a table.
When you're parsing the output, you could compare the previous Name value against the current one and add a column whenever they differ. It would still be tricky to script, because the rows will more than likely come out of the database in order, so you would get a sequence like test, test, test, foo, foo. That means you need to build a multidimensional array to organize the data and get a column count, then lay out the table from it: a header row of names, with the data underneath.
I'm not sure which languages you're familiar with; in PHP the multidimensional array would look something like this:
$row[1]['name'] = 'test';
$row[1]['test'][1]['date'] = '1/1/2001';
This is more of a presentation concern, though. Databases are designed to hold data and return it in an intuitive, tabular fashion.