Grouping runs of data - SQL

SQL Experts,
Is there an efficient way to group runs of data together using SQL?
Or is it going to be more efficient to process the data in code?
For example, if I have the following data:
ID|Name
01|Harry Johns
02|Adam Taylor
03|John Smith
04|John Smith
05|Bill Manning
06|John Smith
I need to display this:
Harry Johns
Adam Taylor
John Smith (2)
Bill Manning
John Smith
@Matt: Sorry, I had trouble formatting the data using an embedded HTML table; it worked in the preview but not in the final display.

Try this:
select n.name,
       (select count(*)
        from myTable n1
        where n1.name = n.name
          and n1.id >= n.id
          and n1.id <= (select isnull(min(nn.id), (select max(id) + 1 from myTable))
                        from myTable nn
                        where nn.id > n.id and nn.name <> n.name))
from myTable n
where not exists (select 1
                  from myTable n3
                  where n3.name = n.name
                    and n3.id < n.id
                    and n3.id > (select isnull(max(n4.id), (select min(id) - 1 from myTable))
                                 from myTable n4
                                 where n4.id < n.id and n4.name <> n.name))
I think that'll do what you want. Bit of a kludge though.
Phew! After a few edits I think I have all the edge cases sorted out.

I hate cursors with a passion... but here's a dodgy cursor version...
Declare @NewName Varchar(50)
Declare @OldName Varchar(50)
Declare @CountNum int
Set @CountNum = 0
DECLARE nameCursor CURSOR FOR
    SELECT Name
    FROM NameTest   -- assumes rows come back in Id order; add ORDER BY Id if needed
OPEN nameCursor
FETCH NEXT FROM nameCursor INTO @NewName
WHILE @@FETCH_STATUS = 0
BEGIN
    -- @OldName is NULL on the first pass, so nothing prints until the first name change
    IF @OldName <> @NewName
    BEGIN
        PRINT @OldName + ' (' + CAST(@CountNum AS Varchar(50)) + ')'
        SET @CountNum = 0
    END
    SELECT @OldName = @NewName
    FETCH NEXT FROM nameCursor INTO @NewName
    SET @CountNum = @CountNum + 1
END
PRINT @OldName + ' (' + CAST(@CountNum AS Varchar(50)) + ')'
CLOSE nameCursor
DEALLOCATE nameCursor

My solution, just for kicks (this was a fun exercise): no cursors, but I do have a helper field (filled in by a short WHILE loop).
-- Setup test table
DECLARE @names TABLE (
    id INT IDENTITY(1,1),
    name NVARCHAR(25) NOT NULL,
    grp UNIQUEIDENTIFIER NULL
)
INSERT @names (name)
SELECT 'Harry Johns' UNION ALL
SELECT 'Adam Taylor' UNION ALL
SELECT 'John Smith' UNION ALL
SELECT 'John Smith' UNION ALL
SELECT 'Bill Manning' UNION ALL
SELECT 'Bill Manning' UNION ALL
SELECT 'Bill Manning' UNION ALL
SELECT 'John Smith' UNION ALL
SELECT 'Bill Manning'

-- Set the first id's group to a newid()
UPDATE n
SET grp = newid()
FROM @names n
WHERE n.id = (SELECT MIN(id) FROM @names)

-- Set the group to a newid() if the name does not equal the previous
UPDATE n
SET grp = newid()
FROM @names n
INNER JOIN @names b
    ON (n.ID - 1) = b.ID
    AND ISNULL(b.Name, '') <> n.Name

-- Set groups that are null to the previous group
-- Keep on doing this until all groups have been set
WHILE (EXISTS(SELECT 1 FROM @names WHERE grp IS NULL))
BEGIN
    UPDATE n
    SET grp = b.grp
    FROM @names n
    INNER JOIN @names b
        ON (n.ID - 1) = b.ID
        AND n.grp IS NULL
END

-- Final output
SELECT MIN(id) AS id_start,
       MAX(id) AS id_end,
       name,
       count(1) AS consecutive
FROM @names
GROUP BY grp, name
ORDER BY id_start
/*
Results:
id_start  id_end  name          consecutive
1         1       Harry Johns   1
2         2       Adam Taylor   1
3         4       John Smith    2
5         7       Bill Manning  3
8         8       John Smith    1
9         9       Bill Manning  1
*/
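If you want the single-column display asked for in the question, the grouped result can be wrapped; a small sketch against the same @names table:
SELECT name +
       CASE WHEN count(1) > 1
            THEN ' (' + CAST(count(1) AS varchar(10)) + ')'
            ELSE '' END AS display
FROM @names
GROUP BY grp, name
ORDER BY MIN(id)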

Well, this:
select Name, count(Id)
from MyTable
group by Name
will give you this:
Harry Johns, 1
Adam Taylor, 1
John Smith, 2
Bill Manning, 1
and this (MS SQL syntax):
select Name +
case when ( count(Id) > 1 )
then ' ('+cast(count(Id) as varchar)+')'
else ''
end
from MyTable
group by Name
will give you this:
Harry Johns
Adam Taylor
John Smith (2)
Bill Manning
Did you actually want that other John Smith on the end of your results?
EDIT: Oh I see, you want consecutive runs grouped. In that case, I'd say you need a cursor or to do it in your program code.
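For what it's worth, if ROW_NUMBER() is available (SQL Server 2005 and later), the consecutive runs can be grouped without a cursor; a minimal sketch against the same MyTable:
select Name +
       case when count(*) > 1
            then ' (' + cast(count(*) as varchar(10)) + ')'
            else '' end
from (
    select Id, Name,
           row_number() over (order by Id)
         - row_number() over (partition by Name order by Id) as grp
    from MyTable
) runs
group by Name, grp
order by min(Id)
The difference of the two row numbers stays constant within each unbroken run of a name, so grouping by it keeps the runs apart.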

How about this:
declare @tmp table (Id int, Nm varchar(50));
insert @tmp select 1, 'Harry Johns';
insert @tmp select 2, 'Adam Taylor';
insert @tmp select 3, 'John Smith';
insert @tmp select 4, 'John Smith';
insert @tmp select 5, 'Bill Manning';
insert @tmp select 6, 'John Smith';

select * from @tmp order by Id;

select Nm, count(1)
from
(
    select Id, Nm,
           case when exists (select 1 from @tmp t2
                             where t2.Nm = t1.Nm
                               and (t2.Id = t1.Id + 1 or t2.Id = t1.Id - 1))
                then 1 else 0 end as Run
    from @tmp t1
) truns
group by Nm, Run
[Edit] That can be shortened a bit
select Nm, count(1)
from (select Id, Nm,
             case when exists (select 1 from @tmp t2
                               where t2.Nm = t1.Nm and abs(t2.Id - t1.Id) = 1)
                  then 1 else 0 end as Run
      from @tmp t1) t
group by Nm, Run
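For the six sample rows this should return, in no particular order:
Harry Johns   1
Adam Taylor   1
John Smith    2
Bill Manning  1
John Smith    1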

For this particular case, all you need to do is group by the name and ask for the count, like this:
select Name, count(*)
from MyTable
group by Name
That'll get you the count for each name as a second column.
You can get it all as one column by concatenating like this:
select Name + ' (' + cast(count(*) as varchar) + ')'
from MyTable
group by Name

Related

Split Name Column which can contain Multiple Names and no delimiter into Person 1 and Person 2

How do I split a string that can contain two names into Person1 and Person2? There is no delimiter between the names, there is not always a second person on each row, there is not necessarily a middle initial/name for either person, and only sometimes is the second name separated by an “AND”.
Examples of Names are as follows
JANE MIDDLETON John MIDDLETON
SUE FRACARO BOB FRACARO
TONY FRENCH
JOHN EDUARDO OCHOA AND JANE ADRIANA OCHOA
TONY JOHN CARPENTER TONYA CARPENTER
Desired Output Design
Person 1 First Name
Person 1 Middle Name
Person 1 Last Name
Person 2 First Name
Person 2 Middle Name
Person 2 Last Name
Create a function for splitting the string values in the table column:
CREATE FUNCTION [dbo].[Fn_Splittemp]
(
    @text VARCHAR(8000),
    @delimiter VARCHAR(20) = ' '
)
RETURNS @String TABLE
(
    Position INT IDENTITY PRIMARY KEY,
    StringValue VARCHAR(8000)
)
AS
BEGIN
    DECLARE @index INT
    SET @index = -1

    WHILE (LEN(@text) > 0)
    BEGIN
        SET @index = CHARINDEX(@delimiter, @text)
        IF (@index = 0) AND (LEN(@text) > 0)
        BEGIN
            -- No delimiter left: keep the remainder and stop
            INSERT INTO @String VALUES (@text)
            BREAK
        END
        IF (@index > 1)
        BEGIN
            -- Keep the piece before the delimiter, then drop it from @text
            INSERT INTO @String VALUES (LEFT(@text, @index - 1))
            SET @text = RIGHT(@text, LEN(@text) - @index)
        END
        ELSE
            -- Delimiter at position 1: just drop it
            SET @text = RIGHT(@text, LEN(@text) - @index)
    END
    RETURN
END
GO
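A quick sanity check of the splitter (output inferred from the function logic above):
SELECT * FROM dbo.Fn_Splittemp('JANE MIDDLETON John MIDDLETON', ' ');
-- Position  StringValue
-- 1         JANE
-- 2         MIDDLETON
-- 3         John
-- 4         MIDDLETON
GO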
Splitting, cleaning and assigning to respective name_columns:
CREATE TABLE #t0 (rid INT IDENTITY, rawnames VARCHAR(8000));
GO
INSERT INTO #t0 VALUES ('JANE MIDDLETON John MIDDLETON'),
('SUE FRACARO BOB FRACARO'),
('TONY FRENCH'),
('JOHN EDUARDO OCHOA AND JANE ADRIANA OCHOA'),
('TONY JOHN CARPENTER TONYA CARPENTER');
GO
SELECT n.rid, n.rawnames, fn.StringValue AS Names,
COUNT(*) OVER(PARTITION BY rawnames) AS wordcount,
ROW_NUMBER() OVER(PARTITION BY fn.stringvalue,rawnames ORDER BY fn.stringvalue) AS LastNameids,
fn.Position
INTO #t1
FROM #t0 n
cross apply dbo.Fn_Splittemp(n.rawnames, ' ') AS fn
GO
SELECT rid, rawnames, Position AS Pid,
PersonName, LastName INTO #t2
FROM
(SELECT t.rid, t.rawnames, t.names AS Lastname, LTRIM(RTRIM(REPLACE(f.StringValue,'and',''))) AS PersonName, f.Position
FROM
(SELECT replace(sqa.rawnames,sqa.Names,sqa.Names+',') AS delimstr , sqa.*
FROM #t1 sqa
WHERE wordcount<=3 AND position = (SELECT MAX(position) from #t1 crq where crq.rid = sqa.rid)
)t
cross apply dbo.Fn_Splittemp(delimstr,',') f
UNION ALL
SELECT b.rid, b.rawnames, b.Names AS Lastname, LTRIM(RTRIM(REPLACE(f.StringValue,'and',''))) AS PersonName, f.Position
FROM
(SELECT replace(rawnames,names,names+',') AS delimstr, *
FROM #t1
WHERE wordcount>3 AND LastNameids>1)b
cross apply dbo.Fn_Splittemp(delimstr,',') f
)sqt
GO
SELECT * INTO #t3 FROM #t2 cross apply dbo.Fn_Splittemp(personname, ' ');
GO
SELECT t.rawnames, fina.Firstname, fina.MiddleName, fina.LastName
FROM #t0 t
JOIN (
SELECT rid, pid, [1] AS Firstname, NULL AS MiddleName, [2] AS LastName
FROM
(SELECT * FROM (SELECT rid, pid, position, stringvalue,
COUNT(*) OVER(PARTITION BY rid, pid) AS cnt FROM #t3)a
WHERE a.cnt <=2)apiv
PIVOT
(MAX(stringvalue)
FOR position IN ([1],[2])
)piva
UNION ALL
SELECT rid, pid, [1] AS Firstname, [2] AS MiddleName, [3] AS LastName
FROM
(SELECT * FROM (SELECT rid, pid, position, stringvalue,
COUNT(*) OVER(PARTITION BY rid, pid) AS cnt FROM #t3)a
WHERE a.cnt >2)apiv
PIVOT
(MAX(stringvalue)
FOR position IN ([1],[2],[3])
)piva
)fina
ON fina.rid = t.rid;

Trying to Sum up Cross-Tab Data in SQL

I have a table where every ID has one or more places, and each place comes with a count. Places can be repeated within IDs. It is stored in rows like so:
ID  ColumnName  DataValue
1   place1      ABC
1   count1      5
2   place1      BEC
2   count1      12
2   place2      CDE
2   count2      6
2   place3      BEC
2   count3      9
3   place1      BBC
3   count1      5
3   place2      BBC
3   count2      4
Ultimately, I want a table where every possible place name is its own column, and the count per place per ID is summed up, like so:
ID  ABC  BEC  CDE  BBC
1   5    0    0    0
2   0    21   6    0
3   0    0    0    9
I don't know the best way to go about this. There are around 50 different possible place names, so specifically listing them out in a query isn't ideal. I know I ultimately have to pivot the data, but I don't know if I should do it before or after I sum up the counts. And whether it's before or after, I haven't been able to figure out how to go about summing it up.
Any ideas/help would be greatly appreciated. At this point, I'm having a hard time finding where to even start. I've seen a few posts with similar problems, but nothing quite as convoluted as this.
EDIT:
Right now I'm working with this to pivot the table, but it leaves me with columns named place1, place2, ..., count1, count2, ..., and I don't know how to appropriately sum up the counts and make new columns with the place names.
DECLARE @cols NVARCHAR(MAX), @query NVARCHAR(MAX);
SET @cols = STUFF(
    (
        SELECT DISTINCT ',' + QUOTENAME(c.[ColumnName])
        FROM #temp c
        FOR XML PATH(''), TYPE
    ).value('.', 'nvarchar(max)'), 1, 1, '');
SET @query = 'SELECT [ID], ' + @cols + ' from (SELECT [ID],
    [DataValue] AS [amount],
    [ColumnName] AS [category]
    FROM #temp
    )x pivot (max(amount) for category in (' + @cols + ')) p';
EXECUTE (@query);
Your table structure is pretty bad. You'll need to normalize your data before you can attempt to pivot it. Try this:
;WITH IDs AS
(
SELECT DISTINCT
id
,ColId = RIGHT(ColumnName, LEN(ColumnName) - 5)
,Place = datavalue
FROM #temp
WHERE ISNUMERIC(datavalue) = 0
)
,Counts AS
(
SELECT DISTINCT
id
,ColId = RIGHT(ColumnName, LEN(ColumnName) - 5)
,Cnt = CAST(datavalue AS INT)
FROM #temp
WHERE ISNUMERIC(datavalue) = 1
)
SELECT
piv.id
,ABC = ISNULL(piv.ABC, 0)
,BEC = ISNULL(piv.BEC, 0)
,CDE = ISNULL(piv.CDE, 0)
,BBC = ISNULL(piv.BBC, 0)
FROM (SELECT i.id, i.Place, c.Cnt FROM IDs i JOIN Counts c ON c.id = i.id AND c.ColId = i.ColId) src
PIVOT ( SUM(Cnt)
FOR Place IN ([ABC], [BEC], [CDE], [BBC])
) piv;
Doing it with dynamic SQL would yield the following:
SET @query =
';WITH IDs AS
(
SELECT DISTINCT
id
,ColId = RIGHT(ColumnName, LEN(ColumnName) - 5)
,Place = datavalue
FROM #temp
WHERE ISNUMERIC(datavalue) = 0
)
,Counts AS
(
SELECT DISTINCT
id
,ColId = RIGHT(ColumnName, LEN(ColumnName) - 5)
,Cnt = CAST(datavalue AS INT)
FROM #temp
WHERE ISNUMERIC(datavalue) = 1
)
SELECT [ID], '+@cols+'
FROM
(
SELECT i.id, i.Place, c.Cnt
FROM IDs i
JOIN Counts c ON c.id = i.id AND c.ColId = i.ColId
) src
PIVOT
(SUM(Cnt) FOR Place IN ('+@cols+')) piv;';
EXECUTE (@query);
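One caveat: for this dynamic version, @cols has to contain the distinct place values ([ABC],[BEC],[CDE],[BBC]) rather than the ColumnName values (place1, count1, ...) that the snippet in the question collects. A sketch of building it that way from the same #temp table:
SET @cols = STUFF(
    (
        SELECT DISTINCT ',' + QUOTENAME(datavalue)
        FROM #temp
        WHERE ISNUMERIC(datavalue) = 0
        FOR XML PATH(''), TYPE
    ).value('.', 'nvarchar(max)'), 1, 1, '');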
Try this out:
SELECT id,
COALESCE(ABC, 0) AS ABC,
COALESCE(BBC, 0) AS BBC,
COALESCE(BEC, 0) AS BEC,
COALESCE(CDE, 0) AS CDE
FROM
(SELECT id,
MIN(CASE WHEN columnname LIKE 'place%' THEN datavalue END) AS col,
CAST(MIN(CASE WHEN columnname LIKE 'count%' THEN datavalue END) AS INT) AS val
FROM t
GROUP BY id, RIGHT(columnname, 1)
) src
PIVOT
(SUM(val)
FOR col in ([ABC], [BBC], [BEC], [CDE])) pvt
Tested here: http://rextester.com/XUTJ68690
In the src subquery, you need to reshape your data so that each row holds an id, a place, and its count. From there a pivot will work.
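For the sample data, the src subquery boils down to one (id, col, val) row per id/suffix pair, roughly:
id  col  val
1   ABC  5
2   BEC  12
2   CDE  6
2   BEC  9
3   BBC  5
3   BBC  4
The PIVOT's SUM then collapses the repeats (BEC: 12 + 9 = 21, BBC: 5 + 4 = 9).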
If the count is always immediately after the place, the following query will generate a data set for pivoting.
The result data set before pivoting has the following columns:
id, placename, count
select placeTable.id, placeTable.datavalue, countTable.datavalue
from
(select *, row_number() over (order by id, %%physloc%%) as rownum
from test
where isnumeric(datavalue) = 1
) as countTable
join
(select *, row_number() over (order by id, %%physloc%%) as rownum
from test
where isnumeric(datavalue) <> 1
) as placeTable
on countTable.id = placeTable.id and
countTable.rownum = placeTable.rownum
Tested on SQL Fiddle (MS SQL Server): http://sqlfiddle.com/#!6/701c91/18
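Since %%physloc%% is an undocumented physical-location token, pairing the rows by the numeric suffix that placeN/countN share may feel safer; a sketch against the same test table:
select p.id,
       p.datavalue as place,
       cast(c.datavalue as int) as cnt
from test p
join test c
  on c.id = p.id
 and right(c.ColumnName, len(c.ColumnName) - 5) = right(p.ColumnName, len(p.ColumnName) - 5)
where p.ColumnName like 'place%'
  and c.ColumnName like 'count%'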
Here is one other approach, using the PIVOT operator with dynamic SQL:
declare @Col varchar(2000) = '',
        @Query varchar(2000) = ''

set @Col = stuff(
    (select ',' + QUOTENAME(DataValue)
     from [table]
     where isnumeric(DataValue) = 0
     group by DataValue
     for xml path('')), 1, 1, '')

set @Query = 'select id, ' + @Col + ' from
(
    select id, DataValue,
           cast((case when isnumeric(DataValue) = 1 then DataValue
                      else lead(DataValue) over (order by id) end) as int) Value
    from [table]
) as a
PIVOT
(
    sum(Value) for DataValue in (' + @Col + ')
)pvt'

EXECUTE (@Query)
Note: I used the lead() function to pull the next row's data whenever the current row holds a character (place) value, so each place gets paired with the numeric count that follows it.
Result:
id  ABC   BBC   BEC   CDE
1   5     NULL  NULL  NULL
2   NULL  NULL  21    6
3   NULL  9     NULL  NULL
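If zeros are preferred over those NULLs, a second column list (the @ColIsNull variable below is a hypothetical helper, not part of the original answer) can wrap each pivoted column with isnull() in the outer select, mirroring the COALESCE approach shown earlier:
declare @ColIsNull varchar(2000)
set @ColIsNull = stuff(
    (select ', isnull(' + QUOTENAME(DataValue) + ', 0) as ' + QUOTENAME(DataValue)
     from [table]
     where isnumeric(DataValue) = 0
     group by DataValue
     for xml path('')), 1, 1, '')
-- use @ColIsNull in place of @Col in the outer "select id, ..." of @Query;
-- the IN (...) list inside the PIVOT still uses @Col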

Get result from select without repeated records next to each other

I have a table with these records:
City Name  Seq
London     1
London     2
London     3
Madrid     4
London     5
Porto      6
The problem is how to get the result as a single string, merging all rows but collapsing consecutive repeats.
Result: London-Madrid-London-Porto
Another option if you're on SQL Server 2012+ ... LAG()
Example
Declare @YourTable Table ([City Name] varchar(50), [Seq] int)
Insert Into @YourTable Values
 ('London', 1)
,('London', 2)
,('London', 3)
,('Madrid', 4)
,('London', 5)
,('Porto', 6)

Select Stuff((Select '-' + Value
              From (
                    Select top 1000
                           Value = case when [City Name] = lag([City Name], 1) over (Order By Seq)
                                        then null
                                        else [City Name] end
                    From @YourTable
                    Order By Seq
                   ) A
              For XML Path('')), 1, 1, '')
Returns
London-Madrid-London-Porto
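On SQL Server 2017 and later, STRING_AGG can replace the FOR XML PATH trick; it skips NULLs, so the same LAG approach works (a sketch against the same @YourTable):
Select String_Agg(Value, '-') Within Group (Order By Seq)
From (
      Select Seq,
             Value = case when [City Name] = lag([City Name], 1) over (Order By Seq)
                          then null
                          else [City Name] end
      From @YourTable
     ) A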
How about this?
declare @table table (CityName varchar(64), seq int)
insert into @table
values
('London', 1),
('London', 2),
('London', 3),
('Madrid', 4),
('London', 5),
('Porto', 6)

--find the next row that isn't the same city name (t2seq)
;with cte as(
    select distinct
        t.CityName
        ,t.seq
        ,min(t2.seq) as t2seq
    from @table t
    left join @table t2 on
        t2.seq > t.seq
        and t2.CityName <> t.CityName
    group by
        t.CityName
        ,t.seq),
--limit the result set to a distinct list
cte2 as(
    select distinct
        CityName
        ,seq = isnull(t2seq, 9999999)
    from cte)
--use stuff to concat it together
select distinct
    stuff((select '-' + t2.CityName
           from cte2 t2
           order by seq
           FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '')
from cte2

TSQL Recursive Query Update Temp Table

I have a query that is recursively going through my employee ORG and getting a list of all people that report up to the VP. This query is working as intended:
DECLARE @pit AS DATETIME = GETDATE();
DECLARE @table TABLE (
    mgrQID VARCHAR (64),
    QID VARCHAR (64),
    NTID VARCHAR (64),
    FullName VARCHAR (256),
    lvl INT,
    metadate DATETIME,
    totalCount INT );

WITH empList (mgrQID, QID, NTID, FullName, lvl, metadate)
AS (SELECT TOP 1 mgrQID,
           QID,
           NTID,
           FirstName + ' ' + LastName,
           0,
           Meta_LogDate
    FROM dbo.EmployeeTable_Historical
    WHERE QID IN (SELECT director
                  FROM dbo.attritionDirectors)
          AND Meta_LogDate <= @pit
    ORDER BY Meta_LogDate DESC
    UNION ALL
    SELECT b.mgrQID,
           b.QID,
           b.NTID,
           b.FirstName + ' ' + b.LastName,
           lvl + 1,
           b.Meta_LogDate
    FROM empList AS a CROSS APPLY dbo.Fetch_DirectsHistorical_by_qid (a.QID, @pit) AS b)
-- Insert into the counts table
INSERT INTO @table (mgrQID, QID, NTID, FullName, lvl, metadate, totalCount)
SELECT empList.mgrQID,
       empList.QID,
       empList.NTID,
       empList.FullName,
       empList.lvl,
       empList.metadate,
       '0'
FROM empList
ORDER BY lvl
OPTION (MAXRECURSION 10);
As you can see, I have a column called totalCount, which I set to 0 when inserting from the first recursive query.
I now have a second query that goes through all of the people in that table variable and finds the total number of reports rolling up to each of them.
For example, if a director had 3 managers and each manager had 3 employees, that would be 12 people reporting up to the director: the 9 employees plus the 3 managers.
This comes from the query below:
;WITH a
AS (SELECT mgrQID AS direct,
           QID
    FROM @table AS t
    WHERE QID IN (SELECT QID
                  FROM @table)
    UNION ALL
    SELECT a.direct,
           t.QID
    FROM @table AS t
    INNER JOIN a
        ON t.mgrQID = a.QID)
--subtracting 1 because it is also counting the manager
SELECT direct,
       count(*) - 1 AS totalCount
FROM a
GROUP BY direct
OPTION (MAXRECURSION 10);
My question is...
How can I update totalCount in @table with the count I get from the second query? QID and direct are the two fields the queries have in common.
Try this (keep the recursive CTE from your second query, and replace its final SELECT with an UPDATE that joins the grouped counts back to the table variable):
update t
set t.totalCount = c.totalCount
from @table t
join (select direct, count(*) - 1 as totalCount
      from a
      group by direct) c
    on c.direct = t.QID
option (maxrecursion 10)

Converting updated column values to a table as rows

ID  State  Name     Department     City
1   O      George   Sales          Phoenix
1   N      George   Sales          Denver
2   O      Michael  Order Process  San diego
2   N      Michael  Marketing      San jose
I have a situation where I need to convert the above table's values to the following format (consider the top row to be the column names):
ID  Column      OldValue       New Value
1   Department  Phoenix        Denver
2   Department  Order Process  Marketing
2   City        San diego      San jose
That is, I need to capture the changed column values for a table from its old and new records and store them in a different table. The problem is that we have many tables like this, and the column names and number of columns differ for each table.
If anyone can come up with a solution, that would be greatly appreciated!
Thank you in advance.
Is this what you want?
ID  Column      OldValue       New Value
1   City        Phoenix        Denver
2   Department  Order Process  Marketing
2   City        San Diego      San jose
Here is the dynamic code:
DECLARE @sqlStm varchar(max);
DECLARE @sqlSelect varchar(max);
DECLARE @tablename varchar(200);
SET @tablename = 'testtable';
-- Assume table has ID column and State column.
SET @sqlSelect = ''
SET @sqlStm = 'WITH old AS
(
SELECT *
FROM '+@tablename+'
WHERE State=''O''
), new AS
(
SELECT *
FROM '+@tablename+'
WHERE State=''N''
)';
DECLARE @aCol varchar(128)
DECLARE curCols CURSOR FOR
SELECT column_name
FROM information_schema.columns
WHERE table_name = @tablename
AND UPPER(column_name) NOT IN ('ID','STATE')
OPEN curCols
FETCH curCols INTO @aCol
WHILE (@@FETCH_STATUS = 0)
BEGIN
    SET @sqlStm = @sqlStm +
', changed'+@aCol+' AS
(
SELECT n.ID, '''+@aCol+''' AS [Column], o.['+@aCol+'] AS oldValue, n.['+@aCol+'] AS newValue
FROM new n
JOIN old o ON n.ID = o.ID AND n.['+@aCol+'] != o.['+@aCol+']
)'
    IF LEN(@sqlSelect) > 0 SET @sqlSelect = @sqlSelect + ' UNION ALL '
    SET @sqlSelect = @sqlSelect + '
SELECT * FROM changed'+@aCol
    FETCH curCols INTO @aCol
END
CLOSE curCols
DEALLOCATE curCols
SET @sqlSelect = @sqlSelect + '
ORDER BY id, [Column]'
PRINT @sqlStm+@sqlSelect
EXEC (@sqlStm+@sqlSelect)
Which in my test outputs the following:
WITH old AS
(
SELECT *
FROM testtable
WHERE State='O'
), new AS
(
SELECT *
FROM testtable
WHERE State='N'
), changedName AS
(
SELECT n.ID, 'Name' AS [Column], o.[Name] AS oldValue, n.[Name] AS newValue
FROM new n
JOIN old o ON n.ID = o.ID AND n.[Name] != o.[Name]
), changedDepartment AS
(
SELECT n.ID, 'Department' AS [Column], o.[Department] AS oldValue, n.[Department] AS newValue
FROM new n
JOIN old o ON n.ID = o.ID AND n.[Department] != o.[Department]
), changedCity AS
(
SELECT n.ID, 'City' AS [Column], o.[City] AS oldValue, n.[City] AS newValue
FROM new n
JOIN old o ON n.ID = o.ID AND n.[City] != o.[City]
)
SELECT * FROM changedName UNION ALL
SELECT * FROM changedDepartment UNION ALL
SELECT * FROM changedCity
ORDER BY id, [Column]
Original answer below:
I would do it like this -- because I think it is clearer than other ways which might be faster:
with old as
(
    Select ID, Name, Department, City
    From table1
    Where State = 'O'
), new as
(
    Select ID, Name, Department, City
    From table1
    Where State = 'N'
), oldDepartment as
(
    Select n.ID, 'Department' as [Column], o.Department as oldValue, n.Department as newValue
    From new n
    join old o on n.ID = o.ID and n.Department != o.Department
), oldCity as
(
    Select n.ID, 'City' as [Column], o.City as oldValue, n.City as newValue
    From new n
    join old o on n.ID = o.ID and n.City != o.City
)
select * from oldDepartment
union all
select * from oldCity
Depending on many things (size of tables and indexes etc) it might actually be faster than using pivots or cases or grouping. It really depends on your data. If this is a one-off run I'd just go for the easiest to grok.
The cleanest approach is probably to unpivot the data and then use aggregation. This does require custom coding for each table, which you might be able to generalize by using some form of dynamic SQL.
For your particular example, here is an illustration of what to do:
select id, col,
max(case when OldNew = 'Old' then value end) as OldValue,
max(case when OldNew = 'New' then value end) as NewValue
from ((select ID, OldNew, 'Name' as col, Name as value
from t
) union all
(select ID, OldNew, 'Department' as col, Department as value
from t
) union all
(select ID, OldNew, 'City' as col, City as value
from t
)
) unpvt
group by id, col
having max(value) <> min(value) and max(value) is not null;
This is for illustration purposes. The unpivot can be done more efficiently than using union all, particularly when there are many scans. Here is a more efficient version, although the exact syntax depends on the database:
select id, col,
max(case when OldNew = 'Old' then value end) as OldValue,
max(case when OldNew = 'New' then value end) as NewValue
from (select ID, OldNew, cols.col,
(case when cols.col = 'Name' then Name
when cols.col = 'Department' then Department
when cols.col = 'City' then City
end) as value
from t cross join
(select 'Name' as col union all select 'Department' union all select 'City') cols
) unpvt
group by id, col
having max(value) <> min(value) and max(value) is not null;
This is more efficient because it will typically only scan your table once, rather than once for each column as in the union all version.
In either version, there is an implicit assumption that all the columns have the same character type. This is implicit in the format you are converting to, where all the values are in a single column.
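If the columns do not all share a character type, each branch of the CASE can be cast to a common type before aggregating; a sketch of the second version with an arbitrary varchar(255) cast (the length is an assumption, not taken from the original):
select id, col,
       max(case when OldNew = 'Old' then value end) as OldValue,
       max(case when OldNew = 'New' then value end) as NewValue
from (select ID, OldNew, cols.col,
             (case when cols.col = 'Name' then cast(Name as varchar(255))
                   when cols.col = 'Department' then cast(Department as varchar(255))
                   when cols.col = 'City' then cast(City as varchar(255))
              end) as value
      from t cross join
           (select 'Name' as col union all select 'Department' union all select 'City') cols
     ) unpvt
group by id, col
having max(value) <> min(value) and max(value) is not null;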