Converting updated column values to a table as rows - sql

ID  State  Name     Department     City
1   O      George   Sales          Phoenix
1   N      George   Sales          Denver
2   O      Michael  Order Process  San diego
2   N      Michael  Marketing      San jose
I have a situation where I need to convert the above table's values to the following format (consider the top row to be the column names):
ID  Column      OldValue       New Value
1   Department  Phoenix        Denver
2   Department  Order Process  Marketing
2   City        San diego      San jose
That is, I need to capture the changed column values for a table from its old and new records and record them in a different table. But the problem is that we have many tables like that, and the column names and number of columns are different for each table.
If anyone can come up with a solution, that would be greatly appreciated!
Thank you in advance.

Is this what you want?
ID  Column      OldValue       New Value
1   City        Phoenix        Denver
2   Department  Order Process  Marketing
2   City        San Diego      San jose
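For reference, here is a minimal setup for the sample data that the code below was tested against (the column types are assumptions, since the question does not give them):
-- Sample data setup; column types are assumptions
CREATE TABLE testtable (
    ID         INT,
    [State]    CHAR(1),
    [Name]     VARCHAR(50),
    Department VARCHAR(50),
    City       VARCHAR(50)
);

INSERT INTO testtable (ID, [State], [Name], Department, City) VALUES
    (1, 'O', 'George',  'Sales',         'Phoenix'),
    (1, 'N', 'George',  'Sales',         'Denver'),
    (2, 'O', 'Michael', 'Order Process', 'San diego'),
    (2, 'N', 'Michael', 'Marketing',     'San jose');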
Here is the dynamic code:
DECLARE @sqlStm varchar(max);
DECLARE @sqlSelect varchar(max);
DECLARE @tablename varchar(200);
SET @tablename = 'testtable';
-- Assume the table has an ID column and a State column.
SET @sqlSelect = ''
SET @sqlStm = 'WITH old AS
(
SELECT *
FROM '+@tablename+'
WHERE State=''O''
), new AS
(
SELECT *
FROM '+@tablename+'
WHERE State=''N''
)';
DECLARE @aCol varchar(128)
DECLARE curCols CURSOR FOR
SELECT column_name
FROM information_schema.columns
WHERE table_name = @tablename
AND UPPER(column_name) NOT IN ('ID','STATE')
OPEN curCols
FETCH curCols INTO @aCol
WHILE (@@FETCH_STATUS = 0)
BEGIN
SET @sqlStm = @sqlStm +
', changed'+@aCol+' AS
(
SELECT n.ID, '''+@aCol+''' AS [Column], o.['+@aCol+'] AS oldValue, n.['+@aCol+'] AS newValue
FROM new n
JOIN old o ON n.ID = o.ID AND n.['+@aCol+'] != o.['+@aCol+']
)'
IF LEN(@sqlSelect) > 0 SET @sqlSelect = @sqlSelect + ' UNION ALL '
SET @sqlSelect = @sqlSelect + '
SELECT * FROM changed'+@aCol
FETCH curCols INTO @aCol
END
CLOSE curCols
DEALLOCATE curCols
SET @sqlSelect = @sqlSelect + '
ORDER BY id, [Column]'
PRINT @sqlStm+@sqlSelect
EXEC (@sqlStm+@sqlSelect)
In my test, this outputs the following:
WITH old AS
(
SELECT *
FROM testtable
WHERE State='O'
), new AS
(
SELECT *
FROM testtable
WHERE State='N'
), changedName AS
(
SELECT n.ID, 'Name' AS [Column], o.[Name] AS oldValue, n.[Name] AS newValue
FROM new n
JOIN old o ON n.ID = o.ID AND n.[Name] != o.[Name]
), changedDepartment AS
(
SELECT n.ID, 'Department' AS [Column], o.[Department] AS oldValue, n.[Department] AS newValue
FROM new n
JOIN old o ON n.ID = o.ID AND n.[Department] != o.[Department]
), changedCity AS
(
SELECT n.ID, 'City' AS [Column], o.[City] AS oldValue, n.[City] AS newValue
FROM new n
JOIN old o ON n.ID = o.ID AND n.[City] != o.[City]
)
SELECT * FROM changedName UNION ALL
SELECT * FROM changedDepartment UNION ALL
SELECT * FROM changedCity
ORDER BY id, [Column]
Original answer below:
I would do it like this -- because I think it is clearer than other ways which might be faster:
with old as
(
    Select ID, Name, Department, City
    From table1
    Where State='O'
), new as
(
    Select ID, Name, Department, City
    From table1
    Where State='N'
), oldDepartment as
(
    Select n.ID, 'Department' as [Column], o.Department as oldValue, n.Department as newValue
    From new n
    join old o on n.ID = o.ID and n.Department != o.Department
), oldCity as
(
    Select n.ID, 'City' as [Column], o.City as oldValue, n.City as newValue
    From new n
    join old o on n.ID = o.ID and n.City != o.City
)
select * from oldDepartment
union all
select * from oldCity
Depending on many things (size of tables, indexes, etc.) it might actually be faster than using pivots or cases or grouping. It really depends on your data. If this is a one-off run, I'd just go for whatever is easiest to grok.

The cleanest approach is probably to unpivot the data and then use aggregation. This does require custom coding for each table, which you might be able to generalize by using some form of dynamic SQL.
For your particular example, here is an illustration of what to do:
select id, col,
max(case when OldNew = 'Old' then value end) as OldValue,
max(case when OldNew = 'New' then value end) as NewValue
from ((select ID, OldNew, 'Name' as col, Name as value
from t
) union all
(select ID, OldNew, 'Department' as col, Department as value
from t
) union all
(select ID, OldNew, 'City' as col, City as value
from t
)
) unpvt
group by id, col
having max(value) <> min(value) and max(value) is not null;
This is for illustration purposes. The unpivot can be done more efficiently than using union all, particularly when the union all version would require many scans of the table. Here is a more efficient version, although the exact syntax depends on the database:
select id, col,
max(case when OldNew = 'Old' then value end) as OldValue,
max(case when OldNew = 'New' then value end) as NewValue
from (select ID, OldNew, cols.col,
(case when cols.col = 'Name' then Name
when cols.col = 'Department' then Department
when cols.col = 'City' then City
end) as value
from t cross join
(select 'Name' as col union all select 'Department' union all select 'City') cols
) unpvt
group by id, col
having max(value) <> min(value) and max(value) is not null;
This is more efficient because it will typically only scan your table once, rather than once for each column as in the union all version.
In either version, there is an implicit assumption that all the columns have the same character type. This is implicit in the format you are converting to, where all the values are in a single column.
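If the columns do not all share a character type, one way to satisfy that assumption is to cast each column to a common character type inside the unpivot. Here is a minimal sketch of the cross-join version with explicit casts (nvarchar(255) is an arbitrary choice of length):
select id, col,
       max(case when OldNew = 'Old' then value end) as OldValue,
       max(case when OldNew = 'New' then value end) as NewValue
from (select ID, OldNew, cols.col,
             -- cast every column to one common character type before comparing
             (case when cols.col = 'Name' then cast(Name as nvarchar(255))
                   when cols.col = 'Department' then cast(Department as nvarchar(255))
                   when cols.col = 'City' then cast(City as nvarchar(255))
              end) as value
      from t cross join
           (select 'Name' as col union all select 'Department' union all select 'City') cols
     ) unpvt
group by id, col
having max(value) <> min(value) and max(value) is not null;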

Related

Trying to Sum up Cross-Tab Data in SQL

I have a table where every ID has one or more places, and each place comes with a count. Places can be repeated within IDs. It is stored in rows like so:
ID  ColumnName  DataValue
1   place1      ABC
1   count1      5
2   place1      BEC
2   count1      12
2   place2      CDE
2   count2      6
2   place3      BEC
2   count3      9
3   place1      BBC
3   count1      5
3   place2      BBC
3   count2      4
Ultimately, I want a table where every possible place name is its own column, and the count per place per ID is summed up, like so:
ID  ABC  BEC  CDE  BBC
1   5    0    0    0
2   0    21   6    0
3   0    0    0    9
I don't know the best way to go about this. There are around 50 different possible place names, so specifically listing them out in a query isn't ideal. I know I ultimately have to pivot the data, but I don't know if I should do it before or after I sum up the counts. And whether it's before or after, I haven't been able to figure out how to go about summing it up.
Any ideas/help would be greatly appreciated. At this point, I'm having a hard time finding where to even start. I've seen a few posts with similar problems, but nothing quite as convoluted as this.
EDIT:
Right now I'm working with this to pivot the table, but it leaves me with columns named place1, place2, ..., count1, count2, ..., and I don't know how to appropriately sum up the counts and make new columns with the place names.
DECLARE @cols NVARCHAR(MAX), @query NVARCHAR(MAX);
SET @cols = STUFF(
(
SELECT DISTINCT
','+QUOTENAME(c.[ColumnName])
FROM #temp c FOR XML PATH(''), TYPE
).value('.', 'nvarchar(max)'), 1, 1, '');
SET @query = 'SELECT [ID], '+@cols+' from (SELECT [ID],
[DataValue] AS [amount],
[ColumnName] AS [category]
FROM #temp
)x pivot (max(amount) for category in ('+@cols+')) p';
EXECUTE (@query);
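For reference, here is a minimal setup of #temp matching the sample data above (the column types are assumptions):
-- Sample data setup; column types are assumptions
CREATE TABLE #temp (ID INT, ColumnName VARCHAR(20), DataValue VARCHAR(50));

INSERT INTO #temp (ID, ColumnName, DataValue) VALUES
    (1, 'place1', 'ABC'), (1, 'count1', '5'),
    (2, 'place1', 'BEC'), (2, 'count1', '12'),
    (2, 'place2', 'CDE'), (2, 'count2', '6'),
    (2, 'place3', 'BEC'), (2, 'count3', '9'),
    (3, 'place1', 'BBC'), (3, 'count1', '5'),
    (3, 'place2', 'BBC'), (3, 'count2', '4');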
Your table structure is pretty bad. You'll need to normalize your data before you can attempt to pivot it. Try this:
;WITH IDs AS
(
SELECT DISTINCT
id
,ColId = RIGHT(ColumnName, LEN(ColumnName) - 5)
,Place = datavalue
FROM #temp
WHERE ISNUMERIC(datavalue) = 0
)
,Counts AS
(
SELECT DISTINCT
id
,ColId = RIGHT(ColumnName, LEN(ColumnName) - 5)
,Cnt = CAST(datavalue AS INT)
FROM #temp
WHERE ISNUMERIC(datavalue) = 1
)
SELECT
piv.id
,ABC = ISNULL(piv.ABC, 0)
,BEC = ISNULL(piv.BEC, 0)
,CDE = ISNULL(piv.CDE, 0)
,BBC = ISNULL(piv.BBC, 0)
FROM (SELECT i.id, i.Place, c.Cnt FROM IDs i JOIN Counts c ON c.id = i.id AND c.ColId = i.ColId) src
PIVOT ( SUM(Cnt)
FOR Place IN ([ABC], [BEC], [CDE], [BBC])
) piv;
Doing it with dynamic SQL would yield the following:
-- Rebuild @cols from the distinct place names, since this pivot spreads on Place
SET @cols = STUFF(
(
SELECT DISTINCT
','+QUOTENAME(datavalue)
FROM #temp
WHERE ISNUMERIC(datavalue) = 0
FOR XML PATH(''), TYPE
).value('.', 'nvarchar(max)'), 1, 1, '');
SET @query =
';WITH IDs AS
(
SELECT DISTINCT
id
,ColId = RIGHT(ColumnName, LEN(ColumnName) - 5)
,Place = datavalue
FROM #temp
WHERE ISNUMERIC(datavalue) = 0
)
,Counts AS
(
SELECT DISTINCT
id
,ColId = RIGHT(ColumnName, LEN(ColumnName) - 5)
,Cnt = CAST(datavalue AS INT)
FROM #temp
WHERE ISNUMERIC(datavalue) = 1
)
SELECT [ID], '+@cols+'
FROM
(
SELECT i.id, i.Place, c.Cnt
FROM IDs i
JOIN Counts c ON c.id = i.id AND c.ColId = i.ColId
) src
PIVOT
(SUM(Cnt) FOR Place IN ('+@cols+')) piv;';
EXECUTE (@query);
Try this out:
SELECT id,
COALESCE(ABC, 0) AS ABC,
COALESCE(BBC, 0) AS BBC,
COALESCE(BEC, 0) AS BEC,
COALESCE(CDE, 0) AS CDE
FROM
(SELECT id,
MIN(CASE WHEN columnname LIKE 'place%' THEN datavalue END) AS col,
CAST(MIN(CASE WHEN columnname LIKE 'count%' THEN datavalue END) AS INT) AS val
FROM t
GROUP BY id, RIGHT(columnname, 1)
) src
PIVOT
(SUM(val)
FOR col in ([ABC], [BBC], [BEC], [CDE])) pvt
Tested here: http://rextester.com/XUTJ68690
In the src query, you need to re-format your data, so that you have a unique id and place in each row. From there a pivot will work.
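To see that intermediate shape, the src subquery can be run on its own (here pointed at the #temp table used elsewhere in this question; the rows I would expect for the sample data are shown as comments):
-- The src derived table, run standalone.
-- Expected rows for the sample data (place/count pairs matched by their suffix):
--   id  col  val
--   1   ABC    5
--   2   BEC   12
--   2   CDE    6
--   2   BEC    9
--   3   BBC    5
--   3   BBC    4
SELECT id,
       MIN(CASE WHEN columnname LIKE 'place%' THEN datavalue END) AS col,
       CAST(MIN(CASE WHEN columnname LIKE 'count%' THEN datavalue END) AS INT) AS val
FROM #temp
GROUP BY id, RIGHT(columnname, 1);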
If the count is always immediately after the place, the following query will generate a data set for pivoting.
The result data set before pivoting has the following columns:
id, placename, count
select placeTable.id, placeTable.datavalue, countTable.datavalue
from
(select *, row_number() over (order by id, %%physloc%%) as rownum
from test
where isnumeric(datavalue) = 1
) as countTable
join
(select *, row_number() over (order by id, %%physloc%%) as rownum
from test
where isnumeric(datavalue) <> 1
) as placeTable
on countTable.id = placeTable.id and
countTable.rownum = placeTable.rownum
Tested on SQL Fiddle (MS SQL Server): http://sqlfiddle.com/#!6/701c91/18
Here is another approach, using the PIVOT operator with dynamic SQL:
declare @Col varchar(2000) = '',
        @Query varchar(2000) = ''
set @Col = stuff(
    (select ','+QUOTENAME(DataValue)
     from #temp where isnumeric(DataValue) = 0
     group by DataValue for xml path('')),1,1,'')
set @Query = 'select id, '+@Col+' from
(
    select id, DataValue,
        cast((case when isnumeric(DataValue) = 1 then DataValue else lead(DataValue) over (order by id) end) as int) Value
    from #temp
) as a
PIVOT
(
    sum(Value) for DataValue in ('+@Col+')
)pvt'
EXECUTE (@Query)
Note: I have used the lead() function to access the next row's data when the current value is a character string, so each place name is paired with the numeric count that follows it.
Result:
id  ABC   BBC   BEC   CDE
1   5     NULL  NULL  NULL
2   NULL  NULL  21    6
3   NULL  9     NULL  NULL

Split/separate column into multiple columns

I'm completely stuck and I cannot find any answers for this problem, even though it seems quite simple. Can I separate that 'description' column without making a new table?
For now I just have this simple query:
select item_id, description
from data
where item_id = '123'
With that code it looks like this:
item_id  description
123      A
123      B
123      C
But I'd like to make it look like this:
item_id  desc_1  desc_2  desc_3
123      A       B       C
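For reference, a minimal setup matching the sample (the table name comes from the query above; column types are assumptions, and a second item_id, 234, is included only so the dynamic pivot result shown further down is reproducible):
-- Sample data setup; column types are assumptions
CREATE TABLE data (item_id VARCHAR(10), description VARCHAR(10));

INSERT INTO data (item_id, description) VALUES
    ('123', 'A'),
    ('123', 'B'),
    ('123', 'C'),
    ('234', 'B'),
    ('234', 'C'),
    ('234', 'd');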
Use conditional aggregation with the help of a case expression:
select item_id,
       max(case when description = 'A' then description end) [desc_1],
       max(case when description = 'B' then description end) [desc_2],
       max(case when description = 'C' then description end) [desc_3]
from data
group by item_id
EDIT: The dynamic pivot way would look like this for SQL Server:
declare @col varchar(max), @q varchar(max)
set @col = stuff(
    (select distinct ','+quotename('desc_'+cast(row_number() over(partition by Item_id order by description) as varchar))
     from data for xml path('')),
    1,1,'')
set @q = 'select * from
(
    select *,
        ''desc_''+cast(row_number() over(partition by Item_id order by description) as varchar) rn
    from data
)a
PIVOT
(
    max(description) for rn in ('+@col+')
)p'
EXEC (@q)
Result:
item_id  desc_1  desc_2  desc_3
123      A       B       C
234      B       C       d
First declare the distinct column names (like ABC, DEF, GHI) and their values, then write the dynamic pivot:
DECLARE @COLS AS NVARCHAR(MAX)
DECLARE @query AS NVARCHAR(MAX)
SET @COLS=STUFF((select ',' + QUOTENAME(Course) from cst_coursedetails where programid=1 FOR XML PATH(''),TYPE).value('.','NVARCHAR(MAX)'),1,1,'')
SET @query =' SELECT * FROM(SELECT B.COLLCODE, B.COLLNAME,C.Course AS COURSE,D.ROLLNAME, E.ExamType, COUNT (A.HTNO) AS PRECOUNT FROM TableOne AS A
INNER JOIN TableTwo AS B ON A.COLLCODE=B.COLLCODE AND A.PROGRAMID=B.PROGRAMID
INNER JOIN TableThree AS C ON C.CourseId=A.CourseId
INNER JOIN TableFour AS D ON D.ROLLID=A.ROLLID
INNER JOIN CST_EXAMTYPE AS E ON E.ExamTypeId=A.ExamTypeId
WHERE A.STATUSID !=17 AND a.ProgramId=1 AND A.ROLLID IN(1,2,3) AND A.EXAMTYPEID IN(1) GROUP BY B.COLLNAME,B.COLLCODE,C.Course,d.ROLLNAME ,E.ExamType
)SRC
PIVOT(
SUM(PRECOUNT) FOR COURSE IN('+@COLS+')
)AS PIV'
EXECUTE (@query)
Here afs is the WITH clause name and giga is the alias for the LISTAGG result (this answer uses Oracle's LISTAGG and REGEXP_SUBSTR):
with afs as
(
    select item_id,
           LISTAGG(description, ',') WITHIN GROUP (ORDER BY item_id) AS giga
    from test_jk
    group by item_id
)
select item_id,
       REGEXP_SUBSTR(giga, '[^,]+', 1, 1) AS desc_1,
       REGEXP_SUBSTR(giga, '[^,]+', 1, 2) AS desc_2,
       REGEXP_SUBSTR(giga, '[^,]+', 1, 3) AS desc_3
from afs;

SQL update column depending on other values in same column

I have a table similar to this:
Index  Name      Type
--------------------------------
1      'Apple'   'Fruit'
2      'Carrot'  'Vegetable'
3      'Orange'  'Fruit'
3      'Mango'   'Fruit'
4      'Potato'  'Vegetable'
and would like to change it to this:
Index  Name      Type
--------------------------------
1      'Apple'   'Fruit 1'
2      'Carrot'  'Vegetable 1'
3      'Orange'  'Fruit 2'
3      'Mango'   'Fruit 3'
4      'Potato'  'Vegetable 2'
Any chance to do this in a smart update query (= without cursors)?
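For reference, a minimal setup matching the sample (the table name [tbl] matches the first answer below; column types are assumptions):
-- Sample data setup; table name and column types are assumptions
CREATE TABLE [tbl] ([index] INT, [Name] VARCHAR(50), [Type] VARCHAR(50));

INSERT INTO [tbl] ([index], [Name], [Type]) VALUES
    (1, 'Apple',  'Fruit'),
    (2, 'Carrot', 'Vegetable'),
    (3, 'Orange', 'Fruit'),
    (3, 'Mango',  'Fruit'),
    (4, 'Potato', 'Vegetable');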
You can run an update with a join to get the row_number() within each [type] group for each row, and then concatenate that value with [type], using [index] as the glue column:
update t1 set t1.[type] = t1.[type] + ' ' + cast(t2.[rn] as varchar(3))
from [tbl] t1
join ( select [index]
, row_number() over (partition by [type] order by [index]) as [rn]
from [tbl]
) t2 on t1.[index] = t2.[index]
Suppose that your table has a primary key called ID; then you can run the following:
update fruits
set Type = newType
from
(
select f.id
,f.[index]
,f.Name
,f.[Type]
,Type + ' '+ cast((select COUNT(*)
from fruits
where Type = f.Type
and Fruits.id <= f.id) as varchar(10)) as newType
from fruits f
) t
where t.id = fruits.id
SELECT [Index], Name, Type, ROW_NUMBER() OVER (PARTITION BY Type ORDER BY [Index]) AS RowNum
INTO #temp
FROM table_name

UPDATE #temp
SET Type = Type + ' ' + CAST(RowNum AS NVARCHAR(15))

UPDATE t1
SET Type = t2.Type
FROM table_name t1
JOIN #temp t2 ON t2.[Index] = t1.[Index]
You can use the fact that you can update a CTE:
with cte as (
    select type, row_number() over (partition by type order by [index]) rn
    from [tbl]
)
update cte set type = type + ' ' + cast(rn as varchar(10))

order by using terms in where clause

I have a simple select query -
SELECT ID, NAME
FROM PERSONS
WHERE NAME IN ('BBB', 'AAA', 'ZZZ')
-- ORDER BY ???
I want this result to be ordered by the sequence in which the NAMEs are provided, that is,
the 1st row in the result set should be the one with NAME = BBB, the 2nd AAA, and the 3rd ZZZ.
Is this possible in SQL Server? I would like to know how to do it if there is a simple and short way of doing it, maybe 5-6 lines of code.
You could create an ordered split function:
CREATE FUNCTION [dbo].[SplitStrings_Ordered]
(
    @List NVARCHAR(MAX),
    @Delimiter NVARCHAR(255)
)
RETURNS TABLE
AS
RETURN (SELECT [Index] = ROW_NUMBER() OVER (ORDER BY Number), Item
    FROM (SELECT Number, Item = SUBSTRING(@List, Number,
              CHARINDEX(@Delimiter, @List + @Delimiter, Number) - Number)
          FROM (SELECT ROW_NUMBER() OVER (ORDER BY s1.[object_id])
                FROM sys.all_objects AS s1 CROSS JOIN sys.all_objects AS s2) AS n(Number)
          WHERE Number <= CONVERT(INT, LEN(@List))
            AND SUBSTRING(@Delimiter + @List, Number, LEN(@Delimiter)) = @Delimiter
    ) AS y);
Then alter your input slightly (a single comma-separated list instead of three individual strings):
SELECT p.ID, p.NAME
FROM dbo.PERSONS AS p
INNER JOIN dbo.SplitStrings_Ordered('BBB,AAA,ZZZ', ',') AS s
ON p.NAME = s.Item
ORDER BY s.[Index];
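On SQL Server 2022 and Azure SQL Database, STRING_SPLIT accepts an enable_ordinal argument, so the helper function is not needed; here is a minimal sketch assuming one of those versions:
SELECT p.ID, p.NAME
FROM dbo.PERSONS AS p
INNER JOIN STRING_SPLIT('BBB,AAA,ZZZ', ',', 1) AS s  -- 1 = enable_ordinal
    ON p.NAME = s.value
ORDER BY s.ordinal;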
You could store the names in a table variable with a sort order. Example:
DECLARE @Names TABLE (
    Name VARCHAR(MAX),
    SortOrder INT
)
INSERT INTO @Names (Name, SortOrder) VALUES ('BBB', 1)
INSERT INTO @Names (Name, SortOrder) VALUES ('AAA', 2)
INSERT INTO @Names (Name, SortOrder) VALUES ('ZZZ', 3)

SELECT P.ID, P.NAME
FROM PERSONS P
JOIN @Names N ON P.Name = N.Name
ORDER BY N.SortOrder
There is no way to do this using the order of the values in the IN predicate itself; however, you could create a table of constants that gives each value a sort order:
SELECT p.ID, p.NAME
FROM PERSONS p
INNER JOIN
( VALUES
('BBB', 1),
('AAA', 2),
('ZZZ', 3)
) t (Name, SortOrder)
ON p.Name = t.Name
ORDER BY t.SortOrder;
The other (and in my opinion less attractive) solution is to use CASE:
SELECT ID, NAME
FROM PERSONS
WHERE NAME IN ('BBB', 'AAA', 'ZZZ')
ORDER BY CASE Name
WHEN 'BBB' THEN 1
WHEN 'AAA' THEN 2
WHEN 'ZZZ' THEN 3
END;
SELECT ID, NAME
FROM PERSONS
WHERE NAME IN ('BBB', 'AAA', 'ZZZ')
ORDER BY CASE
WHEN NAME = 'BBB' THEN 1
WHEN NAME = 'AAA' THEN 2
WHEN NAME = 'ZZZ' THEN 3
END ASC
I think this must work:
ORDER BY CASE
WHEN NAME = 'BBB' THEN 0
WHEN NAME = 'AAA' THEN 1
WHEN NAME = 'ZZZ' THEN 2
ELSE 3
END ASC

Grouping runs of data

SQL experts,
Is there an efficient way to group runs of data together using SQL, or is it going to be more efficient to process the data in code?
For example, if I have the following data:
ID|Name
01|Harry Johns
02|Adam Taylor
03|John Smith
04|John Smith
05|Bill Manning
06|John Smith
I need to display this:
Harry Johns
Adam Taylor
John Smith (2)
Bill Manning
John Smith
@Matt: Sorry, I had trouble formatting the data using an embedded HTML table; it worked in the preview but not in the final display.
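For reference, here is a minimal setup for the sample data (column types are assumptions; the answers below refer to this table variously as myTable or NameTest):
-- Sample data setup; column types are assumptions
CREATE TABLE myTable (ID INT, Name VARCHAR(50));

INSERT INTO myTable (ID, Name) VALUES
    (1, 'Harry Johns'),
    (2, 'Adam Taylor'),
    (3, 'John Smith'),
    (4, 'John Smith'),
    (5, 'Bill Manning'),
    (6, 'John Smith');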
Try this:
select n.name,
(select count(*)
from myTable n1
where n1.name = n.name and n1.id >= n.id and (n1.id <=
(
select isnull(min(nn.id), (select max(id) + 1 from myTable))
from myTable nn
where nn.id > n.id and nn.name <> n.name
)
))
from myTable n
where not exists (
select 1
from myTable n3
where n3.name = n.name and n3.id < n.id and n3.id > (
select isnull(max(n4.id), (select min(id) - 1 from myTable))
from myTable n4
where n4.id < n.id and n4.name <> n.name
)
)
I think that'll do what you want. Bit of a kludge though.
Phew! After a few edits I think I have all the edge cases sorted out.
I hate cursors with a passion... but here's a dodgy cursor version...
Declare @NewName Varchar(50)
Declare @OldName Varchar(50)
Declare @CountNum int
Set @CountNum = 0

DECLARE nameCursor CURSOR FOR
SELECT Name
FROM NameTest
ORDER BY ID -- walk the rows in ID order so the runs come back in sequence

OPEN nameCursor
FETCH NEXT FROM nameCursor INTO @NewName
WHILE @@FETCH_STATUS = 0
BEGIN
    if @OldName <> @NewName
    BEGIN
        Print @OldName + ' (' + Cast(@CountNum as Varchar(50)) + ')'
        Set @CountNum = 0
    END
    SELECT @OldName = @NewName
    FETCH NEXT FROM nameCursor INTO @NewName
    Set @CountNum = @CountNum + 1
END
Print @OldName + ' (' + Cast(@CountNum as Varchar(50)) + ')'

CLOSE nameCursor
DEALLOCATE nameCursor
My solution, just for kicks (this was a fun exercise): no cursors, no iterations, but I do have a helper field.
-- Setup test table
DECLARE @names TABLE (
    id INT IDENTITY(1,1),
    name NVARCHAR(25) NOT NULL,
    grp UNIQUEIDENTIFIER NULL
)
INSERT @names (name)
SELECT 'Harry Johns' UNION ALL
SELECT 'Adam Taylor' UNION ALL
SELECT 'John Smith' UNION ALL
SELECT 'John Smith' UNION ALL
SELECT 'Bill Manning' UNION ALL
SELECT 'Bill Manning' UNION ALL
SELECT 'Bill Manning' UNION ALL
SELECT 'John Smith' UNION ALL
SELECT 'Bill Manning'

-- Set the first id's group to a newid()
UPDATE n
SET grp = newid()
FROM @names n
WHERE n.id = (SELECT MIN(id) FROM @names)

-- Set the group to a newid() if the name does not equal the previous
UPDATE n
SET grp = newid()
FROM @names n
INNER JOIN @names b
    ON (n.ID - 1) = b.ID
    AND ISNULL(b.Name, '') <> n.Name

-- Set groups that are null to the previous group
-- Keep on doing this until all groups have been set
WHILE (EXISTS(SELECT 1 FROM @names WHERE grp IS NULL))
BEGIN
    UPDATE n
    SET grp = b.grp
    FROM @names n
    INNER JOIN @names b
        ON (n.ID - 1) = b.ID
        AND n.grp IS NULL
END

-- Final output
SELECT MIN(id) AS id_start,
       MAX(id) AS id_end,
       name,
       count(1) AS consecutive
FROM @names
GROUP BY grp,
         name
ORDER BY id_start
/*
Results:
id_start id_end name consecutive
1 1 Harry Johns 1
2 2 Adam Taylor 1
3 4 John Smith 2
5 7 Bill Manning 3
8 8 John Smith 1
9 9 Bill Manning 1
*/
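For comparison, here is a set-based sketch of the same run grouping using the classic ROW_NUMBER difference (gaps-and-islands) pattern, assuming the @names sample table from the script above and SQL Server 2005 or later:
-- Rows of the same name with consecutive ids share the same difference between
-- an overall row number and a per-name row number; that difference marks a run.
SELECT MIN(id)  AS id_start,
       MAX(id)  AS id_end,
       name,
       COUNT(*) AS consecutive
FROM (SELECT id, name,
             ROW_NUMBER() OVER (ORDER BY id)
           - ROW_NUMBER() OVER (PARTITION BY name ORDER BY id) AS rungrp
      FROM @names) AS runs
GROUP BY name, rungrp
ORDER BY id_start;
This should return the same rows as the commented results above.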
Well, this:
select Name, count(Id)
from MyTable
group by Name
will give you this:
Harry Johns, 1
Adam Taylor, 1
John Smith, 2
Bill Manning, 1
and this (MS SQL syntax):
select Name +
case when ( count(Id) > 1 )
then ' ('+cast(count(Id) as varchar)+')'
else ''
end
from MyTable
group by Name
will give you this:
Harry Johns
Adam Taylor
John Smith (2)
Bill Manning
Did you actually want that other John Smith on the end of your results?
EDIT: Oh I see, you want consecutive runs grouped. In that case, I'd say you need a cursor or to do it in your program code.
How about this:
declare @tmp table (Id int, Nm varchar(50));

insert @tmp select 1, 'Harry Johns';
insert @tmp select 2, 'Adam Taylor';
insert @tmp select 3, 'John Smith';
insert @tmp select 4, 'John Smith';
insert @tmp select 5, 'Bill Manning';
insert @tmp select 6, 'John Smith';

select * from @tmp order by Id;

select Nm, count(1) from
(
    select Id, Nm,
        case when exists (
            select 1 from @tmp t2
            where t2.Nm = t1.Nm
            and (t2.Id = t1.Id + 1 or t2.Id = t1.Id - 1))
        then 1 else 0 end as Run
    from @tmp t1
) truns group by Nm, Run
[Edit] That can be shortened a bit
select Nm, count(1) from (select Id, Nm, case when exists (
    select 1 from @tmp t2 where t2.Nm = t1.Nm
    and abs(t2.Id - t1.Id) = 1) then 1 else 0 end as Run
    from @tmp t1) t group by Nm, Run
For this particular case, all you need to do is group by the name and ask for the count, like this:
select Name, count(*)
from MyTable
group by Name
That'll get you the count for each name as a second column.
You can get it all as one column by concatenating like this:
select Name + ' (' + cast(count(*) as varchar) + ')'
from MyTable
group by Name