Given two or more rows selected for merging, one of them is identified as the template row. The other rows should merge their data into any NULL-valued columns of the template.
Example data:
Id  Name   Address      City         State  Active  Email           Date
1   Acme1  NULL         NULL         NULL   NULL    blah@yada.com   3/1/2011
2   Acme1  1234 Abc Rd  Springfield  OR     0       blah@gmail.com  1/12/2012
3   Acme2  NULL         NULL         NULL   1       blah@yahoo.com  4/19/2012
Say a user has chosen the row with Id 1 as the template row, and the rows with Ids 2 and 3 are to be merged into row 1 and then deleted. Any NULL columns in row Id 1 should be filled with the most recent non-null value (see the Date column), if one exists, and non-null values already present in row Id 1 are to be left as is. The result of the query on the above data should be exactly this:
Id  Name   Address      City         State  Active  Email          Date
1   Acme1  1234 Abc Rd  Springfield  OR     1       blah@yada.com  3/1/2011
Notice that the Active value is 1 and not 0, because row Id 3 has the most recent date.
P.S. Is there any way to do this without explicitly defining/knowing beforehand what all the column names are? The actual table I'm working with has a ton of columns, with new ones being added all the time. Is there a way to look up all the column names in the table, and then use that in a subquery or temp table to do the job?
You might do it by ordering the rows by a template flag first and then by date, so that the template row is always last and, among the rest, the most recent row comes latest. Each row is assigned a number in that order. Using max() we find, for every column, the highest row number that has a non-null value. Then we select each column from the row matching that maximum.
; with rows as (
select test.*,
-- Template row must be last - how do you decide which one is template row?
-- In this case template row is the one with id = 1
row_number() over (order by case when id = 1 then 1 else 0 end,
date) rn
from test
-- Your list of rows to merge goes here
-- where id in ( ... )
),
-- Finding first occupied row per column
positions as (
select
max (case when Name is not null then rn else 0 end) NamePosition,
max (case when Address is not null then rn else 0 end) AddressPosition,
max (case when City is not null then rn else 0 end) CityPosition,
max (case when State is not null then rn else 0 end) StatePosition,
max (case when Active is not null then rn else 0 end) ActivePosition,
max (case when Email is not null then rn else 0 end) EmailPosition,
max (case when Date is not null then rn else 0 end) DatePosition
from rows
)
-- Finally join these columns into one row
select
(select Name from rows cross join Positions where rn = NamePosition) name,
(select Address from rows cross join Positions where rn = AddressPosition) Address,
(select City from rows cross join Positions where rn = CityPosition) City,
(select State from rows cross join Positions where rn = StatePosition) State,
(select Active from rows cross join Positions where rn = ActivePosition) Active,
(select Email from rows cross join Positions where rn = EmailPosition) Email,
(select Date from rows cross join Positions where rn = DatePosition) Date
from test
-- Any id will suffice, or even DISTINCT
where id = 1
You can check it at SQL Fiddle.
EDIT:
The cross joins in the last section could actually be inner joins on rows.rn = xxxPosition. It works as written, but changing them to inner joins would be an improvement.
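For example, the final SELECT could be rewritten like this (just a sketch that reuses the rows and positions CTEs defined above; left joins preserve the original behaviour when a column is NULL in every selected row, since its position is then 0 and matches nothing):
-- Sketch: one join back to "rows" per column instead of correlated subqueries
select n.Name, a.Address, c.City, s.State, act.Active, e.Email, d.Date
from positions p
left join rows n on n.rn = p.NamePosition
left join rows a on a.rn = p.AddressPosition
left join rows c on c.rn = p.CityPosition
left join rows s on s.rn = p.StatePosition
left join rows act on act.rn = p.ActivePosition
left join rows e on e.rn = p.EmailPosition
left join rows d on d.rn = p.DatePosition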
It's not so complicated.
At first..
DECLARE @templateID INT = 1
..so you remember which row is treated as the template..
Now find the latest NOT NULL values (excluding the template row). The easiest way is a TOP 1 subquery per column:
SELECT
  (SELECT TOP 1 Name FROM DataTab WHERE Name IS NOT NULL AND NOT ID = @templateID ORDER BY Date DESC) AS LatestName,
  (SELECT TOP 1 Address FROM DataTab WHERE Address IS NOT NULL AND NOT ID = @templateID ORDER BY Date DESC) AS LatestAddress
  -- add more columns here
Wrap the above into a CTE (Common Table Expression) so you have a nice input for your UPDATE..
WITH Latest_CTE (CTE_LatestName, CTE_LatestAddress) -- add more columns here; I like the CTE_ prefix to distinguish source columns from target columns..
AS
-- Define the CTE query.
(
    SELECT
      (SELECT TOP 1 Name FROM DataTab WHERE Name IS NOT NULL AND NOT ID = @templateID ORDER BY Date DESC) AS LatestName,
      (SELECT TOP 1 Address FROM DataTab WHERE Address IS NOT NULL AND NOT ID = @templateID ORDER BY Date DESC) AS LatestAddress
      -- add more columns here
)
UPDATE
<update statement here (below)>
Now do a smart UPDATE of your template row using ISNULL; it acts as a conditional update, updating only when the target column is NULL:
WITH
<common expression statement here (above)>
UPDATE DataTab
SET
Name = ISNULL(Name, CTE_LatestName), -- if Name is NULL, set it to CTE_LatestName; otherwise keep it
Address = ISNULL(Address, CTE_LatestAddress)
-- add more columns here..
WHERE ID = @templateID
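Put together, with just the two columns shown so far and assuming the same DataTab table, the sketch looks like this; note the CROSS JOIN, which is needed so the CTE columns are in scope for the SET clause:
DECLARE @templateID INT = 1
;WITH Latest_CTE (CTE_LatestName, CTE_LatestAddress) AS
(
    SELECT
      (SELECT TOP 1 Name FROM DataTab WHERE Name IS NOT NULL AND NOT ID = @templateID ORDER BY Date DESC),
      (SELECT TOP 1 Address FROM DataTab WHERE Address IS NOT NULL AND NOT ID = @templateID ORDER BY Date DESC)
)
UPDATE d
SET Name = ISNULL(d.Name, c.CTE_LatestName),
    Address = ISNULL(d.Address, c.CTE_LatestAddress)
FROM DataTab AS d
CROSS JOIN Latest_CTE AS c
WHERE d.ID = @templateID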
And the last task is to delete the rows other than the template row..
DELETE FROM DataTab WHERE NOT ID = @templateID
Clear?
For dynamic columns, you need to write a solution using dynamic SQL.
You can query sys.columns and sys.tables to get the list of columns you need, then loop backwards once for each NULL column, finding the first non-null row for that column and updating your output row with it. Once the loop reaches 0 you have a complete row, which you can then display to the user.
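For instance, the column list itself can be pulled with a query like this (a minimal sketch; the table name is illustrative):
-- List the column names of a given table
SELECT c.name
FROM sys.columns AS c
JOIN sys.tables AS t ON t.object_id = c.object_id
WHERE t.name = 'Dummy'; -- illustrative table name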
I should pay attention to posting dates. In any case, here's a solution using dynamic SQL to build out an update statement. It should give you something to build from, anyway.
There's some extra code in there to validate the results along the way, but I tried to comment in a way that made that non-vital code apparent.
CREATE TABLE
dbo.Dummy
(
[ID] int ,
[Name] varchar(30),
[Address] varchar(40) null,
[City] varchar(30) NULL,
[State] varchar(2) NULL,
[Active] tinyint NULL,
[Email] varchar(30) NULL,
[Date] date NULL
);
--
INSERT dbo.Dummy
VALUES
    (1, 'Acme1', NULL, NULL, NULL, NULL, 'blah@yada.com', '3/1/2011'),
    (2, 'Acme1', '1234 Abc Rd', 'Springfield', 'OR', 0, 'blah@gmail.com', '1/12/2012'),
    (3, 'Acme2', NULL, NULL, NULL, 1, 'blah@yahoo.com', '4/19/2012');
DECLARE
    @TableName nvarchar(128) = 'Dummy',
    @TemplateID int = 1,
    @SetStmtList nvarchar(max) = '',
    @LoopCounter int = 0,
    @ColumnCount int = 0,
    @SQL nvarchar(max) = ''
;
--
--Create a table to hold the column names
DECLARE
    @ColumnList table
    (
        ColumnID tinyint IDENTITY,
        ColumnName nvarchar(128)
    );
--
--Get the column names
INSERT @ColumnList
(
ColumnName
)
SELECT
c.name
FROM
sys.columns AS c
JOIN
sys.tables AS t
ON
t.object_id = c.object_id
WHERE
t.name = @TableName;
--
--Create loop boundaries to build out the SQL statement
SELECT
    @ColumnCount = MAX( l.ColumnID ),
    @LoopCounter = MIN( l.ColumnID )
FROM
    @ColumnList AS l;
--
--Loop over the column names
WHILE @LoopCounter <= @ColumnCount
BEGIN
    --Dynamically construct SET statements for each column except ID (see the WHERE clause)
    SELECT
        @SetStmtList = @SetStmtList + ',' + l.ColumnName + ' = COALESCE(' + l.ColumnName
            + ', (SELECT TOP 1 ' + l.ColumnName + ' FROM ' + @TableName
            + ' WHERE ' + l.ColumnName + ' IS NOT NULL AND ID <> ' + CAST(@TemplateID AS nvarchar(max))
            + ' ORDER BY Date DESC)) '
    FROM
        @ColumnList AS l
    WHERE
        l.ColumnID = @LoopCounter
        AND
        l.ColumnName <> 'ID';
    --
    SELECT
        @LoopCounter = @LoopCounter + 1;
    --
END;
--TESTING - Validate the initial table values
SELECT * FROM dbo.Dummy ;
--
--Get rid of the leading comma in the SetStmtList
SET @SetStmtList = SUBSTRING( @SetStmtList, 2, LEN( @SetStmtList ) - 1 );
--Build out the rest of the UPDATE statement
SET @SQL = 'UPDATE ' + @TableName + ' SET ' + @SetStmtList + ' WHERE ID = ' + CAST(@TemplateID AS nvarchar(max));
--Then execute the update
EXEC sys.sp_executesql
    @SQL;
--
--TESTING - Validate the updated table values
SELECT * FROM dbo.Dummy ;
--
--Build out the DELETE statement
SET @SQL = 'DELETE FROM ' + @TableName + ' WHERE ID <> ' + CAST(@TemplateID AS nvarchar(max));
--Execute the DELETE
EXEC sys.sp_executesql
    @SQL;
--
--TESTING - Validate the final table values
SELECT * FROM dbo.Dummy;
--
DROP TABLE dbo.Dummy;
Related
On my table, I have records that contain ','.
Id Name
1 Here is the result
2 of your examination.
3 ,
4 New Opening for the position of
5 PT Teacher, Science Lab.
6 ,
So, in a cursor, when I find ',' I want to merge the previous two rows' values into the first of them.
DECLARE @ID int
DECLARE @Name nvarchar(500)
DECLARE MergeCursor CURSOR FOR
select ID, NAME from TEST_TABLE
OPEN MergeCursor
FETCH NEXT FROM MergeCursor into @ID, @NAME
WHILE (@@FETCH_STATUS = 0)
BEGIN
    if (@Name = ',')
        select * from TEST_TABLE where ID = (select max(ID) from TEST_TABLE where ID < @ID)
    FETCH NEXT FROM MergeCursor into @ID, @NAME
END
CLOSE MergeCursor
DEALLOCATE MergeCursor
In the cursor, how can I get the previous two rows, update the value in the first row, delete the second and third rows, and update the Id as well?
In the end, I want this output:
Id Name
1 Here is the result of your examination.
2 New Opening for the position of PT Teacher, Science Lab.
WITH
grouped AS
(
SELECT
SUM(CASE WHEN name=',' THEN 1 ELSE 0 END)
OVER (ORDER BY id)
AS group_id,
id,
name
FROM
TEST_TABLE
)
SELECT
group_id + 1 AS id,
STRING_AGG(name, ' ') WITHIN GROUP (ORDER BY id) AS name
FROM
grouped
WHERE
name <> ','
GROUP BY
group_id
ORDER BY
group_id
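If the goal is to actually rewrite TEST_TABLE rather than just select the merged result (note that STRING_AGG needs SQL Server 2017 or later), one sketch, assuming the table really has only the ID and NAME columns, is to materialize the output and reload:
WITH grouped AS
(
    SELECT SUM(CASE WHEN name = ',' THEN 1 ELSE 0 END) OVER (ORDER BY id) AS group_id,
           id, name
    FROM TEST_TABLE
)
SELECT group_id + 1 AS id,
       STRING_AGG(name, ' ') WITHIN GROUP (ORDER BY id) AS name
INTO #merged
FROM grouped
WHERE name <> ','
GROUP BY group_id;

DELETE FROM TEST_TABLE;
INSERT INTO TEST_TABLE (ID, NAME) SELECT id, name FROM #merged;
DROP TABLE #merged;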
For the selected rows in a column, how do I update each row sequentially from the beginning to the end, with each row's value incremented by 1 (or a certain number)? I know this can be done in Excel in a few seconds, but I couldn't figure out how to achieve it in SQL Server. For instance:
customer id is NULL now
update customer id with every row incremented by 1 (i.e. first row = 1, second row = 2, ..... nth row = n)
ship-to party customer id
0002018092 NULL
0002008127 NULL
0002000129 NULL
0002031592 NULL
0002034232 NULL
desired output
ship-to party customer id
0002018092 1
0002008127 2
0002000129 3
0002031592 4
0002034232 5
Also, for the selected rows in a column, how do I update each row with its row number? I know there is a row_number() function, but I didn't succeed in producing the desired result. For instance:
column A is NULL now
update column A with every row incremented by 1 (i.e. first row = row number 1, second row = row number 2, ..... nth row = row number n)
Any demonstration would be very helpful. Thanks.
Example: suppose I want to add a value to each value in column SomeIntField in table tblUser.
There are two easy ways of doing this.
First: this just adds 1 to SomeIntField in every row:
update tblUser set SomeIntField = SomeIntField + 1
Second: this adds an incrementing value; the first row gets +1, the second +2, and so on...
declare @number int = 0
update tblUser
set @number = @number + 1,
    SomeIntField = isnull(SomeIntField, 0) + @number
EDIT: based on your last comment this might be what you want
declare @table table (shiptoparty varchar(50), customer_id int)
insert into @table (shiptoparty, customer_id)
values ('0002018092', NULL), ('0002008127', NULL), ('0002000129', NULL), ('0002031592', NULL), ('0002034232', NULL)
declare @number int = 0
update @table
set @number = @number + 1,
    customer_id = isnull(customer_id, 0) + @number
select * from @table
The result of this is :
shiptoparty | customer_id
----------- | -----------
0002018092 | 1
0002008127 | 2
0002000129 | 3
0002031592 | 4
0002034232 | 5
Rather than using a self-referencing variable, use a CTE:
WITH CTE AS (
SELECT [Your Incrementing Column],
ROW_NUMBER() OVER (ORDER BY [Columns to Order By]) AS RN
FROM YourTable)
UPDATE CTE
SET [Your Incrementing Column] = RN;
Edit: To prove a point that ALL rows will be updated:
CREATE TABLE #Sample (String varchar(50),
IncrementingInt int);
INSERT INTO #Sample (String)
VALUES ('sdkfjasdf'),
('dfydsfdfg'),
('sdfgsdfg45yfg'),
('dfgf54d'),
('dsft43tdc'),
('f6gytrntrfu7m45'),
('5d6f45wgby54'),
('g34h636j'),
('jw'),
('h6nw54m'),
('g54j747jm5e5f4w5gsft'),
('ns67mw54mk8o7hr'),
('h45j4w5h4');
SELECT *
FROM #Sample;
WITH CTE AS(
SELECT IncrementingInt,
ROW_NUMBER() OVER (ORDER BY String) AS RN
FROM #Sample)
UPDATE CTE
SET IncrementingInt = RN;
SELECT *
FROM #Sample;
DROP TABLE #Sample;
GO
To update each row with its row number, try the below:
CREATE TABLE tmp(Id INT IDENTITY(1,1), Value INT)
INSERT INTO tmp(value) VALUES(1),(2),(3),(4),(5)
UPDATE T
SET
T.Value = B.RowNo
FROM tmp AS T
INNER JOIN (SELECT Id, ROW_NUMBER()OVER(ORDER BY Id) AS RowNo FROM tmp)AS B
ON T.Id = B.Id
Don't overcomplicate it. Try the simple method given below:
alter table table_name drop column customer_id
go
alter table table_name add customer_id int IDENTITY(1,1)
go
First problem:
You want to increase the value in a certain column by 1 (or another number) in every row. Try this:
update TABLE_NAME set column_to_increase = column_to_increase + 1
Second problem:
You want to get the row number for only certain rows. Solution: first create a column holding all the row numbers, then filter the rows:
select * from (
    select column1, column2, ..., columnN, row_number() over (order by (select null)) as [rn] from MY_TABLE
) t where *condition*
FYI: the select null in the over clause does nothing in particular; it's there because window functions such as row_number() must have an over clause, some of them require an order by, and ordering by (select null) leaves the order unspecified.
I created a temp table #test containing 3 fields: ColumnName, TableName, and Id.
I would like to see which rows in the #test table (i.e. which columns in their respective tables) are not empty. That is, for every column name in the ColumnName field, and for the corresponding table in the TableName field, I would like to see whether that column is empty or not. I tried some things (see below) but didn't get anywhere. Help, please.
declare @LoopCounter INT = 1, @maxloopcounter int, @test varchar(100),
        @test2 varchar(100), @check int
set @maxloopcounter = (select count(TableName) from #test)
while @LoopCounter <= @maxloopcounter
begin
    DECLARE @PropIDs TABLE (tablename varchar(max), id int)
    Insert into @PropIDs (tablename, id)
    SELECT [tableName], id FROM #test
    where id = @LoopCounter
    set @test2 = (select columnname from #test where id = @LoopCounter)
    declare @sss varchar(max)
    set @sss = (select tablename from @PropIDs where id = @LoopCounter)
    set @check = (select count(@test2)
                  from (select tablename
                        from @PropIDs
                        where id = @LoopCounter) A
                 )
    print @test2
    print @sss
    print @check
    set @LoopCounter = @LoopCounter + 1
end
In order to use variables as column names and table names in your @check = query, you will need to use dynamic SQL.
There is most likely a better way to do this, but I can't think of one offhand. Here is what I would do.
Use the select and declare a cursor rather than a while loop as you have it. That way you don't have to count on sequential IDs. The cursor would fetch the fields columnname, id and tablename.
In the loop, build a dynamic SQL statement:
Set @Sql = 'Select Count(*) Cnt Into #Temp2 From ' + @tablename + ' Where ' + @columnname + ' Is Not Null And ' + @columnname + ' <> '''''
Exec(@Sql)
Then check #Temp2 for a value greater than 0 and, if that is what you want, use the @id that was fetched to update your temp table. Putting the result into a scalar variable rather than a temp table would be preferable, but I can't remember the best way to do that, and using a temp table lets you use an update join, so it works well in my opinion.
https://www.mssqltips.com/sqlservertip/1599/sql-server-cursor-example/
http://www.sommarskog.se/dynamic_sql.html
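For what it's worth, here is a sketch of that idea using sp_executesql with an OUTPUT parameter instead of the #Temp2 table; @tablename and @columnname are assumed to hold the values fetched by the cursor, and QUOTENAME is only there to bracket the names:
DECLARE @Sql nvarchar(max), @check int;
SET @Sql = N'SELECT @cnt = COUNT(*) FROM ' + QUOTENAME(@tablename)
         + N' WHERE ' + QUOTENAME(@columnname) + N' IS NOT NULL AND '
         + QUOTENAME(@columnname) + N' <> ''''';
EXEC sys.sp_executesql @Sql, N'@cnt int OUTPUT', @cnt = @check OUTPUT;
-- @check now holds the count of non-NULL, non-empty values for that column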
Found a way to extract all non-empty tables from the schema, then just joined with the initial temp table that I had created.
select A.tablename, B.[row_count]
from (select * from #test) A
left join
(SELECT r.table_name, r.row_count, r.[object_id]
FROM sys.tables t
INNER JOIN (
SELECT OBJECT_NAME(s.[object_id]) table_name, SUM(s.row_count) row_count, s.[object_id]
FROM sys.dm_db_partition_stats s
WHERE s.index_id in (0,1)
GROUP BY s.[object_id]
) r on t.[object_id] = r.[object_id]
WHERE r.row_count > 0 ) B
on A.[TableName] = B.[table_name]
WHERE ROW_COUNT > 0
order by b.row_count desc
How about this one: a bitmask computed column that checks for NULLability. The value in the bitmask tells you whether each column is NULL or not, counting in base 2.
CREATE TABLE FindNullComputedMask
(ID int
,val int
,valstr varchar(3)
,NotEmpty as
CASE WHEN ID IS NULL THEN 0 ELSE 1 END
|
CASE WHEN val IS NULL THEN 0 ELSE 2 END
|
CASE WHEN valstr IS NULL THEN 0 ELSE 4 END
)
INSERT FindNullComputedMask
SELECT 1,1,NULL
INSERT FindNullComputedMask
SELECT NULL,2,NULL
INSERT FindNullComputedMask
SELECT 2,NULL, NULL
INSERT FindNullComputedMask
SELECT 3,3,3
SELECT *
FROM FindNullComputedMask
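To actually use the mask, filter on the relevant bit (ID = 1, val = 2, valstr = 4, as defined above). For example:
-- Rows where val is NULL (bit 2 not set)
SELECT * FROM FindNullComputedMask WHERE NotEmpty & 2 = 0
-- Rows where every column is populated (1 + 2 + 4 = 7)
SELECT * FROM FindNullComputedMask WHERE NotEmpty = 7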
Imagine the following two tables:
create table MainTable (
MainId integer not null, -- This is the index
Data varchar(100) not null
)
create table OtherTable (
MainId integer not null, -- MainId, Name combined are the index.
Name varchar(100) not null,
Status tinyint not null
)
Now I want to select all the rows from MainTable, while combining all the rows that match each MainId from OtherTable into a single field in the result set.
Imagine the data:
MainTable:
1, 'Hi'
2, 'What'
OtherTable:
1, 'Fish', 1
1, 'Horse', 0
2, 'Fish', 0
I want a result set like this:
MainId, Data, Others
1, 'Hi', 'Fish=1,Horse=0'
2, 'What', 'Fish=0'
What is the most elegant way to do this?
(Don't worry about the comma being in front or at the end of the resulting string.)
There is no really elegant way to do this in Sybase. Here is one method, though:
select
mt.MainId,
mt.Data,
Others = stuff((
max(case when seqnum = 1 then ','+Name+'='+cast(status as varchar(255)) else '' end) +
max(case when seqnum = 2 then ','+Name+'='+cast(status as varchar(255)) else '' end) +
max(case when seqnum = 3 then ','+Name+'='+cast(status as varchar(255)) else '' end)
), 1, 1, '')
from MainTable mt
left outer join
(select
ot.*,
row_number() over (partition by MainId order by status desc) as seqnum
from OtherTable ot
) ot
on mt.MainId = ot.MainId
group by
mt.MainId, mt.Data
That is, it enumerates the values in the second table. It then does conditional aggregation to get each value, using the stuff() function to handle the extra comma. The above works for the first three values. If you want more, then you need to add more clauses.
Well, here is how I implemented it in Sybase 13.x. This code has the advantage of not being limited to a fixed number of Names.
create proc
as
declare
    @MainId int,
    @Name varchar(100),
    @Status tinyint
create table #OtherTable (
    MainId int not null,
    CombStatus varchar(250) not null
)
declare OtherCursor cursor for
    select
        MainId, Name, Status
    from
        OtherTable
open OtherCursor
fetch OtherCursor into @MainId, @Name, @Status
while (@@sqlstatus = 0) begin -- run until there are no more
    if exists (select 1 from #OtherTable where MainId = @MainId) begin
        update #OtherTable
        set CombStatus = CombStatus + ',' + @Name + '=' + convert(varchar, @Status)
        where
            MainId = @MainId
    end else begin
        insert into #OtherTable (MainId, CombStatus)
        select
            MainId = @MainId,
            CombStatus = @Name + '=' + convert(varchar, @Status)
    end
    fetch OtherCursor into @MainId, @Name, @Status
end
close OtherCursor
select
mt.MainId,
mt.Data,
ot.CombStatus
from
MainTable mt
left join #OtherTable ot
on mt.MainId = ot.MainId
But it does have the disadvantage of using a cursor and a working table, which can - at least with a lot of data - make the whole process slow.
I have the following table layout. Each line value will always be unique. There will never be more than one instance of the same Id, Name, and Line.
Id Name Line
1 A Z
2 B Y
3 C X
3 C W
4 D W
I would like to query the data so that the Line field becomes a column. If the value exists, a 1 is applied in the field data, otherwise a 0. e.g.
Id Name Z Y X W
1 A 1 0 0 0
2 B 0 1 0 0
3 C 0 0 1 1
4 D 0 0 0 1
The field names W, X, Y, Z are just examples of field values, so I can't apply an operator to explicitly check for, say, 'X', 'Y', or 'Z'. These could change at any time and are not restricted to a finite set of values. The column names in the result set should reflect the unique field values as columns.
Any idea how I can accomplish this?
It's a standard pivot query.
If 1 represents a boolean indicator - use:
SELECT t.id,
t.name,
MAX(CASE WHEN t.line = 'Z' THEN 1 ELSE 0 END) AS Z,
MAX(CASE WHEN t.line = 'Y' THEN 1 ELSE 0 END) AS Y,
MAX(CASE WHEN t.line = 'X' THEN 1 ELSE 0 END) AS X,
MAX(CASE WHEN t.line = 'W' THEN 1 ELSE 0 END) AS W
FROM TABLE t
GROUP BY t.id, t.name
If 1 represents the number of records with that value for the group, use:
SELECT t.id,
t.name,
SUM(CASE WHEN t.line = 'Z' THEN 1 ELSE 0 END) AS Z,
SUM(CASE WHEN t.line = 'Y' THEN 1 ELSE 0 END) AS Y,
SUM(CASE WHEN t.line = 'X' THEN 1 ELSE 0 END) AS X,
SUM(CASE WHEN t.line = 'W' THEN 1 ELSE 0 END) AS W
FROM TABLE t
GROUP BY t.id, t.name
Edited following update in question
SQL Server does not support dynamic pivoting.
To do this you could either use dynamic SQL to generate a query along the following lines.
SELECT
Id ,Name,
ISNULL(MAX(CASE WHEN Line='Z' THEN 1 END),0) AS Z,
ISNULL(MAX(CASE WHEN Line='Y' THEN 1 END),0) AS Y,
ISNULL(MAX(CASE WHEN Line='X' THEN 1 END),0) AS X,
ISNULL(MAX(CASE WHEN Line='W' THEN 1 END),0) AS W
FROM T
GROUP BY Id ,Name
Or, an alternative which I have read about but not actually tried is to leverage the Access TRANSFORM function: set up an Access database with a linked table pointing at the SQL Server table, then query the Access database from SQL Server!
Here is the dynamic version
Test table
create table #test(id int,name char(1),line char(1))
insert #test values(1 , 'A','Z')
insert #test values(2 , 'B','Y')
insert #test values(3 , 'C','X')
insert #test values(4 , 'C','W')
insert #test values(5 , 'D','W')
insert #test values(5 , 'D','W')
insert #test values(5 , 'D','P')
Now run this
declare @names nvarchar(4000)
SELECT @names = ''
SELECT @names = @names + line + ', '
FROM (SELECT distinct line from #test) x
SELECT @names = LEFT(@names, (LEN(@names) - 1))
exec('
SELECT *
FROM(
SELECT DISTINCT Id, Name, Line
FROM #test
) AS pivTemp
PIVOT
( COUNT(Line)
FOR Line IN (' + @names + ' )
) AS pivTable ')
Now add one row to the table and run the query above again, and you will see the B column appear:
insert #test values(5 , 'D','B')
Caution: of course, all the usual problems with dynamic SQL apply. You could use sp_executesql, but since parameters are not used in the query like that, there is really no point.
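One cheap mitigation, if the line values could ever contain unsafe characters, is to wrap each value in QUOTENAME while building the column list; here is a sketch of just that part:
declare @names nvarchar(4000)
SELECT @names = ''
-- Build a bracketed, comma-separated column list: [B], [P], [W], ...
SELECT @names = @names + QUOTENAME(line) + ', '
FROM (SELECT distinct line from #test) x
SELECT @names = LEFT(@names, (LEN(@names) - 1))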
Assuming you have a finite number of values for Line that you could enumerate:
declare @MyTable table (
Id int,
Name char(1),
Line char(1)
)
insert into @MyTable
(Id, Name, Line)
select 1,'A','Z'
union all
select 2,'B','Y'
union all
select 3,'C','X'
union all
select 3,'C','W'
union all
select 4,'D','W'
SELECT Id, Name, Z, Y, X, W
FROM (SELECT Id, Name, Line
FROM @MyTable) up
PIVOT (count(Line) FOR Line IN (Z, Y, X, W)) AS pvt
ORDER BY Id
As you are using SQL Server, you could possibly use the PIVOT operator intended for this purpose.
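For example (a minimal sketch; YourTable stands in for the real table, and the column list is hard-coded, so it shares the limitation discussed above):
SELECT Id, Name, [Z], [Y], [X], [W]
FROM (SELECT Id, Name, Line FROM YourTable) AS src
PIVOT (COUNT(Line) FOR Line IN ([Z], [Y], [X], [W])) AS pvt
ORDER BY Id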
If you're doing this for a SQL Server Reporting Services (SSRS) report, or could possibly switch to using one, then stop now and go throw a Matrix control onto your report. Poof! You're done! Happy as a clam with your data pivoted.
Here's a rather exotic approach (using sample data from the old Northwind database). It's adapted from the version here, which no longer worked due to the deprecation of DBCC RENAMECOLUMN and the addition of PIVOT as a keyword.
set nocount on
create table Sales (
AccountCode char(5),
Category varchar(10),
Amount decimal(8,2)
)
--Populate table with sample data
insert into Sales
select customerID, 'Emp'+CAST(EmployeeID as char), sum(Freight)
from Northwind.dbo.orders
group by customerID, EmployeeID
create unique clustered index Sales_AC_C
on Sales(AccountCode,Category)
--Create table to hold data column names and positions
select A.Category,
count(distinct B.Category) AS Position
into #columns
from Sales A join Sales B
on A.Category >= B.Category
group by A.Category
create unique clustered index #columns_P on #columns(Position)
create unique index #columns_C on #columns(Category)
--Generate first column of Pivot table
select distinct AccountCode into Pivoted from Sales
--Find number of data columns to be added to Pivoted table
declare @datacols int
select @datacols = max(Position) from #columns
--Add data columns one by one in the correct order
declare @i int
set @i = 0
while @i < @datacols begin
    set @i = @i + 1
    --Add next data column to Pivoted table
    select P.*, isnull((
        select Amount
        from Sales S join #columns C
            on C.Position = @i
            and C.Category = S.Category
        where P.AccountCode = S.AccountCode), 0) AS X
    into PivotedAugmented
    from Pivoted P
    --Name new data column correctly
    declare @c sysname
    select @c = Category
    from #columns
    where Position = @i
    exec sp_rename '[dbo].[PivotedAugmented].[X]', @c, 'COLUMN'
--Replace Pivoted table with new table
drop table Pivoted
select * into Pivoted from PivotedAugmented
drop table PivotedAugmented
end
select * from Pivoted
go
drop table Pivoted
drop table #columns
drop table Sales