Dynamic SQL Procedure with Pivot displaying counts based on Date Range

I have a table which contains multiple user entries.
I want to pull counts of user entries based on a date range passed to a stored procedure.
start date: 11/9/2017
end date: 11/11/2017
However, the response needs to be dynamic based on the number of days in the date range.
Here is the desired format:
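(The example image from the original question is missing; based on the sample data in the answer below, the desired output would presumably look something like this, with one column per day in the range:)
UserId  Name        2017-11-09  2017-11-10  2017-11-11
1       John Doe    2           1           1
2       Mike Smith  1           3           NULL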

Now that you have provided examples, I have updated my answer with a solution based on the data you provided.
Note that you can change the date range and the query will update accordingly.
Bear in mind that this SQL query is for SQL Server:
create table #tbl1 (
[UserId] int
,[UserName] nvarchar(max)
,[EntryDateTime] datetime
);
insert into #tbl1 ([UserId],[UserName],[EntryDateTime])
values
(1,'John Doe','20171109')
,(1,'John Doe','20171109')
,(1,'John Doe','20171110')
,(1,'John Doe','20171111')
,(2,'Mike Smith','20171109')
,(2,'Mike Smith','20171110')
,(2,'Mike Smith','20171110')
,(2,'Mike Smith','20171110')
;
-- declare variables
declare
@p1 date
,@p2 date
,@diff int
,@counter1 int
,@counter2 int
,@dynamicSQL nvarchar(max)
;
-- set variables
set @p1 = '20171109'; -- ENTER THE START DATE IN THE FORMAT YYYYMMDD
set @p2 = '20171111'; -- ENTER THE END DATE IN THE FORMAT YYYYMMDD
set @diff = datediff(dd,@p1,@p2); -- used to calculate the difference in days
set @counter1 = 0; -- first counter to be used in while loop
set @counter2 = 0; -- second counter to be used in while loop
set @dynamicSQL = 'select pivotTable.[UserId] ,pivotTable.[UserName] as [Name] '; -- start of the dynamic SQL statement
-- to get the dates into the query in a dynamic way, you need to do a while loop (or use a cursor)
while (@counter1 <= @diff) -- <= so the end date is included as a column
begin
set @dynamicSQL += ',pivotTable.[' + convert(nvarchar(10),dateadd(dd,@counter1,@p1),120) + '] '
set @counter1 = (@counter1 +1)
end
-- continuation of the dynamic SQL statement
set @dynamicSQL += ' from (
select
t.[UserId]
,t.[UserName]
,cast(t.[EntryDateTime] as date) as [EntryDate]
,count(t.[UserId]) as [UserCount]
from #tbl1 as t
where
t.[EntryDateTime] >= ''' + convert(nvarchar(10),@p1,120) + ''' ' +
' and t.[EntryDateTime] <= ''' + convert(nvarchar(10),@p2,120) + ''' ' +
'group by
t.[UserId]
,t.[UserName]
,t.[EntryDateTime]
) as mainQuery
pivot (
sum(mainQuery.[UserCount]) for mainQuery.[EntryDate]
in ('
;
-- the second while loop which is used to create the columns in the pivot table
while (@counter2 <= @diff) -- <= so the end date is included as a column
begin
set @dynamicSQL += ',[' + convert(nvarchar(10),dateadd(dd,@counter2,@p1),120) + ']'
set @counter2 = (@counter2 +1)
end
-- continuation of the SQL statement
set @dynamicSQL += ')
) as pivotTable'
;
-- this is the easiest way I could think of to get rid of the leading comma in the query
set @dynamicSQL = replace(@dynamicSQL,'in (,','in (');
print @dynamicSQL -- included this so that you can see the SQL statement that is generated
exec sp_executesql @dynamicSQL; -- this will run the generated dynamic SQL statement
drop table #tbl1;
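For reference, the PRINT above should emit something close to the following generated statement (reconstructed by hand for the sample dates; the real output is unformatted):
select pivotTable.[UserId] ,pivotTable.[UserName] as [Name]
,pivotTable.[2017-11-09] ,pivotTable.[2017-11-10] ,pivotTable.[2017-11-11]
from (
select t.[UserId] ,t.[UserName] ,cast(t.[EntryDateTime] as date) as [EntryDate] ,count(t.[UserId]) as [UserCount]
from #tbl1 as t
where t.[EntryDateTime] >= '2017-11-09' and t.[EntryDateTime] <= '2017-11-11'
group by t.[UserId] ,t.[UserName] ,t.[EntryDateTime]
) as mainQuery
pivot (
sum(mainQuery.[UserCount]) for mainQuery.[EntryDate]
in ([2017-11-09],[2017-11-10],[2017-11-11])
) as pivotTable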
Let me know if that's what you were looking for.

If you are using MySQL, this will do what you want (each SUM counts the rows where the date comparison is true):
SELECT UserID,
UserName,
SUM(Date = '2017-11-09') '2017-11-09',
SUM(Date = '2017-11-10') '2017-11-10',
SUM(Date = '2017-11-11') '2017-11-11'
FROM src
GROUP BY UserID, UserName

If you are using SQL Server, you could try it with PIVOT; note that you have to pivot on the date part only, and the pivot column names go in brackets without quotes:
SELECT *
FROM
(SELECT userID, userName, CAST(EntryDateTime AS date) AS EntryDate
FROM t) src
PIVOT
(COUNT(userID)
FOR EntryDate IN ([2017-11-09], [2017-11-10], [2017-11-11])) pvt
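On SQL Server 2017 or later, the per-day column list can also be built without a WHILE loop. A minimal sketch, assuming the same #tbl1 sample table and an inclusive date range; STRING_AGG and a small tally of day offsets do the work the loops did above:
declare @p1 date = '20171109', @p2 date = '20171111';
declare @cols nvarchar(max), @sql nvarchar(max);
-- build one [yyyy-mm-dd] entry per day in the range
select @cols = string_agg(quotename(convert(nvarchar(10), dateadd(dd, n.i, @p1), 120)), ',')
               within group (order by n.i)
from (select top (datediff(dd, @p1, @p2) + 1)
             row_number() over (order by (select null)) - 1 as i
      from sys.all_objects) as n;
set @sql = N'select pivotTable.* from (
    select t.[UserId], t.[UserName],
           cast(t.[EntryDateTime] as date) as [EntryDate],
           count(t.[UserId]) as [UserCount]
    from #tbl1 as t
    group by t.[UserId], t.[UserName], cast(t.[EntryDateTime] as date)
) as mainQuery
pivot (sum(mainQuery.[UserCount]) for mainQuery.[EntryDate] in (' + @cols + ')) as pivotTable;';
exec sp_executesql @sql;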


SQL return values if row count > X

DECLARE @sql_string varchar(7000)
set @sql_string = (select top 1 statement from queries where name = 'report name')
EXECUTE (@sql_string)
@sql_string holds another SQL statement. This query works for me: it returns all the values from the query stored in the statement column of the queries table. From this, I need to figure out how to return the results only IF the number of rows returned exceeds a threshold (for my particular case, 25), and return nothing otherwise. I can't quite figure out how to get this conditional to work.
Much appreciated for any direction on this.
If all the queries return the same columns, you could simply store the data in a temporary table or table variable and then use logic such as:
select t.*
from #t t
where (select count(*) from #t) > 25;
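Spelled out end to end, that might look like the following sketch, assuming the dynamic query's columns are known up front so the temporary table can be declared (the column list here is hypothetical):
-- capture the dynamic query's output, then return it only if it is big enough
CREATE TABLE #t (col1 int, col2 varchar(100)); -- hypothetical column list matching the stored query
INSERT INTO #t
EXECUTE (@sql_string);
SELECT t.*
FROM #t t
WHERE (SELECT COUNT(*) FROM #t) > 25;
DROP TABLE #t;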
An alternative is to try constructing a new query from the existing query. I don't recommend trying to parse the existing string, if you can avoid that. Assuming that the query does not use CTEs or have an ORDER BY clause, for instance, something like this should work:
set @sql = '
with q as (
' + @sql + '
)
select q.*
from q
where (select count(*) from q) > 25
';
That did the trick @Gordon. Here was my final:
DECLARE @report_name varchar(100)
DECLARE @sql_string varchar(7000)
DECLARE @sql varchar(7000)
DECLARE @days int
set @report_name = 'Complex Pass Failed within 1 day'
set @days = 5
set @sql_string = (select top 1 statement from queries where name = @report_name )
set @sql = 'with q as (' + @sql_string + ') select q.* from q where (select count(*) from q) > ' + convert(varchar(100), @days)
EXECUTE (@sql)
Worked with 2 nuances:
The SQL returned could not include a trailing ";" character
The statement cannot include an "order by" clause
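Regarding the first nuance: if some stored statements might carry a trailing semicolon, it can be stripped defensively before wrapping (a small hedged addition, not part of the answer above):
SET @sql_string = RTRIM(@sql_string);
-- drop a trailing semicolon so the CTE wrapper stays valid
IF RIGHT(@sql_string, 1) = ';'
    SET @sql_string = LEFT(@sql_string, LEN(@sql_string) - 1);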

Insert into table the outcome of a select on that table using Row_Number

I am creating a query where I select data from a table, then select a number of rows from that result to insert into an identical table in another database, and then repeat the process to select the next batch of rows from the original table.
For reference, this is what I try to do (already built it for Oracle):
$" INSERT INTO {destination-table}
SELECT * FROM {original-table}
WHERE ROWID IN (SELECT B.RID
FROM (SELECT ROWID AS RID, rownum as RID2
FROM {original-table}
WHERE {Where Claus}
AND ROWNUM <= {recordsPerStatement * iteration}
) B WHERE RID2 > {recordsPerStatement * (iteration - 1)})"
This is put through a loop in .NET.
For SQL Server, however, I fail to get this done. The data I retrieve with:
$" Select B.* from (Select A.* from (Select Row_NUMBER()
OVER (order by %%physloc%%) As RowID, {original-table}.* FROM
{original-table} where {where-claus})
A Where A.RowID between {recordsPerStatement * (iteration - 1)}
AND {recordsPerStatement * iteration} B"
The problem here is that the above select produces an extra column (RowID) which prevents me from inserting the data into the destination table.
I have been looking at ways to get rid of the RowID column in the top select, or to insert data from the original table based on the data retrieved
(something like insert into destination-table select * from original-table where exists in (rest of select query))... but to no avail.
TLDR = Get rid of a RowID column used in calculations so the rows can be inserted into an identical table.
specifications:
A LOT (millions of rows) of data (therefore processing it in batches)
Unknown tables (so I cannot reference specific column names, as they are unknown)
needs to have an order (thus the row_number) so the same data is not copied twice
insert using a select query (as first retrieving the data and doing some magic locally would severely impact performance)
If necessary, additional variables can be added in here (like an order clause variable); however, any reference to data in the query will ALWAYS be a variable, and if I can find a way to not add more variables to the query, that would be preferable
I hope that someone has an idea on what I could look at further.
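For what it's worth, SQL Server can express this kind of pagination directly with OFFSET ... FETCH, which is essentially where the asker's later EDIT ends up. A sketch in plain T-SQL, with hypothetical names standing in for the question's placeholders:
-- @recordsPerStatement and @iteration are hypothetical parameters supplied per loop pass
INSERT INTO DestDb.dbo.DestTable
SELECT *
FROM dbo.SourceTable
WHERE SomeFilter = 1                                  -- stands in for {where-claus}
ORDER BY KeyColumn                                    -- must be deterministic so pages do not overlap
OFFSET (@recordsPerStatement * (@iteration - 1)) ROWS
FETCH NEXT @recordsPerStatement ROWS ONLY;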
This approach uses a temporary table to save the paginated data before processing it page by page. It has worked for me, but I am not sure whether you might have problems with very large data sets. You could put the whole thing in an SP and then call the SP with parameters from .NET. You will need to add a parameter for the destination table name and construct/execute an INSERT statement in the final loop.
-- Parameters
DECLARE @PageSize integer = 100;
DECLARE @TableName nVarchar(200) = 'WRD_WordHits';
DECLARE @OrderBy nVarchar(3000) = 'WordID'
STEP_010: BEGIN
-- Get the column definitions for the table
DECLARE @Cols int;
SELECT TABLE_NAME, ORDINAL_POSITION, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
, IS_NULLABLE
INTO #Tspec
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @TableName;
-- Number of columns
SET @Cols = @@ROWCOUNT;
END;
STEP_020: BEGIN
-- Create the temporary table that will hold the paginated data
CREATE TABLE #TT2 ( PageNumber int, LineNumber int, SSEQ int )
DECLARE @STMT nvarchar(3000);
END;
STEP_030: BEGIN
-- Add columns to #TT2 using the column definitions
DECLARE @Ord int = 0;
DECLARE @Colspec nvarchar(3000) = '';
DECLARE @AllCols nvarchar(3000) = '';
DECLARE @ColName nvarchar(200) = '';
WHILE @Ord < @Cols BEGIN
SELECT @Ord = @Ord + 1;
-- Get the column name and specification
SELECT @ColName = Column_Name
, @Colspec =
Column_Name + ' ' + DATA_TYPE + CASE WHEN CHARACTER_MAXIMUM_LENGTH IS NULL THEN ''
ELSE '(' + CAST(CHARACTER_MAXIMUM_LENGTH AS varchar(30) ) + ')' END
FROM #Tspec WHERE ORDINAL_POSITION = @Ord;
-- Create and execute statement to add the column and the columns list used later
SELECT @STMT = ' ALTER TABLE #TT2 ADD ' + @Colspec + ';'
, @AllCols = @AllCols + ', ' + @ColName ;
EXEC sp_ExecuteSQL @STMT;
END;
-- Remove leading comma from columns list
SELECT @AllCols = SUBSTRING(@AllCols, 3, 3000);
PRINT @AllCols
-- Finished with the source table spec
DROP TABLE #Tspec;
END;
STEP_040: BEGIN -- Create and execute the statement used to fill #TT2 with the paginated data from the source table
-- The first two cols are the page number and row number within the page
-- The sequence is arbitrary but could use a key list for the order by clause
SELECT @STMT =
'INSERT #TT2
SELECT FLOOR( CAST( SSEQ as float) /' + CAST(@PageSize as nvarchar(10)) + ' ) + 1 PageNumber, (SSEQ) % ' + CAST(@PageSize as nvarchar(10)) + ' + 1 LineNumber, * FROM
(
SELECT ROW_NUMBER() OVER ( ORDER BY ' + @OrderBy + ' ) - 1 AS SSEQ, * FROM ' + @TableName + '
)
A; ' ;
EXEC sp_ExecuteSQL @STMT;
-- *** Test only to show that the table contains the data
--SELECT * FROM #TT2;
--SELECT @STMT = 'SELECT NULL AS EXECSELECT, ' + @AllCols + ' FROM #TT2;' ;
--EXEC sp_ExecuteSQL @STMT;
-- ***
END;
STEP_050: BEGIN -- Loop through paginated data, one page at a time.
-- Variables to control the paginated loop
DECLARE @PageMAX int;
SELECT @PageMAX = MAX(PageNumber) FROM #TT2;
PRINT 'Generated ' + CAST( @PageMAX AS varchar(10) ) + ' pages from table';
DECLARE @Page int = 0;
WHILE @Page < @PageMax BEGIN
SELECT @Page = @Page + 1;
-- Create and execute the statement to get one page of data - this could be any statement to process data page by page
SELECT @STMT = 'SELECT ' + @AllCols + ' FROM #TT2 WHERE PageNumber = ' + CAST(@Page AS Varchar(10 )) + ' ORDER BY LineNumber '
-- Execute the statement.
PRINT @STMT -- For testing
--EXEC sp_EXECUTESQL @STMT;
END;
-- Finished with Paginated data
DROP TABLE #TT2;
END;
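To turn the final loop into the actual copy, the page SELECT can be wrapped in an INSERT built from a destination-table parameter; a sketch of the step the answer describes, where @DestTable is a hypothetical added parameter:
DECLARE @DestTable nvarchar(200) = 'DestDb.dbo.WRD_WordHits_Copy'; -- hypothetical
SELECT @STMT = 'INSERT INTO ' + @DestTable + ' (' + @AllCols + ') '
             + 'SELECT ' + @AllCols + ' FROM #TT2 WHERE PageNumber = '
             + CAST(@Page AS varchar(10)) + ' ORDER BY LineNumber';
EXEC sp_ExecuteSQL @STMT;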
The solution I came up with:
First I read the column names from the database and store them locally, then use them again when building up the insert/select query, selecting only those columns from the view (which are all of them apart from RowID).
commandText = $"SELECT column_name
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = N'{table}'"
columnNames = "executionfunction with commandText"
columnNamesCount = columnNames.Rows.Count
Dim counter As Int16 = 0
commandText = String.Empty
commandText = $"INSERT INTO {destination} SELECT "
For Each row As DataRow In columnNames.Rows
If counter = columnNamesCount - 1 Then
commandText += $"B.{row("column_name")} "
Else
commandText += $"B.{row("column_name")}, "
End If
counter = counter + 1
Next
commandText += $"FROM
(Select A.* FROM (Select Row_NUMBER()
OVER(order by %%physloc%%) AS RowID, {table}.*
FROM {table} where {filter}) A
WHERE A.RowID between ({recordsPerStatement} * ({iteration}-1)) + 1
AND ({recordsPerStatement} * {iteration})) B"
EDIT: To remove the %%physloc%% clause, an OFFSET ... FETCH NEXT part has been built in. New approach (note the trailing space before the ORDER BY is appended):
commandText += $"INSERT INTO {destination} SELECT * FROM {table} WHERE {filter} "
For i As Int16 = 1 To columnNamesCount
If i = 1 Then
commandText += $"ORDER BY {columnNames.Rows(i - 1)("column_name")} ASC"
Else
commandText += $"{columnNames.Rows(i - 1)("column_name")} ASC"
End If
If i <> columnNamesCount Then
commandText += ", "
End If
Next
commandText += $" OFFSET ({recordsPerStatement} * ({iteration} -1)) ROWS FETCH Next {recordsPerStatement} ROWS ONLY"

SQL operations on all columns of a table

I have many (>48) columns in one table; each column corresponds to a month and contains the sales for that month. I need to create another table in which each column equals the sum of the previous 12 columns, i.e. the "rolling year" figure, so that e.g. July 2010 has everything from August 2009 through July 2010 added, August 2010 has everything from September 2009 through August 2010, and so on.
I could write this as:
select
[201007TOTAL] = [200908] + [200909] + ... + [201007]
,[201008TOTAL] = [200909] + ... + [201008]
...
...
into #newtable
from #mytable
I was wondering if there was a smarter way of doing this, either creating these as new columns in the table in one step, or perhaps pivoting the data, doing something to it, and re-pivoting?
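One version of the pivot/re-pivot idea from the question can be sketched directly, assuming the TEST table defined in the answer below and SQL Server 2012+ for the windowed SUM; this is illustrative only, not the method the answer takes:
-- unpivot the month columns to rows, then take a 12-month rolling sum per ID
WITH unpvt AS (
    SELECT ID, MonthCol, Sales
    FROM TEST
    UNPIVOT (Sales FOR MonthCol IN
        ([201401],[201402],[201403],[201404],[201405],[201406],[201407])) AS u
)
SELECT ID, MonthCol,
       SUM(Sales) OVER (PARTITION BY ID ORDER BY MonthCol
                        ROWS BETWEEN 11 PRECEDING AND CURRENT ROW) AS RollingYear
FROM unpvt;
-- the result could be re-pivoted into one column per month if the wide shape is required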
Although everybody is right that a different database set-up would be best, I thought this was a nice problem to play around with. Here's my setup:
CREATE TABLE TEST
(
ID INT
, [201401] decimal(19, 5)
, [201402] decimal(19, 5)
, [201403] decimal(19, 5)
, [201404] decimal(19, 5)
, [201405] decimal(19, 5)
, [201406] decimal(19, 5)
, [201407] decimal(19, 5)
)
INSERT INTO TEST
VALUES (1, 1, 2, 3, 4, 5, 6, 7)
Just one record with data is enough to test.
This works on the assumption that the columns to be summed are consecutive in the table, and that the first of them is the first column with datatype decimal. In other words, the table 'starts' (for want of a better word) with a PK, which is usually INT, possibly followed by descriptions or whatever, followed by the monthly columns to be summed:
DECLARE @OP_START INT
, @OP_END INT
, @LOOP INT
, @DATE VARCHAR(255)
, @SQL VARCHAR(MAX) = 'SELECT '
, @COLNAME VARCHAR(MAX)
-- Set Date to max date (=columnname)
SET @DATE = '201406'
-- Find Last attribute
SET @OP_END = (
SELECT MAX(ORDINAL_POSITION)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'TEST'
AND COLUMN_NAME <= @DATE
)
-- Find First attribute
SET @OP_START = (
SELECT MIN(ORDINAL_POSITION)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'TEST'
AND DATA_TYPE = 'DECIMAL'
)
SET @LOOP = @OP_START
-- Loop through the columns
WHILE @LOOP <= @OP_END
BEGIN
SET @COLNAME = (
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'TEST'
AND ORDINAL_POSITION = @LOOP
)
-- Build SQL with found ColumnName
SET @SQL = @SQL + '[' + @COLNAME + ']' + '+'
SET @LOOP = @LOOP + 1
END
-- Remove last "+"
SET @SQL = SUBSTRING(@SQL, 1, LEN(@SQL) - 1)
-- Complete SQL
SET @SQL = @SQL + ' FROM TEST'
-- Execute
EXEC(@SQL)
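With the sample table above and @DATE = '201406', the loop builds and executes the following statement, which returns 21 for the sample row:
SELECT [201401]+[201402]+[201403]+[201404]+[201405]+[201406] FROM TEST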
This should keep adding up the monthly values, regardless how many you add. Just change the max date to what pleases you.
I'm NOT saying this is the best way to go, but it is a fun way :P

Date and Table name as parameter in Dynamic SQL

I'm trying to create a stored procedure that will allow me to pick a start date and an end date to get data from, and to have a variable table name to write this data to.
I would like to pass in the two dates and the table name as parameters to the stored procedure. Here is the part I'm stuck on. I took the code out of the stored procedure to try and get this working; this way I can see the lines the error is on.
DECLARE @MinDateWeek DATETIME
SELECT @MinDateWeek= DATEADD(WEEK, DATEDIFF(WEEK,0,GETDATE()), -7)
DECLARE @MaxDateWeek DATETIME
SELECT @MaxDateWeek= DATEADD(WEEK, DATEDIFF(WEEK,0,GETDATE()),0)
DECLARE @SQLCommand NVARCHAR(MAX)
SET @SQLCommand = ' --ERROR ON THIS LINE
-- Getting how much space is used in the present
DECLARE @Present Table (VMName NVARCHAR(50), UseSpace float(24))
INSERT INTO @Present
SELECT VMName
,SUM(CapacityGB-FreeSpaceGB)
FROM VMWareVMGuestDisk
GROUP BY VMName;
-- Getting how much space was used at the reference date
DECLARE @Past Table (VMName NVARCHAR(50), UseSpace float(24))
INSERT INTO @Past
SELECT VMName
,SUM(CapacityGB-FreeSpaceGB)
FROM VMWareVMGuestDisk
WHERE Cast([Date] AS VARCHAR(20))= '''+CAST(@MinDateWeek AS varchar(20))+'''
GROUP BY VMName;
--Inserting the average growth(GB/DAY) between the 2 dates in a Temporary Table
CREATE TABLE #TempWeek (VMName NVARCHAR(50)
, CapacityGB float(24)
, GrowthLastMonthGB float(24)
, FreeSpace FLOAT(24) )
INSERT INTO #TempWeek
SELECT DISTINCT V.VMName
,SUM(V.CapacityGB)
,SUM(((W1.UseSpace-W2.UseSpace)/(DATEDIFF(DAY,'''+CONVERT(VARCHAR(50),@MaxDateWeek)+''','''+CONVERT(VARCHAR(50),@MaxDateWeek)+'''))))
,SUM(V.FreeSpaceGb)
FROM VMWareVMGuestDisk AS V
LEFT JOIN
@Present AS W1
ON
V.VMName=W1.VMName
LEFT JOIN
@Past AS W2
ON
W1.VMName=W2.VMName
WHERE (CONVERT(VARCHAR(15),Date))='''+CONVERT(VARCHAR(50),@MaxDateWeek)+'''
GROUP BY V.VMName;
-- Checking if there is already data in the table
TRUNCATE TABLE SAN_Growth_Weekly;
--insert data in permanent table
INSERT INTO SAN_Growth_Weekly (VMName,Datacenter,Cluster,Company,DaysLeft,Growth, Capacity,FreeSpace,ReportDate)
SELECT DISTINCT
G.VMName
,V.Datacenter
,V.Cluster
,S.Company
, DaysLeft =
CASE
WHEN G.GrowthLastMonthGB IS NULL
THEN ''NO DATA''
WHEN (G.GrowthLastMonthGB)<=0
THEN ''UNKNOWN''
WHEN (G.FreeSpace/G.GrowthLastMonthGB)>0 AND (G.FreeSpace/G.GrowthLastMonthGB) <=30
THEN ''Less then 30 Days''
WHEN (G.FreeSpace/G.GrowthLastMonthGB)>30 AND (G.FreeSpace/G.GrowthLastMonthGB)<=60 THEN ''Less then 60 Days''
WHEN (G.FreeSpace/G.GrowthLastMonthGB)>60 AND (G.FreeSpace/G.GrowthLastMonthGB)<=90
THEN ''Less then 90 Days''
WHEN (G.FreeSpace/G.GrowthLastMonthGB)>90 AND (G.FreeSpace/G.GrowthLastMonthGB)<=180 THEN ''Less then 180 Days''
WHEN (G.FreeSpace/G.GrowthLastMonthGB)>180 AND (G.FreeSpace/G.GrowthLastMonthGB)<=365 THEN ''Less then 1 Year''
ELSE ''Over 1 Year''
END
,G.GrowthLastMonthGB
,G.CapacityGB
,G.FreeSpace
,'''+@MaxDateWeek+'''
FROM #tempWeek AS G
RIGHT JOIN VMWareVMGuestDisk AS V
ON V.VMName = G.VMName COLLATE SQL_Latin1_General_CP1_CI_AS
LEFT JOIN Server_Reference AS S
ON G.VMName COLLATE SQL_Latin1_General_CP1_CI_AS=S.[Asset Name]
WHERE '''+CONVERT(VARCHAR(50),@MaxDateWeek)+'''= CONVERT(VARCHAR(50),V.Date);'
EXEC sp_executesql @SQLCommand;
The error I get is:
Conversion failed when converting date and/or time from character string.
Thanks for the help.
Are you forgetting to enclose your Group By in the dynamic sql?:
ALTER PROCEDURE SAN_DISK_GROWTH
@MaxDateWeek DATETIME ,
@MinDateWeek DATETIME
AS
BEGIN
DECLARE @SQLCommand NVARCHAR(MAX)
SELECT @SQLCommand = '
DECLARE @Present Table (VMName NVARCHAR(50), UseSpace float(24))
INSERT INTO @Present
SELECT VMName
,SUM(CapacityGB - FreeSpaceGB)
FROM VMWareVMGuestDisk
WHERE CONVERT(VARCHAR(15),Date) = '''
+ CONVERT(VARCHAR(50), @MaxDateWeek) + ''' GROUP BY VMName;'
END
Try specifying your date/time values as parameters to the dynamic SQL query. In other words, instead of converting the dates to a varchar, use parameters in the query:
WHERE @MaxDateWeek = V.Date;
And pass the parameters in the call to sp_executesql like so:
EXEC sp_executesql @SQLCommand,
N'@MinDateWeek datetime, @MaxDateWeek datetime',
@MinDateWeek = @MinDateWeek,
@MaxDateWeek = @MaxDateWeek
Then you won't have to convert your dates to strings.
Note that this does not work for dynamic table names or column names. Those need to be concatenated together as part of the dynamic SQL itself.
For example, if you had a table name variable like this:
declare @TableName sysname
set @TableName = 'MyTable'
And you wanted the dynamic SQL to retrieve data from that table, then you would need to build your FROM clause like this:
set @SQLCommand = N'SELECT ...
FROM ' + @TableName + N' WHERE...
This builds the name into the SQL like so:
'SELECT ... FROM MyTable WHERE...'
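Putting the two together, a minimal end-to-end sketch, assuming a hypothetical MyTable with a Date column (QUOTENAME guards the concatenated table name; the dates remain true parameters):
DECLARE @TableName sysname = 'MyTable'; -- hypothetical table
DECLARE @MinDateWeek datetime = DATEADD(WEEK, DATEDIFF(WEEK, 0, GETDATE()), -7);
DECLARE @MaxDateWeek datetime = DATEADD(WEEK, DATEDIFF(WEEK, 0, GETDATE()), 0);
DECLARE @SQLCommand nvarchar(max);
-- the table name is concatenated (and quoted); the dates stay as real parameters
SET @SQLCommand = N'SELECT * FROM ' + QUOTENAME(@TableName)
                + N' WHERE [Date] >= @MinDateWeek AND [Date] < @MaxDateWeek;';
EXEC sp_executesql @SQLCommand,
    N'@MinDateWeek datetime, @MaxDateWeek datetime',
    @MinDateWeek = @MinDateWeek,
    @MaxDateWeek = @MaxDateWeek;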

Export data from a non-normalized database

I need to export data from a non-normalized database, where there are multiple columns, to a new normalized database.
One example is the Products table, which has 30 boolean columns (ValidSize1, ValidSize2 etc...) and every record has a foreign key which points to a Sizes table where there are 30 columns with the size codes (XS, S, M etc...). In order to get the valid sizes for a product I have to scan both tables and take the value SizeCodeX from the Sizes table only if ValidSizeX on the product is true. Something like this:
Products Table
--------------
ProductCode <PK>
Description
SizesTableCode <FK>
ValidSize1
ValidSize2
[...]
ValidSize30
Sizes Table
-----------
SizesTableCode <PK>
SizeCode1
SizeCode2
[...]
SizeCode30
For now I am using a "template" query which I repeat 30 times:
SELECT
Products.Code,
Sizes.SizesTableCode, -- I need this code because different codes can have same size codes
Sizes.Size_1
FROM Products
INNER JOIN Sizes
ON Sizes.SizesTableCode = Products.SizesTableCode
WHERE Sizes.Size_1 IS NOT NULL
AND Products.ValidSize_1 = 1
I am just putting this query inside a loop and replacing the "_1" with the loop index:
DECLARE @counter int, @max int, @sql varchar(max);
SET @counter = 1;
SET @max = 30;
SET @sql = '';
WHILE (@counter <= @max)
BEGIN
SET @sql = @sql + ('[...]'); -- Here goes my query with dynamic indexes
IF @counter < @max
SET @sql = @sql + ' UNION ';
SET @counter = @counter + 1;
END
INSERT INTO DestDb.ProductsSizes EXEC(@sql); -- Insert statement
GO
Is there a better, cleaner or faster method to do this? I am using SQL Server and I can only use SQL/TSQL.
You can prepare a dynamic query using the SYS.Syscolumns table to get all the values in a row:
DECLARE @SqlStmt Varchar(MAX)
SET @SqlStmt=''
SELECT @SqlStmt = @SqlStmt + 'SELECT ''' + name + ''' AS [column] UNION ALL '
FROM SYS.Syscolumns WITH (READUNCOMMITTED)
WHERE Object_Id('dbo.Products')=Id AND ([Name] like 'SizeCode%' OR [Name] like 'ProductCode%')
IF REVERSE(@SqlStmt) LIKE REVERSE('UNION ALL ') + '%'
SET @SqlStmt = LEFT(@SqlStmt, LEN(@SqlStmt) - LEN('UNION ALL '))
print ( @SqlStmt )
Well, it seems that a "clean" (and much faster!) solution is the UNPIVOT function.
I found a very good example here:
http://pratchev.blogspot.it/2009/02/unpivoting-multiple-columns.html
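For the schema in this question, the multi-column unpivot technique from that article can be sketched with CROSS APPLY (VALUES ...), pairing each ValidSizeX flag with its SizeCodeX; a hedged sketch, to be extended to all 30 pairs:
SELECT p.ProductCode,
       s.SizesTableCode,
       v.SizeCode
FROM Products AS p
INNER JOIN Sizes AS s
    ON s.SizesTableCode = p.SizesTableCode
CROSS APPLY (VALUES
    (p.ValidSize1, s.SizeCode1),
    (p.ValidSize2, s.SizeCode2),
    -- ... continue through all 30 pairs ...
    (p.ValidSize30, s.SizeCode30)
) AS v (ValidSize, SizeCode)
WHERE v.ValidSize = 1
  AND v.SizeCode IS NOT NULL;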