SQL: Find dynamic column and update value [closed]

I have 2 tables.
##tblReports - temporary table
Books | GroupId | Category | 01-01-2014 | 02-01-2014 | ...
------+---------+----------+------------+-----------
100   | 1       | Limit    | 700        | 0
100   | 1       | Exp      | 70         | 0
100   | 1       | Balance  | 630        | 0
200   | 1       | Limit    | 0          | 900
200   | 1       | Exp      | 0          | 100
200   | 1       | Balance  | 0          | 800
tblLimits - user table
GroupId | 100BooksLimit | 200BooksLimit
--------+---------------+--------------
1       | 700           | 900
2       | 7             | 10
Desired output:
Books | GroupId | Category | 01-01-2014 | 02-01-2014
------+---------+----------+------------+-----------
100   | 1       | Limit    | 700        | 700
100   | 1       | Exp      | 70         | 0
100   | 1       | Balance  | 630        | 700
200   | 1       | Limit    | 900        | 900
200   | 1       | Exp      | 0          | 100
200   | 1       | Balance  | 900        | 800
The columns from the third onward in ##tblReports are dynamic (one per date). Can you help me update them?
Basically:
find all the columns with 0 values
search for the corresponding limit in tblLimits using GroupId and Books
get the limit and update the 'Limit' and 'Balance' rows
I tried to use dynamic queries but I can't make it work. Please help me.
*I know the design of the tables is not ideal and does not follow best practices, but this is a client requirement that I need to follow. This is a temporary table and a lot happens before it is built (multiple joins, pivot and unpivot).
The tables shown are simplified and do not exactly replicate the actual tables. Thanks!

-- Create temp tables and sample data
CREATE TABLE ##tblReports (books INT, groupid INT, category VARCHAR(25), [01-01-2014] INT, [02-01-2014] INT)
INSERT INTO ##tblReports VALUES (100, 1, 'Limit', 700, 0), (100, 1, 'Exp', 70, 0), (100, 1, 'Balance', 630, 0),
(200, 1, 'Limit', 0, 900), (200, 1, 'Exp', 0, 100), (200, 1, 'Balance', 0, 800)
CREATE TABLE ##tblLimits (groupid INT, [100bookslimit] INT, [200bookslimit] INT)
INSERT INTO ##tblLimits VALUES (1, 700, 900), (2, 7, 10)
-- Unpivot ##tblLimits in a CTE (see footnote for what this outputs)
DECLARE @sql NVARCHAR(MAX)
SELECT @sql = '
;WITH cte_unpivot AS
(
    SELECT groupid, val, CAST(REPLACE(col, ''bookslimit'', '''') AS INT) AS books
    FROM ##tblLimits
    UNPIVOT (val FOR col IN ('
-- Are the columns in ##tblLimits dynamic (other than groupid)? If so, get their
-- names from tempdb.sys.columns metadata.
SELECT @sql += QUOTENAME(name) + ','    -- [Column],
FROM tempdb.sys.columns
WHERE [object_id] = OBJECT_ID(N'tempdb..##tblLimits') AND name <> 'groupid'
-- Delete the trailing comma
SELECT @sql = SUBSTRING(@sql, 1, LEN(@sql) - 1)
SELECT @sql += ')) AS u
)
SELECT t.books, t.groupid, category,
'
-- Get the ##tblReports column names from tempdb.sys.columns metadata.
SELECT @sql += '
    CASE WHEN ' + QUOTENAME(name) + ' = 0 AND t.category IN (''Limit'', ''Balance'')
         THEN c.val ELSE t.' + QUOTENAME(name) + ' END AS ' + QUOTENAME(name) + ','
FROM tempdb.sys.columns
WHERE [object_id] = OBJECT_ID(N'tempdb..##tblReports') AND name NOT IN ('books', 'groupid', 'category')
-- Delete the trailing comma again
SELECT @sql = SUBSTRING(@sql, 1, LEN(@sql) - 1)
SELECT @sql += '
FROM ##tblReports t
LEFT JOIN cte_unpivot c ON t.books = c.books AND t.groupid = c.groupid
'
EXEC sp_executesql @sql
Returns:
books groupid category 01-01-2014 02-01-2014
100 1 Limit 700 700
100 1 Exp 70 0
100 1 Balance 630 700
200 1 Limit 900 900
200 1 Exp 0 100
200 1 Balance 900 800
The key is unpivoting ##tblLimits into this format so you can easily join it to ##tblReports:
groupid val books
1 700 100
1 900 200
2 7 100
2 10 200
Here's the SQL it generates (but formatted):
;WITH cte_unpivot
AS (SELECT groupid,
val,
Cast(Replace(col, 'bookslimit', '') AS INT) AS books
FROM ##tbllimits
UNPIVOT (val
FOR col IN ([100bookslimit],
[200bookslimit]))AS u)
SELECT t.books,
t.groupid,
category,
CASE
WHEN [01-01-2014] = 0
AND t.category IN ( 'Limit', 'Balance' ) THEN c.val
ELSE t.[01-01-2014]
END AS [01-01-2014],
CASE
WHEN [02-01-2014] = 0
AND t.category IN ( 'Limit', 'Balance' ) THEN c.val
ELSE t.[02-01-2014]
END AS [02-01-2014]
FROM ##tblreports t
LEFT JOIN cte_unpivot c
ON t.books = c.books
AND t.groupid = c.groupid
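If UNPIVOT feels awkward, the same intermediate shape can also be produced with CROSS APPLY (VALUES ...). This is only a sketch against the two limit columns in the sample data; if the columns of ##tblLimits are truly dynamic, the VALUES list would still have to be built from tempdb.sys.columns as above.
;WITH cte_unpivot AS
(
    -- map each limit column to a (books, val) pair
    SELECT l.groupid, v.books, v.val
    FROM ##tblLimits l
    CROSS APPLY (VALUES (100, l.[100bookslimit]),
                        (200, l.[200bookslimit])) v(books, val)
)
SELECT groupid, val, books
FROM cte_unpivot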

Related

How to convert SQL Server columns into rows?

I need help switching columns with rows in SQL. I need to turn this:
+------------+------------+-------------+------+
| Date | Production | Consumption | .... |
+------------+------------+-------------+------+
| 2017-01-01 | 100 | 1925 | |
| 2017-01-02 | 200 | 2005 | |
| 2017-01-03 | 150 | 1998 | |
| 2017-01-04 | 250 | 2200 | |
| 2017-01-05 | 30 | 130 | |
|... | | | |
+------------+------------+-------------+------+
into this:
+------------+------------+------------+------------+------------+-----+
| 01-01-2017 | 02-01-2017 | 03-01-2017 | 04-01-2017 | 05-01-2017 | ... |
+------------+------------+------------+------------+------------+-----+
| 100 | 200 | 150 | 250 | 30 | |
| 1925 | 2005 | 1998 | 2200 | 130 | |
+------------+------------+------------+------------+------------+-----+
Can someone help me? Should I use PIVOT?
EDIT: I've tried using some suggestions like PIVOT and UNPIVOT, but I could not achieve the expected result.
I've tried:
SELECT *
FROM (
SELECT date, Consumption
FROM Energy
where date < '2017-02-01'
) r
pivot (sum(Consumption) for date in ([2017-01-01],[2017-01-02],[2017-01-03]....)) c
order by 1
However, with the above query I only managed to get part of what I need:
+------------+------------+------------+------------+------------+-----+
| 01-01-2017 | 02-01-2017 | 03-01-2017 | 04-01-2017 | 05-01-2017 | ... |
+------------+------------+------------+------------+------------+-----+
| 100 | 200 | 150 | 250 | 30 | |
+------------+------------+------------+------------+------------+-----+
I need to have production and consumption, all in the same query, but I can only get one of them.
Is it possible to put more than one column in PIVOT? I've tried, but unsuccessfully.
You can achieve the desired output with dynamic SQL, but be aware of the performance and security problems (e.g. SQL injection) of this approach.
--create test table
CREATE TABLE dbo.Test (
[Date] date
, Production int
, Consumption int
)
--populate test table with values
insert into dbo.Test
values
('2017-01-01', 100, 1925)
,('2017-01-02', 200, 2005)
,('2017-01-03', 150, 1998)
,('2017-01-04', 250, 2200)
,('2017-01-05', 30, 130)
--table variable that will hold the names of all columns to pivot
declare @columnNames table (ColumnId int identity (1,1), ColumnName varchar(255))
--variable that will hold the total number of columns to pivot
declare @columnCount int
--variable that will be used to run through the columns to pivot
declare @counter int = 1
--this variable holds all column names
declare @headers nvarchar(max) = ''
--this variable contains the TSQL dynamically generated
declare @sql nvarchar(max) = ''
--populate list of columns to pivot
insert into @columnNames
select COLUMN_NAME
from INFORMATION_SCHEMA.COLUMNS
where
    TABLE_NAME = 'test'
    and TABLE_SCHEMA = 'dbo'
    and COLUMN_NAME <> 'date'
--populate column total counter
select @columnCount = count(*) from @columnNames
--populate list of headers of the result table
select @headers = @headers + ', ' + quotename([Date])
from dbo.Test
set @headers = right(@headers, len(@headers) - 2)
--run through the table containing the column names and generate the dynamic sql query
while @counter <= @columnCount
begin
    select @sql = @sql + ' select piv.* from (select [Date], '
        + quotename(ColumnName) + ' from dbo.Test) p pivot (max('
        + quotename(ColumnName) + ') for [Date] in ('
        + @headers + ') ) piv '
    from @columnNames where ColumnId = @counter
    --add union all except when we are concatenating the last pivot statement
    if @counter < @columnCount
        set @sql = @sql + ' union all'
    --increment counter
    set @counter = @counter + 1
end
--execute the dynamic query
exec (@sql)
Result:
2017-01-01 | 2017-01-02 | 2017-01-03 | 2017-01-04 | 2017-01-05
-----------+------------+------------+------------+-----------
100        | 200        | 150        | 250        | 30
1925       | 2005       | 1998       | 2200       | 130
Now if you add a column and some more rows:
--create test table
CREATE TABLE [dbo].[Test] (
[Date] date
, Production int
, Consumption int
, NewColumn int
)
--populate test table with values
insert into [dbo].[Test]
values
('2017-01-01', 100, 1925 , 10)
,('2017-01-02', 200, 2005, 20)
,('2017-01-03', 150, 1998, 30)
,('2017-01-04', 250, 2200, 40)
,('2017-01-05', 30, 130 , 50)
,('2017-01-06', 30, 130 , 60)
,('2017-01-07', 30, 130 , 70)
this is the result:
2017-01-01 | 2017-01-02 | 2017-01-03 | 2017-01-04 | 2017-01-05 | 2017-01-06 | 2017-01-07
-----------+------------+------------+------------+------------+------------+-----------
100        | 200        | 150        | 250        | 30         | 30         | 30
1925       | 2005       | 1998       | 2200       | 130        | 130        | 130
10         | 20         | 30         | 40         | 50         | 60         | 70
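An alternative that avoids the UNION ALL loop is to unpivot the measures first and then pivot by date, so Production and Consumption come back from a single PIVOT. This is just a sketch against the original three-column dbo.Test; for arbitrary dates the IN list would still have to be built dynamically, as above.
SELECT Measure, [2017-01-01], [2017-01-02], [2017-01-03], [2017-01-04], [2017-01-05]
FROM (
    -- unpivot the measure columns into (Date, Measure, Value) rows
    SELECT t.[Date], v.Measure, v.Value
    FROM dbo.Test t
    CROSS APPLY (VALUES ('Production', t.Production),
                        ('Consumption', t.Consumption)) v(Measure, Value)
) s
-- pivot the dates back out as columns, one result row per measure
PIVOT (MAX(Value) FOR [Date] IN ([2017-01-01], [2017-01-02], [2017-01-03], [2017-01-04], [2017-01-05])) p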

SQL: select max(value) when change columns in table

Sorry if the title is confusing. I have a problem when selecting from 2 tables. I have 2 tables like this.
Table 1: contains the column names of Table 2
Pkey | name1 | name2
-----+-------+------
1    | a     | b
2    | c     | b
Table 2: contains values
Pkey | a  | b | c
-----+----+---+---
1    | 10 | 2 | 7
2    | 12 | 4 | 8
3    | 8  | 2 | 4
4    | 7  | 1 | 3
I want to get the max(value) from table 2 and add it when selecting from table 1.
Example: the first row of table 1 contains the two values a and b. Referring to table 2, column a - column b is [8, 8, 6, 6]; the max value of this is 8, which is added to the result when querying table 1. The same goes for the next rows.
Desired table:
Pkey | name1 | name2 | Desired column
-----+-------+-------+---------------
1    | a     | b     | 8
2    | c     | b     | 5
I have more than 10,000 rows in table 1. I tried using a function, but dynamic SQL cannot be used inside a function.
One possible approach is to generate dynamic SQL:
-- Tables
CREATE TABLE #Table1 (
Pkey int,
name1 varchar(1),
name2 varchar(1)
)
INSERT INTO #Table1 (Pkey, name1, name2)
VALUES
(1, 'a', 'b'),
(2, 'c', 'b')
CREATE TABLE #Table2 (
Pkey int,
a int,
b int,
c int
)
INSERT INTO #Table2 (Pkey, a,b, c)
VALUES
(1, 10, 2, 7),
(2, 12, 4, 8),
(3, 8, 2, 4),
(4, 7, 1, 3)
-- Statement
DECLARE @stm nvarchar(max)
SET @stm = N''
SELECT @stm = @stm +
N'UNION ALL
SELECT
' + STR(Pkey) + ' AS Pkey,
''' + name1 + ''' AS name1,
''' + name2 + ''' AS name2, ' +
'PkeyMax = (SELECT MAX(' + name1 + ' - ' + name2 + ') FROM #Table2) '
FROM #Table1
SELECT @stm = STUFF(@stm, 1, 10, '')
-- Execution
EXEC (@stm)
Output:
Pkey name1 name2 PkeyMax
1 a b 8
2 c b 5
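For the sample data, the generated statement expands to roughly this (hand-expanded sketch; STR() actually pads Pkey with leading spaces, trimmed here for readability):
SELECT 1 AS Pkey, 'a' AS name1, 'b' AS name2,
       PkeyMax = (SELECT MAX(a - b) FROM #Table2)
UNION ALL
SELECT 2 AS Pkey, 'c' AS name1, 'b' AS name2,
       PkeyMax = (SELECT MAX(c - b) FROM #Table2)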
This gets the results you want. Since there are few fields, it makes sense to use CASE to get the ones you want (in order to avoid building dynamic SQL):
SELECT pkey,name1,name2,max(dif) FROM
(SELECT t1.pkey, t1.name1, t1.name2,
case when t1.name1 ='a' then t2.a
when t1.name1 ='b' then t2.b
when t1.name1 ='c' then t2.c
end
-
case when t1.name2 ='a' then t2.a
when t1.name2 ='b' then t2.b
when t1.name2 ='c' then t2.c
end dif
FROM Table1 t1 , Table2 t2) IQ
GROUP BY IQ.pkey, IQ.name1, IQ.name2
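Another non-dynamic option is to map the names to values with CROSS APPLY (VALUES ...), so the column list only appears once instead of in two CASE expressions. A sketch against the #Table1/#Table2 sample tables above:
SELECT t1.Pkey, t1.name1, t1.name2, MAX(v1.val - v2.val) AS PkeyMax
FROM #Table1 t1
CROSS JOIN #Table2 t2
-- turn the a/b/c columns of each #Table2 row into (col, val) pairs
CROSS APPLY (VALUES ('a', t2.a), ('b', t2.b), ('c', t2.c)) v1(col, val)
CROSS APPLY (VALUES ('a', t2.a), ('b', t2.b), ('c', t2.c)) v2(col, val)
WHERE v1.col = t1.name1
  AND v2.col = t1.name2
GROUP BY t1.Pkey, t1.name1, t1.name2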

SQL SERVER 2017 - How do I query to retrieve a group of data only if all of the data inside that group are marked as completed?

I have some observation tables like below. The observation data might be in individual form or grouped form which is determined by the observation category table.
cat table (which holds category data)
id | title | is_groupable
-------------------------------------------
1 | Cat 1 | 1
2 | Cat 2 | 1
3 | Cat 3 | 0
4 | Cat 4 | 0
5 | Cat 5 | 1
obs table (holds observation data; groupable data is indicated by is_groupable in the cat table, rows are grouped by the index column of the obs table, and the is_completed field indicates whether some action has been taken on the row)
id | cat_id | index | is_completed | created_at
------------------------------------------------------
1 | 3 | 100 | 0 | 2017-12-01
2 | 4 | 400 | 1 | 2017-12-02
// complete action taken group indicated by 1 in is_completed field
3 | 1 | 200 | 1 | 2017-12-1
4 | 1 | 200 | 1 | 2017-12-1
// not complete action taken group
5 | 2 | 300 | 0 | 2017-12-1
6 | 2 | 300 | 1 | 2017-12-1
7 | 2 | 300 | 0 | 2017-12-1
// complete action taken group
8 | 5 | 400 | 1 | 2017-12-1
9 | 5 | 400 | 1 | 2017-12-1
10 | 5 | 400 | 1 | 2017-12-1
For ease of understanding I have separated the sets of data as completed or not using the comments above in the obs table.
Now what I want to achieve is to retrieve the data in group form from the obs table. In the above case the groups are
{3,4}
{5,6,7}
{8,9,10}
I want to get sets {3,4} and {8,9,10} in my result, since every row in those groups is flagged is_completed: 1.
I don't need the {5,6,7} set because only row 6 is flagged as completed; no action has been taken on rows 5 and 7, so they are not completed.
What I have done till now is below.
(Let's ignore the individual case, because it is very easy and already done. For the group case I am able to retrieve the group data if I ignore whether action was taken, i.e. I can group the rows and retrieve the sets irrespective of whether action was taken or not.)
(SELECT
null AS id,
cat.is_groupable AS is_grouped,
cat.title,
cat.id AS category_id,
o.index,
o.date,
null AS created_at,
null AS is_action_taken,
(
-- individual observation
SELECT
oi.id AS "observation.id",
oi.category_id AS "observation.category_id",
oi.index AS "observation.index",
oi.created_at AS "observation.created_at",
-- action taken flag according to is_completed
CAST(
CASE
WHEN ((oi.is_completed) > 0) THEN 1
ELSE 0
END AS BIT
) AS "observation.is_action_taken",
-- we might do some sort of comparison here
CAST(
(
CASE
--
-- Check if total count == completed count
WHEN (
SELECT COUNT(obs.id)
FROM obs
WHERE obs.category_id = cat.id AND obs.index = o.index
) = (
SELECT COUNT(obs.id)
FROM obs
WHERE obs.category_id = cat.id AND oi.index = o.index
AND oi.is_action_taken = 1
) then 1
else 0
end
) as bit
) as all_completed_in_group
FROM observations oi
WHERE oi.category_id = cat.id
AND oi.index = o.index
FOR JSON PATH
) AS observations
FROM obs o
INNER JOIN cat ON cat.id = o.category_id
WHERE cat.is_groupable = 1
GROUP BY cat.id, cat.name, o.index, cat.is_groupable, o.created_at
)
Let's not dwell on whether this query executes successfully or not. I just want ideas: is there a better approach than this one, and is this approach correct?
Hopefully this is what you need. To check group completeness I used AND NOT EXISTS to exclude any group that has a row with is_completed = 0. The INNER JOIN is used to get the corresponding obs ids. The algorithm is put in a CTE (common table expression); then I use STUFF on the CTE to get the output.
DECLARE @cat TABLE (id int, title varchar(100), is_groupable bit)
INSERT INTO @cat VALUES
(1, 'Cat 1', 1), (2, 'Cat 2', 1), (3, 'Cat 3', 0), (4, 'Cat 4', 0), (5, 'Cat 5', 1)
DECLARE @obs TABLE (id int, cat_id int, [index] int, is_completed bit, created_at date)
INSERT INTO @obs VALUES
(1, 3, 100, 0, '2017-12-01'), (2, 4, 400, 1, '2017-12-02')
-- complete action taken group indicated by 1 in is_completed field
,(3, 1, 200, 1, '2017-12-01'), (4, 1, 200, 1, '2017-12-01')
-- not complete action taken group
,(5, 2, 300, 0, '2017-12-01'), (6, 2, 300, 1, '2017-12-01'), (7, 2, 300, 0, '2017-12-01')
-- complete action taken group
,(8, 5, 400, 1, '2017-12-01'), (9, 5, 400, 1, '2017-12-01'), (10, 5, 400, 1, '2017-12-01')
;
WITH cte AS
(
SELECT C.id [cat_id]
,O2.id [obs_id]
FROM @cat C INNER JOIN @obs O2 ON C.id = O2.cat_id
WHERE C.is_groupable = 1 --is a group
AND NOT EXISTS (SELECT *
FROM @obs O
WHERE O.cat_id = C.id
AND O.is_completed = '0'
--everything in the group is_completed
)
)
--Stuff will put everything on one row
SELECT DISTINCT
'{'
+ STUFF((SELECT ',' + CAST(C2.obs_id as varchar)
FROM cte C2
WHERE C2.cat_id = C1.cat_id
FOR XML PATH('')),1,1,'')
+ '}' AS returnvals
FROM cte C1
Produces output:
returnvals
{3,4}
{8,9,10}
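Since the question targets SQL Server 2017, STRING_AGG can replace the STUFF/FOR XML PATH trick. A sketch against the same @cat/@obs sample data:
SELECT '{' + STRING_AGG(CAST(o.id AS varchar(10)), ',') WITHIN GROUP (ORDER BY o.id) + '}' AS returnvals
FROM @obs o
INNER JOIN @cat c ON c.id = o.cat_id
WHERE c.is_groupable = 1                        -- is a group
GROUP BY o.cat_id, o.[index]
HAVING MIN(CAST(o.is_completed AS int)) = 1     -- everything in the group is_completed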
I would try this approach; the trick is checking the sum of completed rows against the total count for the group.
;WITH aux AS (
SELECT o.cat_id, COUNT(o.id) Tot, SUM(CONVERT(int, o.is_completed)) Compl, MIN(CONVERT(int, c.is_groupable)) is_groupable
FROM obs o INNER JOIN cat c ON o.cat_id = c.id
GROUP BY o.cat_id
)
, res AS (
SELECT o.*, a.is_groupable
FROM obs o INNER JOIN aux a ON o.cat_id = a.cat_id
WHERE (a.Tot = a.Compl AND a.is_groupable = 1) OR a.Tot = 1
)
SELECT CONVERT(nvarchar(10), id) id, CONVERT(nvarchar(10), cat_id) cat_id
INTO #res
FROM res
SELECT * FROM #res
SELECT m.cat_id, LEFT(m.results,Len(m.results)-1) AS DataGroup
FROM
(
SELECT DISTINCT r2.cat_id,
(
SELECT r1.id + ',' AS [text()]
FROM #res r1
WHERE r1.cat_id = r2.cat_id
ORDER BY r1.cat_id
FOR XML PATH ('')
) results
FROM #res r2
) m
ORDER BY DataGroup
DROP TABLE #res
The CONVERT calls are needed because of the bit data type (SUM and MIN cannot be applied to bit directly).
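A windowed variant of the same idea is also possible (just a sketch against the obs/cat tables, ignoring the individual case): flag complete groups with a MIN over the partition instead of a separate aggregate CTE.
SELECT id, cat_id
FROM (
    SELECT o.id, o.cat_id,
           -- 1 only if every row in the (cat_id, index) group is completed
           MIN(CONVERT(int, o.is_completed)) OVER (PARTITION BY o.cat_id, o.[index]) AS grp_min
    FROM obs o
    INNER JOIN cat c ON c.id = o.cat_id
    WHERE c.is_groupable = 1
) x
WHERE grp_min = 1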

SQL SELECT: concatenated column with line breaks and heading per group

I have the following SQL result from a SELECT query:
ID | category| value | desc
1 | A | 10 | text1
2 | A | 11 | text11
3 | B | 20 | text20
4 | B | 21 | text21
5 | C | 30 | text30
This result is stored in a temporary table named #temptab. This temporary table is then used in another SELECT to build up a new colum via string concatenation (don't ask me about the detailed rationale behind this. This is code I took from a colleague). Via FOR XML PATH() the output of this column is a list of the results and is then used to send mails to customers.
The second SELECT looks as follows:
SELECT t1.column,
t2.column,
(SELECT t.category + ' | ' + t.value + ' | ' + t.desc + CHAR(9) + CHAR(13) + CHAR(10)
FROM #temptab t
WHERE t.ID = ttab.ID
FOR XML PATH(''),TYPE).value('.','NVARCHAR(MAX)') AS colname
FROM table1 t1
...
INNER JOIN #temptab ttab on ttab.ID = someOtherTable.ID
...
Without wanting to go into too much detail, the column colname becomes populated with several entries (due to multiple matches) and hence a longer string is stored in this column (CHAR(9) is a tab and CHAR(13) + CHAR(10) is a line break). The result/content of colname looks like this (it is used to send mails to customers):
A | 10 | text1
A | 11 | text11
B | 20 | text20
B | 21 | text21
C | 30 | text30
Now I would like to know, if there is a way to more nicely format this output string. The best case would be to group the same categories together and add a heading and empty line between different categories:
*A*
A | 10 | text1
A | 11 | text11
*B*
B | 20 | text20
B | 21 | text21
*C*
C | 30 | text30
My question is: how do I have to modify the above query (especially the string-concatenation part) to achieve the above formatting? I was thinking about using a GROUP BY statement, but this obviously does not yield the desired result.
Edit: I use Microsoft SQL Server 2008 R2 (SP2) - 10.50.4270.0 (X64)
Declare @YourTable table (ID int,category varchar(50),value int, [desc] varchar(50))
Insert Into @YourTable values
(1,'A',10,'text1'),
(2,'A',11,'text11'),
(3,'B',20,'text20'),
(4,'B',21,'text21'),
(5,'C',30,'text30')
Declare @String varchar(max) = ''
Select @String = @String + Case when RowNr=1 Then Replicate(char(13)+char(10),2) +'*'+Category+'*' Else '' end
+ char(13)+char(10) + category + ' | ' + cast(value as varchar(25)) + ' | ' + [desc]
From (
Select *
,RowNr=Row_Number() over (Partition By Category Order By Value)
From @YourTable
) A Order By Category, Value
Select Substring(@String,5,Len(@String))
Returns
*A*
A | 10 | text1
A | 11 | text11
*B*
B | 20 | text20
B | 21 | text21
*C*
C | 30 | text30
This should return what you want
Declare @YourTable table (ID int,category varchar(50),value int, [desc] varchar(50))
Insert Into @YourTable values
(1,'A',10,'text1'),
(2,'A',11,'text11'),
(3,'B',20,'text20'),
(4,'B',21,'text21'),
(5,'C',30,'text30');
WITH Categories AS
(
SELECT category
,'**' + category + '**' AS CatCaption
,ROW_NUMBER() OVER(ORDER BY category) AS CatRank
FROM @YourTable
GROUP BY category
)
,Grouped AS
(
SELECT c.CatRank
,0 AS ValRank
,c.CatCaption AS category
,-1 AS ID
,'' AS Value
,'' AS [desc]
FROM Categories AS c
UNION ALL
SELECT c.CatRank
,ROW_NUMBER() OVER(PARTITION BY t.category ORDER BY t.Value)
,t.category
,t.ID
,CAST(t.value AS VARCHAR(100))
,t.[desc]
FROM @YourTable AS t
INNER JOIN Categories AS c ON t.category=c.category
)
SELECT category,Value,[desc]
FROM Grouped
ORDER BY CatRank,ValRank
The result
category Value desc
**A**
A 10 text1
A 11 text11
**B**
B 20 text20
B 21 text21
**C**
C 30 text30
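The same formatting can also be produced with the nested FOR XML PATH pattern the question already uses, which may be easier to splice into the existing colname subquery. A sketch against the @YourTable sample data above (in the real query the inner selects would also be correlated on ttab.ID):
SELECT (
    SELECT '*' + c.category + '*' + CHAR(13) + CHAR(10)
           -- concatenate the detail rows of this category
           + (SELECT t.category + ' | ' + CAST(t.value AS varchar(25)) + ' | ' + t.[desc] + CHAR(13) + CHAR(10)
              FROM @YourTable t
              WHERE t.category = c.category
              ORDER BY t.value
              FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)')
           + CHAR(13) + CHAR(10)   -- blank line between categories
    FROM (SELECT DISTINCT category FROM @YourTable) c
    ORDER BY c.category
    FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)') AS formatted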

Combining like data from columns into single row

I'm trying to combine partial contents of rows that are the result set of a query from SQL Server 2005 that reads a .CSV. Here's a simplified version of the data I have:
objectID | value1 | value2
_________________________________
12 | R | 100
12 | R | 101
12 | S | 220
13 | D | 88
14 | K | 151
14 | K | 152
What I'm trying to get to is a grouping of each objectID's values on the same row, so that there is one and only one row for each objectID. In graphical terms:
objectID | value1a | value2a | value 1b | value2b | value1c | value2c
______________________________________________________________________________
12 | R | 100 | R | 101 | S | 220
13 | D | 88 | | | |
14 | K | 151 | K | 152 | |
Blank cells are blank.
I've been hoping to do this in Excel or Access without VB, but CONCAT and other similar functions (and responses here and elsewhere suggesting similar approaches) don't work because each value needs to stay in its own cell (this data will eventually be merged with a Word form). If the answer's a SQL stored procedure or cursor, that's okay, though I'm not terribly efficient at writing them just yet.
Thanks to all.
First import the data into a temp table. The temp table will end up something like this sample data:
create table #tmp (objectID int, value1 char(1), value2 int)
insert #tmp select
12 ,'R', 100 union all select
12 ,'R', 101 union all select
12 ,'S', 220 union all select
13 ,'D', 88 union all select
14 ,'K', 151 union all select
14 ,'K', 152
Then, you can use this SQL batch - which can be put into a Stored Procedure if required.
declare @sql nvarchar(max)
select @sql = ISNULL(@sql+',','')
+ 'max(case when rn=' + cast(number as varchar) + ' then value1 end) value' + cast(number as varchar) + 'a,'
+ 'max(case when rn=' + cast(number as varchar) + ' then value2 end) value' + cast(number as varchar) + 'b'
from master..spt_values
where type='P' and number between 1 and (
select top 1 COUNT(*)
from #tmp
group by objectID
order by 1 desc)
set @sql = '
select objectID, ' + @sql + '
from (
select rn=ROW_NUMBER() over (partition by objectID order by value2), *
from #tmp) p
group by ObjectID'
exec (@sql)
Output
objectID value1a value1b value2a value2b value3a value3b
----------- ------- ----------- ------- ----------- ------- -----------
12 R 100 R 101 S 220
13 D 88 NULL NULL NULL NULL
14 K 151 K 152 NULL NULL
Warning: Null value is eliminated by an aggregate or other SET operation.
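For the sample data above, the dynamic batch expands to roughly this static query (hand-expanded here for illustration):
select objectID,
       max(case when rn=1 then value1 end) value1a, max(case when rn=1 then value2 end) value1b,
       max(case when rn=2 then value1 end) value2a, max(case when rn=2 then value2 end) value2b,
       max(case when rn=3 then value1 end) value3a, max(case when rn=3 then value2 end) value3b
from (
    select rn=ROW_NUMBER() over (partition by objectID order by value2), *
    from #tmp) p
group by ObjectID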