How to pivot in SQL

I am not sure if this would be called pivoting.
Data in my SQL 2005 table [CustromerRoles] is as such:
CustId  RoleId
2       4
2       3
3       4
4       1
4       2

[Roles] table:

RoleId  Role
1       Admin
2       Manager
3       Support
4       Assistant
I want to create a view such that SELECT * FROM [MYVIEW] gives me the data below:

CustId  Admin  Manager  Support  Assistant
2       0      0        1        1
3       0      0        0        1
4       1      1        0        0

The 1s and 0s will be bits so that I can display a grid with checkboxes in my UI.
So far I have no idea how to go about doing this.

Have you read the documentation on PIVOT in Microsoft SQL Server 2005?
SELECT CustId,
       [1] AS Admin,
       [2] AS Manager,
       [3] AS Support,
       [4] AS Assistant
FROM (SELECT c.CustId, c.RoleId, r.RoleId AS CountedRole
      FROM CustomerRoles c
      JOIN Roles r ON r.RoleId = c.RoleId) AS s
PIVOT (
    COUNT(CountedRole)
    FOR RoleId IN ([1], [2], [3], [4])
) AS pvt
ORDER BY CustId;
I haven't tested the above; it's just based on the documentation, but it may get you started.
There doesn't seem to be a way to generate the columns dynamically. You have to hard-code them.

Try this:
SELECT
    CustId,
    SUM(ISNULL(Admin, 0))     AS Admin,
    SUM(ISNULL(Manager, 0))   AS Manager,
    SUM(ISNULL(Support, 0))   AS Support,
    SUM(ISNULL(Assistant, 0)) AS Assistant
FROM
(
    SELECT cr.CustId, cr.RoleId, r.Role, 1 AS a
    FROM CustromerRoles cr
    INNER JOIN Roles r ON cr.RoleId = r.RoleId
) up
PIVOT (MAX(a) FOR Role IN (Admin, Manager, Support, Assistant)) AS pvt
GROUP BY CustId
Tested. It gives the same result you want.
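Since the question asks for actual bit columns for the checkbox grid, an untested variant of the same query simply wraps each SUM in a CAST to bit:

SELECT
    CustId,
    CAST(SUM(ISNULL(Admin, 0))     AS bit) AS Admin,
    CAST(SUM(ISNULL(Manager, 0))   AS bit) AS Manager,
    CAST(SUM(ISNULL(Support, 0))   AS bit) AS Support,
    CAST(SUM(ISNULL(Assistant, 0)) AS bit) AS Assistant
FROM
(
    -- same source as above: one row per customer/role pair
    SELECT cr.CustId, cr.RoleId, r.Role, 1 AS a
    FROM CustromerRoles cr
    INNER JOIN Roles r ON cr.RoleId = r.RoleId
) up
PIVOT (MAX(a) FOR Role IN (Admin, Manager, Support, Assistant)) AS pvt
GROUP BY CustId;

Putting CREATE VIEW [MYVIEW] AS in front of that SELECT gives the view the question asks for.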

PIVOT has the disadvantage that the columns must be known, because you have to provide the ids in the query. You can work around this by using dynamic SQL, i.e. generating the PIVOT query dynamically based on separate query results from the Roles table, in your case, then executing the result. This can easily be done in a stored procedure.
Example:
CREATE TABLE #CustomerRole ([CustId] int, [RoleId] int);
INSERT INTO #CustomerRole VALUES (2, 4);
INSERT INTO #CustomerRole VALUES (2, 3);
INSERT INTO #CustomerRole VALUES (3, 4);
INSERT INTO #CustomerRole VALUES (4, 1);
INSERT INTO #CustomerRole VALUES (4, 2);

CREATE TABLE #Role ([Id] int, [Role] varchar(20));
INSERT INTO #Role VALUES (1, 'Admin');
INSERT INTO #Role VALUES (2, 'Manager');
INSERT INTO #Role VALUES (3, 'Support');
INSERT INTO #Role VALUES (4, 'Assistant');

-- build the pivot column list from whatever is in the role table
DECLARE @RoleList nvarchar(MAX);
SELECT @RoleList = COALESCE(@RoleList + ',[' + [Role] + ']',
                            '[' + [Role] + ']')
FROM #Role;

DECLARE @SQL nvarchar(MAX);
SET @SQL = 'SELECT
    [CustId]' +
    ISNULL(', ' + @RoleList, '') + '
FROM (SELECT custrole.[CustId], r.[Id], r.[Role]
      FROM #CustomerRole custrole
      INNER JOIN #Role AS r
          ON r.[Id] = custrole.[RoleId]) AS src
PIVOT (COUNT([Id]) FOR [Role] IN
    (' + ISNULL(@RoleList, '[No role]') + ')) AS pvt;';

EXEC sp_executesql @SQL;

DROP TABLE #Role;
DROP TABLE #CustomerRole;
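To tie the dynamic approach back to the original tables, here is a rough stored-procedure sketch (untested; the procedure name is just a placeholder, it assumes the tables are named CustomerRoles and Roles as in the earlier answer, and it uses COUNT(RoleId) as the 0/1 flag):

CREATE PROCEDURE dbo.GetCustomerRoleMatrix
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @RoleList nvarchar(MAX), @SQL nvarchar(MAX);

    -- build the pivot column list from whatever is in Roles right now
    SELECT @RoleList = COALESCE(@RoleList + ',[' + [Role] + ']', '[' + [Role] + ']')
    FROM Roles;

    SET @SQL = N'SELECT CustId, ' + @RoleList + N'
    FROM (SELECT cr.CustId, cr.RoleId, r.Role
          FROM CustomerRoles cr
          INNER JOIN Roles r ON r.RoleId = cr.RoleId) AS src
    PIVOT (COUNT(RoleId) FOR Role IN (' + @RoleList + N')) AS pvt
    ORDER BY CustId;';

    EXEC sp_executesql @SQL;
END

EXEC dbo.GetCustomerRoleMatrix then returns one row per customer with a 0/1 column per role, picking up newly added roles automatically.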


'Merge Fields'-like SQL Server function

I'm trying to find a way to let the DBMS populate merge fields within a long text.
Create the structure:
CREATE TABLE [dbo].[store]
(
[id] [int] NOT NULL,
[text] [nvarchar](MAX) NOT NULL
)
CREATE TABLE [dbo].[statement]
(
[id] [int] NOT NULL,
[store_id] [int] NOT NULL
)
CREATE TABLE [dbo].[statement_merges]
(
[statement_id] [int] NOT NULL,
[merge_field] [nvarchar](30) NOT NULL,
[user_data] [nvarchar](MAX) NOT NULL
)
Now, create test values
INSERT INTO [store] (id, text)
VALUES (1, 'Waw, stackoverflow is an amazing library of lost people in the IT hell, and i have the feeling that $$PERC_SAT$$ of the users found a solution, personally I asked $$ASKED$$ questions.')
INSERT INTO [statement] (id, store_id)
VALUES (1, 1)
INSERT INTO [statement_merges] (statement_id, merge_field, user_data)
VALUES (1, '$$PERC_SAT$$', '85%')
INSERT INTO [statement_merges] (statement_id, merge_field, user_data)
VALUES (1, '$$ASKED$$', '12')
At the moment my app delivers the final statement by looping through the merges, replacing them in the stored text, and outputting:
Waw, stackoverflow is an amazing library of lost people in the IT
hell, and i have the feeling that 85% of the users found a solution,
personally I asked 12 questions.
I'm trying to find a way to be code-independent and serve the output in a single query: as you understood, select a statement in which the stored text has been populated with the user data. I hope I'm clear.
I looked at the TRANSLATE function, but it only does character replacement, so I see two choices:
I could try a recursive function, replacing merge fields one by one until none is left in the calculated text, but I have doubts about the performance of that approach;
or there is some magic way to do it, but I need your knowledge...
Consider that I want this because the real texts are very long and I don't want to store them more than once in my database. You can imagine a 3-page contract with only 12 parameters, like start date, invoiced amount, etc. Everything else can't be changed, for compliance.
Thank you for your time!
EDIT:
Thanks to Randy's help, this looks to do the trick:
WITH cte_replace_tokens AS (
SELECT replace(r.text, m.merge_field, m.user_data) as [final], m.merge_field, s.id, 1 AS i
FROM store r
INNER JOIN statement s ON s.store_id = r.id
INNER JOIN statement_merges m ON m.statement_id = s.id
WHERE m.statement_id = 1
UNION ALL
SELECT replace(r.final, m.merge_field, m.user_data) as [final], m.merge_field, r.id, r.i + 1 AS i
FROM cte_replace_tokens r
INNER JOIN statement_merges m ON m.statement_id = r.id
WHERE m.merge_field > r.merge_field
)
select TOP 1 final from cte_replace_tokens ORDER BY i DESC
I will check whether the performance is still good against a bigger database...
At least I can "populate" one statement; I still need to figure out how to extract a list as well.
Thanks again!
If a record is updated more than once by the same update, the last wins. None of the updates are affected by the others - no cumulative effect. It is possible to trick SQL using a local variable to get cumulative effects in some cases, but it's tricky and not recommended. (Order becomes important and is not reliable in an update.)
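For illustration only (as noted, the per-row evaluation order is undocumented, so don't rely on it), the local-variable trick looks roughly like this against the question's store and statement_merges tables:

DECLARE @final nvarchar(MAX);

-- seed with the stored text for store id 1
SELECT @final = [text] FROM store WHERE id = 1;

-- each row's REPLACE feeds on the value left by the previous row (undocumented behaviour)
SELECT @final = REPLACE(@final, merge_field, user_data)
FROM statement_merges
WHERE statement_id = 1;

SELECT @final;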
One alternate is recursion in a CTE. Generate a new record from the prior as each token is replaced until there are no tokens. Here is a working example that replaces 1 with A, 2 with B, etc. (I wonder if there is some tricky xml that can do this as well.)
if not object_id('tempdb..#Raw') is null drop table #Raw
CREATE TABLE #Raw(
    [test] [varchar](100) NOT NULL PRIMARY KEY CLUSTERED
)
if not object_id('tempdb..#Token') is null drop table #Token
CREATE TABLE #Token(
    [id] [int] NOT NULL PRIMARY KEY CLUSTERED,
    [token] [char](1) NOT NULL,
    [value] [char](1) NOT NULL
)
insert into #Raw values('123456'), ('1122334456')
insert into #Token values(1, '1', 'A'), (2, '2', 'B'), (3, '3', 'C'), (4, '4', 'D'), (5, '5', 'E'), (6, '6', 'F');

WITH cte_replace_tokens AS (
    SELECT r.test, replace(r.test, l.token, l.value) as [final], l.id
    FROM #Raw r
    CROSS JOIN #Token l
    WHERE l.id = 1
    UNION ALL
    SELECT r.test, replace(r.final, l.token, l.value) as [final], l.id
    FROM cte_replace_tokens r
    CROSS JOIN #Token l
    WHERE l.id = r.id + 1
)
select * from cte_replace_tokens where id = 6
It's not recommended to do such tasks inside the SQL engine, but if you want to do it there, you need to loop using a cursor in a function or stored procedure, like so:
DECLARE @merge_field nvarchar(30)
    , @user_data nvarchar(MAX)
    , @statementid INT = 1
    , @text varchar(MAX) = 'Waw, stackoverflow is an amazing library of lost people in the IT hell, and i have the feeling that $$PERC_SAT$$ of the users found a solution, personally I asked $$ASKED$$ questions.'

DECLARE merge_statements CURSOR FAST_FORWARD
FOR SELECT
      sm.merge_field
    , sm.user_data
    FROM dbo.statement_merges AS sm
    WHERE sm.statement_id = @statementid

OPEN merge_statements
FETCH NEXT FROM merge_statements
INTO @merge_field, @user_data

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @text = REPLACE(@text, @merge_field, @user_data)

    FETCH NEXT FROM merge_statements
    INTO @merge_field, @user_data
END

CLOSE merge_statements
DEALLOCATE merge_statements

SELECT @text
Here is a recursive solution.
SQL Fiddle
MS SQL Server 2017 Schema Setup:
CREATE TABLE [dbo].[store]
(
[id] [int] NOT NULL,
[text] [nvarchar](MAX) NOT NULL
)
CREATE TABLE [dbo].[statement]
(
[id] [int] NOT NULL,
[store_id] [int] NOT NULL
)
CREATE TABLE [dbo].[statement_merges]
(
[statement_id] [int] NOT NULL,
[merge_field] [nvarchar](30) NOT NULL,
[user_data] [nvarchar](MAX) NOT NULL
)
INSERT INTO store (id, text)
VALUES (1, '$$(*)$$, stackoverflow...$$PERC_SAT$$...$$ASKED$$ questions.')
INSERT INTO store (id, text)
VALUES (2, 'Use The #_#')
INSERT INTO statement (id, store_id) VALUES (1, 1)
INSERT INTO statement (id, store_id) VALUES (2, 2)
INSERT INTO statement_merges (statement_id, merge_field, user_data) VALUES (1, '$$PERC_SAT$$', '85%')
INSERT INTO statement_merges (statement_id, merge_field, user_data) VALUES (1, '$$ASKED$$', '12')
INSERT INTO statement_merges (statement_id, merge_field, user_data) VALUES (1, '$$(*)$$', 'Wow')
INSERT INTO statement_merges (statement_id, merge_field, user_data) VALUES (2, ' #_#', 'Flux!')
Query 1:
;WITH Normalized AS
(
SELECT
store_id=store.id,
store.text,
sm.merge_field,
sm.user_data,
RowNumber = ROW_NUMBER() OVER(PARTITION BY store.id,sm.statement_id ORDER BY merge_field),
statement_id = st.id
FROM
store store
INNER JOIN statement st ON st.store_id = store.id
INNER JOIN statement_merges sm ON sm.statement_id = st.id
)
, Recurse AS
(
SELECT
store_id, statement_id, old_text = text, merge_field,user_data, RowNumber,
Iteration=1,
new_text = REPLACE(text, merge_field, user_data)
FROM
Normalized
WHERE
RowNumber=1
UNION ALL
SELECT
n.store_id, n.statement_id, r.old_text, n.merge_field, n.user_data,
RowNumber=r.RowNumber+1,
Iteration=Iteration+1,
new_text = REPLACE(r.new_text, n.merge_field, n.user_data)
FROM
Normalized n
INNER JOIN Recurse r ON r.RowNumber = n.RowNumber AND r.statement_id = n.statement_id
)
,ReverseOnIteration AS
(
SELECT *,
ReverseIteration = ROW_NUMBER() OVER(PARTITION BY statement_id ORDER BY Iteration DESC)
FROM
Recurse
)
SELECT
store_id, statement_id, new_text, old_text
FROM
ReverseOnIteration
WHERE
ReverseIteration=1
Results:
| store_id | statement_id | new_text | old_text |
|----------|--------------|------------------------------------------|--------------------------------------------------------------|
| 1 | 1 | Wow, stackoverflow...85%...12 questions. | $$(*)$$, stackoverflow...$$PERC_SAT$$...$$ASKED$$ questions. |
| 2 | 2 | Use TheFlux! | Use The #_# |
With the help of Randy, I think I've achieved what I wanted to do!
Given that my real case is a contract, in which there are several statements that may be:
free text,
stored text without any merges,
or stored text with one or several merges,
this CTE does the job!
WITH cte_replace_tokens AS (
-- the anchor query doesn't join on merges, and joins store with a LEFT JOIN, because the statement can be free text
SELECT COALESCE(r.text, s.part_text) AS [final], CAST('' AS NVARCHAR) AS merge_field, s.id, 1 AS i, s.contract_id
FROM statement s
LEFT JOIN store r ON s.store_id = r.id
UNION ALL
-- we loop until the last merge field; the output keeps an iteration counter so we can pick the last record (all fields replaced)
SELECT replace(r.final, m.merge_field, m.user_data) as [final], m.merge_field, r.id, r.i + 1 AS i, r.contract_id
FROM cte_replace_tokens r
INNER JOIN statement_merges m ON m.statement_id = r.id
WHERE m.merge_field > r.merge_field AND r.final LIKE '%' + m.merge_field + '%'
-- avoid lost replacements by forcing only one merge_field per loop step
AND NOT EXISTS( SELECT mm.statement_id FROM statement_merges mm WHERE mm.statement_id = m.statement_id AND mm.merge_field > r.merge_field AND mm.merge_field < m.merge_field)
)
select s.id,
(select top 1 final from cte_replace_tokens t WHERE t.contract_id = s.contract_id AND t.id = s.id ORDER BY i DESC) as res
FROM statement s
where contract_id = 1
If the CTE solution with a cross join is too slow, an alternative is to dynamically build a scalar function that contains every REPLACE required by the token table. One scalar function call per record is then O(N). I get the same result as before.
The function is simple and unlikely to get too long, depending upon how big the token table becomes (there is a 256 MB batch limit). I've seen attempts to dynamically create queries to improve performance backfire by moving the problem to compile time; that should not be a problem here.
if not object_id('tempdb..#Raw') is null drop table #Raw
CREATE TABLE #Raw(
    [test] [varchar](100) NOT NULL PRIMARY KEY CLUSTERED
)
if not object_id('tempdb..#Token') is null drop table #Token
CREATE TABLE #Token(
    [id] [int] NOT NULL PRIMARY KEY CLUSTERED,
    [token] [char](1) NOT NULL,
    [value] [char](1) NOT NULL
)
insert into #Raw values('123456'), ('1122334456')
insert into #Token values(1, '1', 'A'), (2, '2', 'B'), (3, '3', 'C'), (4, '4', 'D'), (5, '5', 'E'), (6, '6', 'F');

DECLARE @sql varchar(max) = 'CREATE FUNCTION dbo.fn_ReplaceTokens(@raw varchar(8000)) RETURNS varchar(8000) AS BEGIN RETURN ';
-- nest one REPLACE per token around the @raw parameter
WITH cte_replace_statement AS (
    SELECT a.id, CAST('replace(@raw,''' + a.token + ''',''' + a.value + ''')' as varchar(max)) as [statement]
    FROM #Token a
    WHERE a.id = 1
    UNION ALL
    SELECT n.id, CAST(replace(l.[statement], '@raw', 'replace(@raw,''' + n.token + ''',''' + n.value + ''')') as varchar(max)) as [statement]
    FROM #Token n
    INNER JOIN cte_replace_statement l
        ON n.id = l.id + 1
)
select @sql += [statement] + ' END' from cte_replace_statement where id = 6
print @sql
if not object_id('dbo.fn_ReplaceTokens') is null drop function dbo.fn_ReplaceTokens
execute (@sql)
SELECT r.test, dbo.fn_ReplaceTokens(r.test) as [final] FROM #Raw r

T-SQL loop through two tables

I have been unable to find a working solution for the dilemma below.
I am using SQL Server 2016 and have the two tables shown below in a database.
Users table:
Id Name
----------
1 Lisa
2 Paul
3 John
4 Mike
5 Tom
Role table:
Id UserId Role
------------------------
1 3 Manager
2 2,4,5 Developer
3 1 Designer
I am looking for T-SQL code that loops through the Role table, extracts the UserIds, and retrieves the associated name for each Id from the Users table.
So the looped result would look like this:
John
Paul,Mike,Tom
Lisa
FOR SQL SERVER 2017
SELECT R1.Id,STRING_AGG(U.Name , ','),R1.Role
FROM Users U
INNER JOIN
(
SELECT R.Id,S.value AS UserId,R.Role
FROM Role R
CROSS APPLY STRING_SPLIT (UserID, ',') S
) R1
ON U.Id=R1.UserId
GROUP BY R1.ID,R1.Role
ORDER BY R1.ID;
OR
FOR SQL SERVER 2016
WITH CTE AS
(
SELECT R2.ID,U.Name,R2.UserId,R2.Role
FROM Users U
INNER JOIN
(
SELECT R.Id,S.value AS UserId,R.Role
FROM Role R
CROSS APPLY STRING_SPLIT (UserID, ',') S
)R2
ON U.id=R2.UserId
)
SELECT DISTINCT R1.Id,
STUFF((
SELECT ',' + name
FROM CTE R3
WHERE R1.Role = R3.Role
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '') AS NAME
,R1.Role
FROM CTE AS R1;
OR
For Old versions
With CTE AS
(
SELECT r.id,
u.name,
r.Role
FROM Users u
INNER JOIN Role r
ON ',' + CAST(r.Userid AS NVARCHAR(20)) + ',' like '%,' + CAST(u.id AS NVARCHAR(20)) + ',%'
)
SELECT DISTINCT id,
STUFF((
SELECT ',' + name
FROM CTE md
WHERE T.Role = md.Role
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '') AS NAME,
Role
FROM
CTE AS T
ORDER BY id
Output
id NAME Role
1 John Manager
2 Paul,Mike,Tom Developer
3 Lisa Designer
Demo
http://sqlfiddle.com/#!18/04a2d/69
There are several problems going on here. The first issue is that you're storing your values in a delimited format. Next, because of that, the values are being stored as a varchar; this has problems as well, as I would guess that the column Id in the table Users is an int, meaning an implicit cast is needed, ruining any SARGability.
So, in my view, the solution is to fix the problem. Because you have a many-to-many relationship, you'll need an extra table. Let's design the tables as you have them right now, anyway:
CREATE TABLE Users (Id int, Name varchar(100));
CREATE TABLE Roles (Id int, UserId varchar(100), [Role] varchar(100));
INSERT INTO Users
VALUES (1,'Lisa'),
(2,'Paul'),
(3,'John'),
(4,'Mike'),
(5,'Tom');
INSERT INTO Roles
VALUES(1,'3','Manager'),
(2,'2,4,5','Developer'),
(3,'1','Designer');
Now, instead we need a new table:
CREATE TABLE UserRoles (Id int, UserID int, RoleID int);
Now, we can insert the proper rows into the database. As you're using SQL Server 2016, we can use STRING_SPLIT:
INSERT INTO UserRoles (UserID, RoleID)
SELECT SS.value, R.Id
FROM Roles R
CROSS APPLY STRING_SPLIT (UserID, ',') SS;
After this, if you want, you could drop your existing column using the following; however, I see no harm in leaving it for the moment:
ALTER TABLE Roles DROP COLUMN UserID;
Now, we can query the data correctly:
SELECT *
FROM Users U
JOIN UserRoles UR ON U.ID = UR.UserID
JOIN Roles R ON UR.RoleID = R.Id;
If you want to then delimit this data, you can use STUFF, but don't store it back; I've explained how to correct your data for a reason! :)
SELECT [Role],
STUFF((SELECT ',' + [Name]
FROM Users U
JOIN UserRoles UR ON U.Id = UR.UserID
WHERE UR.RoleID = R.Id
FOR XML PATH ('')),1,1,'') AS Users
FROM Roles R;
If you were using SQL Server 2017, you'd be able to use STRING_AGG instead of the STUFF/FOR XML construction.
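For reference, a sketch of what that would look like on 2017+ against the same Users/UserRoles/Roles tables:

SELECT R.[Role],
       STRING_AGG(U.[Name], ',') AS Users   -- one comma-separated list per role
FROM Roles R
JOIN UserRoles UR ON UR.RoleID = R.Id
JOIN Users U ON U.Id = UR.UserID
GROUP BY R.[Role];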
Clean up script:
DROP TABLE UserRoles;
DROP TABLE Users;
DROP TABLE Roles;
Try this solution:
declare @users table (Id int, Name varchar(100))
declare @role table (Id int, UserId varchar(100), [Role] varchar(100))

insert into @users values
(1, 'Lisa'),
(2, 'Paul'),
(3, 'John'),
(4, 'Mike'),
(5, 'Tom')

insert into @role values
(1, '3', 'Manager'),
(2, '2,4,5', 'Developer'),
(3, '1', 'Designer')

select * from @role [r]
join @users [u] on
    CHARINDEX(',' + cast([u].Id as varchar(3)) + ',', ',' + [r].UserId + ',', 1) > 0
I joined both tables based on the occurrence of Id in UserId. To make that possible and avoid matches such as 2 matching 12, I decided to match only IDs surrounded by commas. That's why I wrapped Id in commas in the query and also wrapped UserId in commas, so IDs at the beginning and end of UserId still match.
This query should give you a satisfying result, but to match your desired output exactly, you have to wrap it in a CTE and perform a GROUP BY with string concatenation:
;with cte as (
    select [r].Id, [r].Role, [u].Name from @role [r]
    join @users [u] on
        CHARINDEX(',' + cast([u].Id as varchar(3)) + ',', ',' + [r].UserId + ',', 1) > 0
)
select Id,
    (select Name + ',' from cte where Id = [c].Id for xml path('')) [Name],
    --I believe this should also work in your case (it needs SQL Server 2017+); if so, just pick one column from these two
    string_agg(Name, ',') [Name2],
    Role
from cte [c]
group by Id, Role

SQL query to display output horizontally

I need to display a query output in a horizontal manner. I have some example data
create table TestTable (id number, name varchar2(10))
insert into TestTable values (1, 'John')
insert into TestTable values (2, 'Mckensy')
insert into TestTable values (3, 'Valneech')
insert into TestTable values (4, 'Zeebra')
select * from TestTable
This gets the output in a vertical view.
ID Name
==========
1 John
2 Mckensy
3 Valneech
4 Zeebra
However, I need to display it horizontally.
ID 1 2 3 4
Name John Mckensy Valneech Zeebra
create table #TestTable (id int, name varchar(10))
insert into #TestTable values (1, 'John')
insert into #TestTable values (2, 'Mckensy')
insert into #TestTable values (3, 'Valneech')
insert into #TestTable values (4, 'Zeebra')
select *
from
(
select *
from #TestTable
) src
pivot
(
max(name)
for id in ([1], [2], [3],[4])
) piv;
output
1 2 3 4
John Mckensy Valneech Zeebra
You can also use a dynamic SQL query, something like below.
Query
declare @sql as varchar(max);
select @sql = 'select ' + char(39) + 'Name' + char(39) + ' Id,' + stuff((
    select
        ',max(case [Id] when ' + cast(id as varchar(10)) + ' then name end) ['
        + cast([Id] as varchar(10)) + ']'
    from TestTable
    for xml path('')
), 1, 1, '');
select @sql += ' from TestTable;';
exec(@sql);

Can a pivot table be used with an unknown number of columns?

If I have a team table with an unknown number of members, is there a way to make the pivot query dynamic?
create table #t (
team varchar (20), member varchar (20)
)
insert into #t values ('ERP', 'Jack')
insert into #t values ('ERP', 'John')
insert into #t values ('ERP', 'Mary')
insert into #t values ('ERP', 'Tim')
insert into #t values ('CRM', 'Robert')
insert into #t values ('CRM', 'Diana')
select * from #t
select team, [1] as teamMember1, /* 1st select */
[2] as teamMember2, [3] as teamMember3
from
(select team , member, row_number () /* 3rd select */
over (partition by team order by team) as rownum
from #t) a
pivot (max(member) for rownum in ([1], [2], [3])) as pvt
drop table #t
Why yes, yes there is. Here's a script I cooked up years ago for a similar problem that was ultimately solved by giving the user Excel and washing my hands of it. I apologize it's not configured with your example data, but hopefully it's easy to follow.
Hope that helps,
John
--------------START QUERY--------------
-- Example Table
CREATE TABLE #glbTestTable
(
ProviderID INT,
Total INT,
PaymentDate SMALLDATETIME
)
--So the dates insert properly
SET DATEFORMAT dmy
-- Populate Example Table
INSERT INTO #glbTestTable VALUES (232, 12200, '12/01/09')
INSERT INTO #glbTestTable VALUES (456, 10200, '12/01/09')
INSERT INTO #glbTestTable VALUES (563, 11899, '02/03/09')
INSERT INTO #glbTestTable VALUES (221, 5239, '13/04/09')
INSERT INTO #glbTestTable VALUES (987, 7899, '02/03/09')
INSERT INTO #glbTestTable VALUES (1, 1234, '02/08/09')
INSERT INTO #glbTestTable VALUES (2, 4321, '02/07/09')
INSERT INTO #glbTestTable VALUES (3, 5555, '02/06/09')
-- Raw Output
SELECT *
FROM #glbTestTable
-- Build Query for Pivot --
DECLARE @pvtColumns VARCHAR(MAX)
SET @pvtColumns = ''
-- Grab up to the first 1023 "Columns" that we want to use in Pivot Table.
-- Tables can only have 1024 columns at a maximum
SELECT TOP 1023 @pvtColumns = @pvtColumns + '[' + CONVERT(VARCHAR, PaymentDate, 103) + '], '
FROM (SELECT DISTINCT PaymentDate FROM #glbTestTable) t_distFP
-- Create PivotTable Query
DECLARE @myQuery VARCHAR(MAX)
SET @myQuery = '
SELECT ProviderID, ' + LEFT(@pvtColumns, LEN(@pvtColumns) - 1) + '
FROM (SELECT ProviderID, PaymentDate, Total
      FROM #glbTestTable) AS SourceTable
PIVOT
(
    SUM(Total)
    FOR PaymentDate IN (' + LEFT(@pvtColumns, LEN(@pvtColumns) - 1) + ')
) AS PivotTable'
-- Run the Pivot Query
EXEC(@myQuery)
-- Cleanup
DROP TABLE #glbTestTable
---------------END QUERY---------------
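Adapted to the team/member table from the question, the same idea looks roughly like this (untested sketch; the column list is built from the row numbers of the largest team, and the order of the generated columns isn't guaranteed):

DECLARE @cols VARCHAR(MAX), @sql VARCHAR(MAX)
SET @cols = ''

-- one [n] column per member slot, up to the size of the largest team
SELECT @cols = @cols + '[' + CAST(rownum AS VARCHAR(10)) + '], '
FROM (SELECT DISTINCT ROW_NUMBER() OVER (PARTITION BY team ORDER BY team) AS rownum
      FROM #t) d

SET @sql = '
SELECT team, ' + LEFT(@cols, LEN(@cols) - 1) + '
FROM (SELECT team, member,
             ROW_NUMBER() OVER (PARTITION BY team ORDER BY team) AS rownum
      FROM #t) a
PIVOT (MAX(member) FOR rownum IN (' + LEFT(@cols, LEN(@cols) - 1) + ')) AS pvt'

EXEC(@sql)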

Using SQL to search for a set in a one-to-many relationship

Given the tables:
role: roleid, name
permission: permissionid, name
role_permission: roleid, permissionid
I have a set of permissions, and I want to see if there is an existing role that has these permissions, or if I need to make a new one. Note that I already know the permissionid, so really the permission table can be ignored - I just included it here for clarity.
Is this possible to do in a SQL query? I imagine it would have to be a dynamically-generated query.
If not, is there any better way than the brute force method of simply iterating over every role, and seeing if it has the exact permissions?
Note, I'm looking for a role that has the exact set of permissions - no more, no less.
You can select all roles whose permissions are a subset of the permissions you are looking for, then count each role's permissions and check that the count exactly equals the number of permissions you need:
select r.roleid
from role r
where not exists (select * from role_permissions rp where rp.roleid = r.roleid and rp.permissionid not in (1,2,3,4)) -- id of permissions
and (select count(*) from role_permissions rp where rp.roleid = r.roleid) = 4 -- number of permissions
Having made a hash of my first answer to this question, here is a slightly left field alternative which works but does involve adding data to the database.
The trick is to add a column to the permission table that holds a unique power-of-two value for each row.
This is a fairly common pattern and will give precise results. The downside is that you have to code your way around hiding the numerical equivalents.
id int(10)
name varchar(45)
value int(10)
Then the contents will become:
Permission:
id  name    value
--  ------  -----
1   Read    8
2   Write   16
3   Update  32
4   Delete  64

Role:
id  name
--  ---------
1   Admin
2   DataAdmin
3   User

Role_Permission:
roleid  permissionid
------  ------------
1       1
1       2
1       3
1       4
2       1
2       3
2       4
Then each role's combination of permissions gives a unique sum:
SELECT x.roleid, sum(value) FROM role_permission x
inner join permission p
on x.permissionid = p.id
Group by x.roleid
Giving:

roleid  sum(value)
------  ----------
1       120    (permissions 1+2+3+4: 8+16+32+64 = 120)
2       104    (permissions 1+3+4: 8+32+64 = 104)
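From there, finding a role that carries exactly the permissions you need is one comparison of the sum against the target value (a sketch based on the values above, where permissions 1-4 sum to 120):

SELECT x.roleid
FROM role_permission x
INNER JOIN permission p ON x.permissionid = p.id
GROUP BY x.roleid
HAVING SUM(p.value) = 120   -- 8 + 16 + 32 + 64, the summed value of the exact permission set wanted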
Now where did I leave that corkscrew...
This is an old sql trick (works in Oracle, at least):
SELECT roleid FROM role_permission t1
WHERE NOT EXISTS (
(SELECT permissionid FROM role_permission t2 WHERE t2.roleid = t1.roleid
MINUS
SELECT permissionid FROM role_permission WHERE roleid = 'Admin')
UNION
(SELECT permissionid FROM role_permission t2 WHERE roleid = 'Admin'
MINUS
SELECT permissionid FROM role_permission t2 WHERE t2.roleid = t1.roleid)
)
Also not validated. Red wine always sounds good.
You basically need to check if there is a role that has the exact number of distinct permissions as you are checking for.
I have checked this stored procedure on SQL Server 2005 and it returns only those role ids that have an exact match of permission ids to those in the passed in list of comma separated permission ids -
CREATE PROC get_roles_for_permissions (@list nvarchar(max)) -- @list is a comma separated list of your permission ids
AS
SET NOCOUNT ON
BEGIN
    DECLARE @index INT, @start_index INT, @id INT
    DECLARE @permission_ids TABLE (id INT)

    SELECT @index = 1
    SELECT @start_index = 1

    -- parse the comma separated list into the @permission_ids table variable
    WHILE @index <= DATALENGTH(@list)
    BEGIN
        IF SUBSTRING(@list, @index, 1) = ','
        BEGIN
            SELECT @id = CAST(SUBSTRING(@list, @start_index, @index - @start_index) AS INT)
            INSERT INTO @permission_ids ([id]) VALUES (@id)
            SELECT @start_index = @index + 1
        END
        SELECT @index = @index + 1
    END
    SELECT @id = CAST(SUBSTRING(@list, @start_index, @index - @start_index) AS INT)
    INSERT INTO @permission_ids ([id]) VALUES (@id)

    -- return roles whose permission set matches the list exactly
    SELECT
        r.roleid
    FROM
        role r
        INNER JOIN role_permission rp
            ON r.roleid = rp.roleid
        INNER JOIN @permission_ids ids
            ON rp.permissionid = ids.id
    GROUP BY r.roleid
    -- exact match: every requested permission is present and the role has no extra ones
    HAVING COUNT(DISTINCT ids.id) = (SELECT COUNT(*) FROM @permission_ids)
       AND (SELECT COUNT(*)
            FROM role_permission
            WHERE roleid = r.roleid) = (SELECT COUNT(*) FROM @permission_ids)
END
END
Example Data
CREATE TABLE [dbo].[role](
[roleid] [int] IDENTITY(1,1) NOT NULL,
[name] [nvarchar](50)
)
CREATE TABLE [dbo].[permission](
[permissionid] [int] IDENTITY(1,1) NOT NULL,
[name] [nvarchar](50)
)
CREATE TABLE [dbo].[role_permission](
[roleid] [int],
[permissionid] [int]
)
INSERT INTO role(name) VALUES ('Role1')
INSERT INTO role(name) VALUES ('Role2')
INSERT INTO role(name) VALUES ('Role3')
INSERT INTO role(name) VALUES ('Role4')
INSERT INTO permission(name) VALUES ('Permission1')
INSERT INTO permission(name) VALUES ('Permission2')
INSERT INTO permission(name) VALUES ('Permission3')
INSERT INTO permission(name) VALUES ('Permission4')
INSERT INTO role_permission(roleid, permissionid) VALUES (1, 1)
INSERT INTO role_permission(roleid, permissionid) VALUES (1, 2)
INSERT INTO role_permission(roleid, permissionid) VALUES (1, 3)
INSERT INTO role_permission(roleid, permissionid) VALUES (1, 4)
INSERT INTO role_permission(roleid, permissionid) VALUES (2, 2)
INSERT INTO role_permission(roleid, permissionid) VALUES (2, 3)
INSERT INTO role_permission(roleid, permissionid) VALUES (2, 4)
INSERT INTO role_permission(roleid, permissionid) VALUES (3, 3)
INSERT INTO role_permission(roleid, permissionid) VALUES (3, 4)
INSERT INTO role_permission(roleid, permissionid) VALUES (4, 4)
EXEC get_roles_for_permissions '3,4' -- RETURNS roleid 3
Maybe use a subquery along the lines of...
SELECT *
FROM role r
WHERE r.roleid IN (SELECT x.roleid
                   FROM role_permission x
                   WHERE x.permissionid IN (1, 2, 3, 4));
Sorry, haven't validated this, but having just spent an hour debugging PHP code for another question I feel the need for a glass of red wine.