How do I produce permutations in the same column using SQL? - sql

I have a table that looks like the following:
LETTERS
--------
A
B
C
D
E
F
G
H
I'd like to create a view that lists all the 3-letter combinations of these letters without repetition, in the following way, i.e. assigning a number to each combination.
ViewNew
-------
1 A
1 B
1 C
2 A
2 B
2 D
3 A
3 B
3 E
and so on.
Is the above possible? Any help will be much appreciated.

Check this. Using joins and UNPIVOT we can find all permutations of the letters.
select ID, ViewNew from
(
select row_number() over(order by (select 1)) AS ID,
C2.LETTERS as [1], C1.LETTERS AS [2], c3.LETTERS as [3] from #tableName C1, #tableName c2, #tableName c3
where C1.LETTERS != c2.LETTERS and c2.LETTERS != c3.LETTERS and c1.LETTERS != c3.LETTERS
) a
UNPIVOT
(
ViewNew
FOR [LETTERS] IN ([1], [2], [3])
) as f
Output:

For permutations (order is important):
DECLARE @q as table([No] int, L1 char(1), L2 char(1), L3 char(1))
INSERT INTO @q
SELECT
ROW_NUMBER() OVER (ORDER BY L1.Letter, L2.Letter, L3.Letter),
L1.Letter,
L2.Letter,
L3.Letter
FROM
Letters L1 CROSS JOIN
Letters AS L2 CROSS JOIN
Letters AS L3
WHERE
(L1.Letter <> L2.Letter) AND
(L2.Letter <> L3.Letter) AND
(L1.Letter <> L3.Letter)
SELECT [No], L1 AS Letter FROM @q
UNION
SELECT [No], L2 FROM @q
UNION
SELECT [No], L3 FROM @q
This can actually be done in a single query, at the cost of repeating the @q query.
I would move the @q query into a subview if a view is the goal.
Update:
Use UNPIVOT to make things even simpler, as pointed out in Bhosale's answer.
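Putting both ideas together, a minimal sketch of such a view (assuming the source table is named Letters with a single Letter column, as in the query above) could look like this:
CREATE VIEW dbo.ViewNew
AS
WITH q AS
(
    SELECT
        ROW_NUMBER() OVER (ORDER BY L1.Letter, L2.Letter, L3.Letter) AS [No],
        L1.Letter AS L1, L2.Letter AS L2, L3.Letter AS L3
    FROM Letters AS L1
    CROSS JOIN Letters AS L2
    CROSS JOIN Letters AS L3
    WHERE L1.Letter <> L2.Letter
      AND L2.Letter <> L3.Letter
      AND L1.Letter <> L3.Letter
)
-- UNPIVOT turns the three letter columns back into rows, one row per ([No], Letter) pair
SELECT u.[No], u.Letter
FROM q
UNPIVOT (Letter FOR Position IN (L1, L2, L3)) AS u;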

If you want to create a list of all unique combinations between two tables, you need only select from both tables at once and SQL Server will give you what you're after. This is called a CROSS JOIN.
declare @t1 table (letter char(1))
declare @t2 table (number int)
insert @t1 values ('A'), ('B'), ('C')
insert @t2 values (1), (2), (3), (4)
select t2.number, t1.letter from @t1 as t1, @t2 as t2
Results
number letter
--------------
1 A
1 B
1 C
2 A
2 B
2 C
3 A
3 B
3 C
4 A
4 B
4 C
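The comma between the two table sources above is just the old-style syntax for a cross join. Written with the explicit keyword, and with an ORDER BY added so the rows come back in the order shown, a sketch of the same query is:
select t2.number, t1.letter
from @t1 as t1
cross join @t2 as t2
order by t2.number, t1.letter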

Related

Unique combination of multiple columns, order doesn't matter

Suppose a table with 3 columns, where each row represents a unique combination of values:
a a a
a a b
a b a
b b a
b b c
c c a
...
However, what I want is for these to be treated as equal:
aab = baa = aba
cca = cac = acc
...
Finally, I want to get these values in CSV format, as one combination for each set of values, like the image that I attached.
Thanks for your help!
Below is the query to generate my problem, please take a look!
--=======================================
--populate test data
--=======================================
drop table if exists #t0
;
with
cte_tally as
(
select row_number() over (order by (select 1)) as n
from sys.all_columns
)
select
char(n) as alpha
into #t0
from
cte_tally
where
(n > 64 and n < 91) or
(n > 96 and n < 123);
drop table if exists #t1
select distinct upper(alpha) alpha into #t1 from #t0
drop table if exists #t2
select
a.alpha c1
, b.alpha c2
, c.alpha c3
, row_number()over(order by (select 1)) row_num
into #t2
from #t1 a
join #t1 b on 1=1
join #t1 c on 1=1
drop table if exists #t3
select *
into #t3
from (
select *
from #t2
) p
unpivot
(cvalue for c in (c1,c2,c3)
) unpvt
select
row_num
, c
, cvalue
from #t3
order by 1,2
--=======================================
--these three rows should be treated equally
--=======================================
select *
from #t2
where concat(c1,c2,c3) in ('ABA','AAB', 'BAA')
--=======================================
--what i've tried...
--row count is actually correct, but the problem is that it omits rows where there are any duplicate letters.
--=======================================
select
distinct
stuff((
select
distinct
'.' + cvalue
from #t3 a
where a.row_num = h.row_num
for xml path('')
),1,1,'') as comb
from #t3 h
As pointed out in the comments, you can unpivot the values, sort them in the right order and reaggregate them into a single row. Then you can group the original rows by those new values.
SELECT v.a, v.b, v.c
FROM #t2
CROSS APPLY (
SELECT a = MIN(val), b = MIN(CASE WHEN rn = 2 THEN val END), c = MAX(val)
FROM (
SELECT *, rn = ROW_NUMBER() OVER (ORDER BY val)
FROM (VALUES (c1),(c2),(c3) ) v3(val)
) v2
) v
GROUP BY v.a, v.b, v.c;
Really, what you should perhaps do is ensure that the values are in the correct order in the first place:
ALTER TABLE #t2
ADD CONSTRAINT t2_ValuesOrder
CHECK (c1 <= c2 AND c2 <= c3);
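If the table already holds unordered rows, one way to normalize them before adding that constraint (a sketch, reusing the ROW_NUMBER trick from the query above) is:
-- Rewrite each row so that c1 <= c2 <= c3; duplicates then become directly comparable
UPDATE t
SET c1 = v.a, c2 = v.b, c3 = v.c
FROM #t2 t
CROSS APPLY (
    SELECT a = MIN(val),
           b = MIN(CASE WHEN rn = 2 THEN val END),
           c = MAX(val)
    FROM (
        SELECT val, rn = ROW_NUMBER() OVER (ORDER BY val)
        FROM (VALUES (t.c1), (t.c2), (t.c3)) AS x(val)
    ) s
) v;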
I would be curious why, though I'm sure you have a reason. I might suggest a lookup table holding all associated keys to a "Mapping Table"; you can optimize some of this as you implement it. First, create one table to hold the "Next/New Key" (this is where the 1, 2, 3... come from). You get a new key after each batch of records you bulk insert into your "Mapping Table". The "Mapping Table" holds the combinations of the key values, one row for each combination, along with your "New Key". You should end up with a table looking something like:
A, B, C, 1
A, C, B, 1
B, A, C, 1
...
X, Y, Z, 2
X, Z, Y, 2
If you can update your source table to hold a column for your "Mapping Key" (the 1, 2, 3) then you just look up from the mapping table where (c1='a', c2='a', c3='b'); the order of this look-up shouldn't matter. One suggestion would be to create a composite unique key on c1, c2, c3 in your mapping table. Then, to get your records, look up the mapping key value from the mapping table and query for records matching it. Or, if you don't do a pre-lookup to get the mapping key, you should be able to do a self-join using the mapping key value...
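A minimal sketch of that layout; the table and column names here (NextKey, MappingTable, MappingKey) are illustrative, not taken from the question:
-- One table for the "Next/New Key" counter, one row per permutation in the mapping table
CREATE TABLE dbo.NextKey (NextMappingKey int NOT NULL);
CREATE TABLE dbo.MappingTable (
    c1 char(1) NOT NULL,
    c2 char(1) NOT NULL,
    c3 char(1) NOT NULL,
    MappingKey int NOT NULL,
    CONSTRAINT UQ_MappingTable UNIQUE (c1, c2, c3)  -- composite unique key, as suggested above
);
-- Look up the mapping key for one combination; because every permutation was stored
-- with the same MappingKey, the order of the values in the lookup does not matter.
SELECT MappingKey
FROM dbo.MappingTable
WHERE c1 = 'a' AND c2 = 'a' AND c3 = 'b';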
If you want them in a CSV format:
select distinct v.cs
from #t2 t2 cross apply
(select string_agg(c, ',') within group (order by c desc) as cs
from (values (t2.c1), (t2.c2), (t2.c3)
) v(c)
) v;
It seems to me that what you need is some form of masking*. Take this fiddle:
http://sqlfiddle.com/#!18/fc67f/8
where I have created a mapping table that contains all of the possible values and paired each one with an increasing power of 10. Doing a cross join on that map table, concatenating the values, adding the masks and grouping on the total will yield you all the unique combinations.
Here is the code from the fiddle:
CREATE TABLE maps (
val varchar(1),
num int
);
INSERT INTO maps (val, num) VALUES ('a', 1), ('b', 10), ('c', 100);
SELECT mask, max(vals) as val
FROM (
SELECT concat(m1.val, m2.val, m3.val) as vals,
m1.num + m2.num + m3.num as mask
FROM maps m1
CROSS JOIN maps m2
CROSS JOIN maps m3
) q GROUP BY mask
Using these powers of 10 ensures that the mask carries the count of each value, one decimal place per possible value in the resulting number, and then you can group on it to get the unique(ish) strings.
I don't know what your data looks like, and if you have more than 10 possible values then you will have to use some other base than 10, but the theory should still apply. I didn't write code to extract the columns from the value table into the mapping table, but I'm sure you can do that.
*actually, I think the term I was looking for was flag.
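The column extraction that the answer skips could be sketched roughly like this, assuming the #t2 table from the question and an empty maps table like the one above (each distinct value gets the next power of ten; note that with many distinct values the int will overflow, as the caveat above mentions):
INSERT INTO maps (val, num)
SELECT v.val,
       POWER(10, ROW_NUMBER() OVER (ORDER BY v.val) - 1) AS num
FROM (
    SELECT c1 AS val FROM #t2
    UNION
    SELECT c2 FROM #t2
    UNION
    SELECT c3 FROM #t2
) v;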

Subquery using multiple IN operator

I am trying to fetch all the ids that have values from list 1, and then, for those ids, fetch all the values from list 2 along with a count of those list 2 values.
DECLARE @Table1 AS TABLE (
id int,
l1 varchar(20)
);
INSERT INTO @Table1 VALUES
(1,'sun'),
(2,'shine'),
(3,'moon'),
(4,'light'),
(5,'earth'),
(6,'revolves'),
(7,'flow'),
(8,'fire'),
(9,'fighter'),
(10,'sun'),
(10,'shine'),
(11,'shine'),
(12,'moon'),
(1,'revolves'),
(10,'revolves'),
(2,'air'),
(3,'shine'),
(4,'fire'),
(5,'love'),
(6,'sun'),
(7,'rises');
/*
OPERATION 1
fetch all distinct ID's that has values from List 1
List1
sun
moon
earth
Initial OUTPUT1:
distinct_id list1_value
1 sun
3 moon
5 earth
10 sun
12 moon
6 sun
OPERATION2
fetch all the id, count_of_list2_values, list2_values
based on the id's that we received from OPERATION1
List2
shine
revolves
Expected Output:
id list1-value count_of_list2_values, list2_values
1 sun 1 revolves
3 moon 1 shine
5 earth 0 NULL
10 sun 2 shine,revolves
12 moon 0 NULL
6 sun 1 revolves
*/
My query:
Here is what I tried
select id, count(l1),l1
from @Table1
where id in ('shine','revolves') and id in ('sun','moon','earth')
How can I achieve this?
I know this should be a subquery with multiple INs. How can this be achieved?
SQL fiddle Link:
https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=7a85dbf51ca5b5d35e87d968c46300bb
There are several ways this could be done. Here's how I'd do it:
First set up the data:
DECLARE @Table1 AS TABLE (
id int,
l1 varchar(20)
) ;
INSERT INTO @Table1 VALUES
(1,'sun'),
(2,'shine'),
(3,'moon'),
(4,'light'),
(5,'earth'),
(6,'revolves'),
(7,'flow'),
(8,'fire'),
(9,'fighter'),
(10,'sun'),
(10,'shine'),
(11,'shine'),
(12,'moon'),
(1,'revolves'),
(10,'revolves'),
(2,'air'),
(3,'shine'),
(4,'fire'),
(5,'love'),
(6,'sun'),
(7,'rises') ;
Since this is a known list, set the "target" data up as its own set. (In SQL, tables are almost invariably better to work with than demented lists. Oops, typo! I meant delimited lists.)
DECLARE @Targets AS TABLE (
l2 varchar(20)
) ;
INSERT INTO @Targets VALUES
('sun'),
('moon'),
('earth') ;
OPERATION 1
fetch all distinct ID's that has values from List 1
(sun, moon, earth)
Easy enough with a join:
SELECT Id
from @Table1 t1
inner join @Targets tg
on tg.l2 = t1.l1
OPERATION 2
fetch all the id, count_of_list2_values, list2_values
based on the id's that we received from OPERATION1
If I'm following the desired logic correctly, then (read the "join" comments first):
SELECT
tt.Id
-- This next counts how many items in the Operation 1 list are not in the target list
-- (Spaced out, to make it easier to compare with the next line)
,sum( case when tg2.l2 is null then 1 else 0 end)
-- And this concatenates them together in a string (in later editions of SQL Server)
,string_agg(case when tg2.l2 is null then tt.l1 else null end, ', ')
from @Table1 tt
inner join (-- Operation 1 as a subquery, produce list of the Ids to work with
select t1.id
from @Table1 t1
inner join @Targets tg
on tg.l2 = t1.l1
) xx
on xx.id = tt.id
-- This is used to identify the target values vs. the non-target values
left outer join @Targets tg2
on tg2.l2 = tt.l1
-- Aggregate, because that's what we need to do
group by tt.Id
-- Order it, because why not?
order by tt.Id
If you're using SQL Server 2017 then you can use the string_agg function and the outer apply operator:
select
l1.id,
l1.l1,
l2.cnt as count_of_list2_values,
l2.l1 as list2_values
from @Table1 as l1
outer apply (
select
count(*) as cnt,
string_agg(tt.l1, ',') as l1
from @Table1 as tt
where
tt.l1 in ('shine','revolves') and
tt.id = l1.id
) as l2
where
l1.l1 in ('sun','moon','earth')
db fiddle demo
In earlier versions, I'm not sure it's possible to aggregate and count in one pass without creating a special function for it. You can, of course, do it like this with XQuery, but it might be a bit of overkill (I'd not do this in production code, at least):
select
l1.id,
l1.l1,
l2.data.value('count(l1)', 'int'),
stuff(l2.data.query('for $i in l1 return concat(",",$i/text()[1])').value('.','nvarchar(max)'),1,1,'')
from @Table1 as l1
outer apply (
select
tt.l1
from @Table1 as tt
where
tt.l1 in ('shine','revolves') and
tt.id = l1.id
for xml path(''), type
) as l2(data)
where
l1.l1 in ('sun','moon','earth')
db fiddle demo
If you don't mind doing it with a double scan / seek of the table then you can either use @forpas's answer or do something like this:
with cte_list2 as (
select tt.l1, tt.id
from @Table1 as tt
where
tt.l1 in ('shine','revolves')
)
select
l1.id,
l1.l1,
l22.cnt as count_of_list2_values,
stuff(l21.data.value('.', 'nvarchar(max)'),1,1,'') as list2_values
from @Table1 as l1
outer apply (
select
',' + tt.l1
from cte_list2 as tt
where
tt.id = l1.id
for xml path(''), type
) as l21(data)
outer apply (
select count(*) as cnt
from cte_list2 as tt
where
tt.id = l1.id
) as l22(cnt)
where
l1.l1 in ('sun','moon','earth')
With this:
with
cte as(
select t1.id, t2.l1
from table1 t1 left join (
select * from table1 where l1 in ('shine','revolves')
) t2 on t2.id = t1.id
where t1.l1 in ('sun','moon','earth')
),
cte1 as(
select
c.id,
stuff(( select ',' + cte.l1 from cte where id = c.id for xml path(''), type).value('.', 'NVARCHAR(MAX)'), 1, 1, '') col
from cte c
)
select
id,
count(col) count_of_list2_values,
max(col) list2_values
from cte1
group by id
The 1st CTE gives these results:
id | l1
-: | :-------
1 | revolves
3 | shine
5 | null
10 | shine
10 | revolves
12 | null
6 | revolves
and the 2nd operates on these results to concatenate the common grouped values of l1.
Finally, I use GROUP BY id and aggregation on the results of the 2nd CTE.
See the demo
Results:
id | count_of_list2_values | list2_values
-: | --------------------: | :-------------
1 | 1 | revolves
3 | 1 | shine
5 | 0 | null
6 | 1 | revolves
10 | 2 | shine,revolves
12 | 0 | null

Remove duplicated subsets from very large table

The data I'm working with is fairly complicated, so I'm just going to provide a simpler example so I can hopefully expand that out to what I'm working on.
Note: I've already found a way to do it, but it's extremely slow and not scalable. It works great on small datasets, but if I applied it to the actual tables it needs to run on, it would take forever.
I need to remove entire duplicate subsets of data within a table. Removing duplicate rows is easy, but I'm stuck finding an efficient way to remove duplicate subsets.
Example:
GroupID Subset Value
------- ---- ----
1 a 1
1 a 2
1 a 3
1 b 1
1 b 3
1 b 5
1 c 1
1 c 3
1 c 5
2 a 1
2 a 2
2 a 3
2 b 4
2 b 5
2 b 6
2 c 1
2 c 3
2 c 6
So in this example, from GroupID 1, I would need to remove either subset 'b' or subset 'c'; it doesn't matter which, since both contain the values 1, 3, 5. For GroupID 2, none of the sets are duplicated, so none are removed.
Here's the code I used to solve this on a small scale. It works great, but when applied to 10+ Million records...you can imagine it would be very slow (I was later informed of the number of records, the sample data I was given was much smaller)...:
DECLARE @values TABLE (GroupID INT NOT NULL, SubSet VARCHAR(1) NOT NULL, [Value] INT NOT NULL)
INSERT INTO @values (GroupID, SubSet, [Value])
VALUES (1,'a',1),(1,'a',2),(1,'a',3) ,(1,'b',1),(1,'b',3),(1,'b',5) ,(1,'c',1),(1,'c',3),(1,'c',5),
(2,'a',1),(2,'a',2),(2,'a',3) ,(2,'b',2),(2,'b',4),(2,'b',6) ,(2,'c',1),(2,'c',3),(2,'c',6)
SELECT *
FROM @values v
ORDER BY v.GroupID, v.SubSet, v.[Value]
SELECT x.GroupID, x.NameValues, MIN(x.SubSet)
FROM (
SELECT t1.GroupID, t1.SubSet
, NameValues = (SELECT ',' + CONVERT(VARCHAR(10), t2.[Value]) FROM @values t2 WHERE t1.GroupID = t2.GroupID AND t1.SubSet = t2.SubSet ORDER BY t2.[Value] FOR XML PATH(''))
FROM @values t1
GROUP BY t1.GroupID, t1.SubSet
) x
GROUP BY x.GroupID, x.NameValues
All I'm doing here is grouping by GroupID and Subset and concatenating all of the values into a comma delimited string...and then taking that and grouping on GroupID and Value list, and taking the MIN subset.
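To turn that identification into an actual removal, a sketch using the same concatenation trick (keep MIN(SubSet) per GroupID and value list, delete the rest) might look like this:
;WITH keyed AS (
    SELECT t1.GroupID, t1.SubSet,
           NameValues = (SELECT ',' + CONVERT(VARCHAR(10), t2.[Value])
                         FROM @values t2
                         WHERE t1.GroupID = t2.GroupID AND t1.SubSet = t2.SubSet
                         ORDER BY t2.[Value]
                         FOR XML PATH(''))
    FROM @values t1
    GROUP BY t1.GroupID, t1.SubSet
), keepers AS (
    SELECT GroupID, NameValues, KeepSubSet = MIN(SubSet)
    FROM keyed
    GROUP BY GroupID, NameValues
)
DELETE v
FROM @values v
JOIN keyed k   ON k.GroupID = v.GroupID AND k.SubSet = v.SubSet
JOIN keepers x ON x.GroupID = k.GroupID AND x.NameValues = k.NameValues
WHERE k.SubSet <> x.KeepSubSet;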
I'd go with something like this:
;with cte as
(
select v.GroupID, v.SubSet, checksum_agg(v.Value) h, avg(v.Value) a
from @values v
group by v.GroupID, v.SubSet
)
delete v
from @values v
join
(
select c1.GroupID, case when c1.SubSet > c2.SubSet then c1.SubSet else c2.SubSet end SubSet
from cte c1
join cte c2 on c1.GroupID = c2.GroupID and c1.SubSet <> c2.SubSet and c1.h = c2.h and c1.a = c2.a
)x on v.GroupID = x.GroupID and v.SubSet = x.SubSet
select *
from @values
From Checksum_Agg:
The CHECKSUM_AGG result does not depend on the order of the rows in
the table.
This is because it is a sum of the values: 1 + 2 + 3 = 3 + 2 + 1 = 6; but note that 3 + 3 = 6 as well, so two different sets of values can produce the same result.
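You can check that kind of collision for yourself with a quick test (a sketch; compare the two values it returns):
-- Two different value sets aggregated with CHECKSUM_AGG; if the two columns come back
-- equal, a checksum comparison alone would wrongly flag the sets as identical.
SELECT (SELECT CHECKSUM_AGG(v) FROM (VALUES (1), (2), (3)) AS a(v)) AS checksum_1_2_3,
       (SELECT CHECKSUM_AGG(v) FROM (VALUES (3), (3)) AS b(v)) AS checksum_3_3;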
HashBytes is designed to produce a different value for two inputs that differ only in the order of the bytes, as well as other differences. (There is a small possibility that two inputs, perhaps of wildly different lengths, could hash to the same value. You can't take an arbitrary input and squeeze it down to an absolutely unique 16-byte value.)
The following code demonstrates how to use HashBytes to return a hash of the ordered values for each GroupId/Subset.
-- Thanks for the sample data!
DECLARE @values TABLE (GroupID INT NOT NULL, SubSet VARCHAR(1) NOT NULL, [Value] INT NOT NULL)
INSERT INTO @values (GroupID, SubSet, [Value])
VALUES (1,'a',1),(1,'a',2),(1,'a',3) ,(1,'b',1),(1,'b',3),(1,'b',5) ,(1,'c',1),(1,'c',3),(1,'c',5),
(2,'a',1),(2,'a',2),(2,'a',3) ,(2,'b',2),(2,'b',4),(2,'b',6) ,(2,'c',1),(2,'c',3),(2,'c',6);
SELECT *
FROM @values v
ORDER BY v.GroupID, v.SubSet, v.[Value];
with
DistinctGroups as (
select distinct GroupId, Subset
from @Values ),
GroupConcatenatedValues as (
select GroupId, Subset, Convert( VarBinary(256), (
select Convert( VarChar(8000), Cast( Value as Binary(4) ), 2 ) AS [text()]
from @Values as V
where V.GroupId = DG.GroupId and V.SubSet = DG.SubSet
order by Value
for XML Path('') ), 2 ) as GroupedBinary
from DistinctGroups as DG )
-- To see the intermediate results from the CTE you can use one of the
-- following two queries instead of the last select :
-- select * from DistinctGroups;
-- select * from GroupConcatenatedValues;
select GroupId, Subset, GroupedBinary, HashBytes( 'MD4', GroupedBinary ) as Hash
from GroupConcatenatedValues
order by GroupId, Subset;
You can use checksum_agg() over a set of rows. If the checksums are the same, this is strong evidence that the 'values' columns are equal within the grouped fields.
In the 'getChecksums' cte below, I group by the group and subset, with a checksum based on your 'value' column.
In the 'maybeBadSubsets' cte, I put a row_number over each aggregation just to identify the 2nd+ row in the event the checksums match.
Finally, I delete any subgroups so identified.
with
getChecksums as (
select groupId,
subset,
cs = checksum_agg(value)
from @values v
group by groupId,
subset
),
maybeBadSubsets as (
select groupId,
subset,
cs,
deleteSubset =
case
when row_number() over (
partition by groupId, cs
order by subset
) > 1
then 1
end
from getChecksums
)
delete v
from @values v
where exists (
select 0
from maybeBadSubsets mbs
where v.groupId = mbs.groupId
and v.SubSet = mbs.subset
and mbs.deleteSubset = 1
);
I don't know what the exact likelihood is for checksums to match. If you're not comfortable with the false positive rate, you can still use it to eliminate some branches in a more algorithmic approach in order to vastly improve performance.
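For example, a sketch of that idea: use the checksum only to find candidate pairs, then confirm each pair with an exact set comparison (EXCEPT in both directions) before treating it as a duplicate:
;WITH getChecksums AS (
    SELECT groupId, subset, cs = CHECKSUM_AGG([value])
    FROM @values
    GROUP BY groupId, subset
)
SELECT c1.groupId, keepSubset = c1.subset, duplicateSubset = c2.subset
FROM getChecksums c1
JOIN getChecksums c2
  ON c2.groupId = c1.groupId
 AND c2.cs = c1.cs
 AND c2.subset > c1.subset          -- only look "forward", so each pair is checked once
WHERE NOT EXISTS (
          SELECT [value] FROM @values a WHERE a.groupId = c1.groupId AND a.subset = c1.subset
          EXCEPT
          SELECT [value] FROM @values b WHERE b.groupId = c2.groupId AND b.subset = c2.subset )
  AND NOT EXISTS (
          SELECT [value] FROM @values b WHERE b.groupId = c2.groupId AND b.subset = c2.subset
          EXCEPT
          SELECT [value] FROM @values a WHERE a.groupId = c1.groupId AND a.subset = c1.subset );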
Note: CTEs can have a quirk performance-wise. If you find that the query engine is running 'maybeBadSubsets' for each row of @values, you may need to put its results into a temp table or table variable before using it. But I believe with 'exists' you're okay as far as that goes.
EDIT:
I didn't catch it, but as the OP noticed, checksum_agg seems to perform very poorly in terms of false hits/misses. I suspect it might be due to the simplicity of the input. I changed
cs = checksum_agg(value)
above to
cs = checksum_agg(convert(int,hashbytes('md5', convert(char(1),value))))
and got better results. But I don't know how it would perform on larger datasets.

Sorting two dimensional table by reordering rows and columns

Is it possible, and if so how, to sort a two-dimensional table by reordering columns and rows, using only these two operations, so that the table's biggest numbers are concentrated in the top-left corner?
Any help would be very greatly appreciated.
For example, we can use this table:
Column 1 Column 2 Column 3
Row 1 2 4 5
Row 2 3 2 6
Row 3 7 2 6
Result I think would be this, but I am not sure:
Column 1 Column 2 Column 3
Row 1 7 6 2
Row 2 3 6 2
Row 3 2 5 4
For now, I have only thought about summing the rows and columns and sorting them in descending order toward the top-left.
More of a MatLab guy when it comes to matrix manipulations, but perhaps this may help.
Here we use a TVF to create a dynamic EAV structure. If you can't use a function, it is a small matter to go in-line.
Also, the final pivot can be dynamic if needed
Example
Declare @YourTable table (Column1 int,Column2 int,Column3 int)
Insert Into @YourTable values
(2,4,5),
(3,2,6),
(7,2,6)
;with cte as (
Select RowNr=Dense_Rank() over (Order By RowTotal Desc,Entity )
,ColNr=Dense_Rank() over (Order By ColTotal Desc,Attribute)
,Value
From (
Select *
,RowTotal = max(cast(Value as float)) over(Partition By Entity)
,ColTotal = max(cast(Value as float)) over(Partition By Attribute)
From [dbo].[udf-EAV]((Select RN=Row_Number() over (Order By (Select null)),* From @YourTable for XML RAW))
) A
)
Select [1] Col1,[2] Col2,[3] Col3
From cte
Pivot (max(Value) For [ColNr] in ([1],[2],[3]) ) p
Returns
Col1 Col2 Col3
7 6 2
3 6 2
2 5 4
The UDF, if interested:
CREATE FUNCTION [dbo].[udf-EAV](@XML xml)
Returns Table
As
Return (
with cteKey(k) as (Select Top 1 xAtt.value('local-name(.)','varchar(100)') From @XML.nodes('/row') As A(xRow) Cross Apply A.xRow.nodes('./@*') As B(xAtt))
Select Entity = xRow.value('@*[1]','varchar(50)')
,Attribute = xAtt.value('local-name(.)','varchar(100)')
,Value = xAtt.value('.','varchar(max)')
From @XML.nodes('/row') As A(xRow)
Cross Apply A.xRow.nodes('./@*') As B(xAtt)
Where xAtt.value('local-name(.)','varchar(100)') Not In (Select k From cteKey)
)
-- Notes: First Field in Query will be the Entity
-- Select * From [dbo].[udf-EAV]((Select UTCDate=GetUTCDate(),* From sys.dm_os_sys_info for XML RAW))

How to synthesize attribute for joined tables

I have a view defined like this:
CREATE VIEW [dbo].[PossiblyMatchingContracts] AS
SELECT
C.UniqueID,
CC.UniqueID AS PossiblyMatchingContracts
FROM [dbo].AllContracts AS C
INNER JOIN [dbo].AllContracts AS CC
ON C.SecondaryMatchCodeFB = CC.SecondaryMatchCodeFB
OR C.SecondaryMatchCodeLB = CC.SecondaryMatchCodeLB
OR C.SecondaryMatchCodeBB = CC.SecondaryMatchCodeBB
OR C.SecondaryMatchCodeLB = CC.SecondaryMatchCodeBB
OR C.SecondaryMatchCodeBB = CC.SecondaryMatchCodeLB
WHERE C.UniqueID NOT IN
(
SELECT UniqueID FROM [dbo].DefinitiveMatches
)
AND C.AssociatedUser IS NULL
AND C.UniqueID <> CC.UniqueID
Which is basically finding contracts where, for example, the first name and the birthday match. This works great. Now I want to add a synthetic attribute to each row with the value from only one source row.
Let me give you an example to make it clearer. Suppose I have the following table:
UniqueID | FirstName | LastName | Birthday
1 | Peter | Smith | 1980-11-04
2 | Peter | Gray | 1980-11-04
3 | Peter | Gray-Smith| 1980-11-04
4 | Frank | May | 1985-06-09
5 | Frank-Paul| May | 1985-06-09
6 | Gina | Ericson | 1950-11-04
The resulting view should look like this:
UniqueID | PossiblyMatchingContracts | SyntheticID
1 | 2 | PeterSmith1980-11-04
1 | 3 | PeterSmith1980-11-04
2 | 1 | PeterSmith1980-11-04
2 | 3 | PeterSmith1980-11-04
3 | 1 | PeterSmith1980-11-04
3 | 2 | PeterSmith1980-11-04
4 | 5 | FrankMay1985-06-09
5 | 4 | FrankMay1985-06-09
6 | NULL | NULL [or] GinaEricson1950-11-04
Notice that the SyntheticID column uses ONLY values from one of the matching source rows. It doesn't matter which one. I am exporting this view to another application and need to be able to identify each "match group" afterwards.
Is it clear what I mean? Any ideas how this could be done in sql?
Maybe it helps to elaborate a bit on the actual use case:
I am importing contracts from different systems. To account for the possibility of typos, or of people who have married but whose last name was only updated in one system, I need to find so-called 'possible matches'. Two or more contracts are considered a possible match if they contain the same birthday plus the same first, last or birth name. That implies that if contract A matches contract B, contract B also matches contract A.
The target system uses multivalue reference attributes to store these relationships. The ultimate goal is to create user objects for these contracts. The first catch is that there shall be only one user object for multiple matching contracts; thus I'm creating these matches in the view. The second catch is that the creation of user objects happens via workflows, which run in parallel for each contract. To avoid creating multiple user objects for matching contracts, each workflow needs to check if there is already a matching user object or another workflow which is about to create said user object. Because the workflow engine is extremely slow compared to SQL, the workflows should not repeat the whole matching test. So the idea is to let the workflow check only for the 'syntheticID'.
I have solved it with a multi-step approach:
1. Create the list of possible 1st level matches for each contract
2. Create the base groups list, assigning a different group to each contract (as if they were not related to anybody)
3. Iterate the matches list, updating the group list when more contracts need to be added to a group
4. Recursively build up the SyntheticID from the final group list
5. Output results
First of all, let me explain what I have understood, so you can tell if my approach is correct or not.
1) matching propagates in "cascade"
I mean, if "Peter Smith" is grouped up with "Peter Gray", it means that all Smith and all Gray are related (if they have the same birth date) so Luke Smith can be in the same group of John Gray
2) I have not understood what you mean with "Birth Name"
You say contracts matches on "first, last or birth name", sorry, I'm italian, I thought birth name and first were the same, also in your data there is not such column. Maybe it is related to that dash symbol between names?
When FirstName is Frank-Paul it means it should match both Frank and Paul?
When LastName is Gray-Smith it means it should match both Gray and Smith?
In following code I have simply ignored this problem, but it could be handled if needed (I already did a try, breaking names, unpivoting them and treating as double match).
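For reference, a small sketch of that attempt (this assumes SQL Server 2016+ for STRING_SPLIT, which the question does not state): break a hyphenated name into separate rows, so that 'Gray-Smith' could be matched as both 'Gray' and 'Smith':
-- each hyphenated value becomes one row per part, which can then be matched like the plain names
select t.UniqueID, s.value as NamePart
from (values (3, 'Gray-Smith'), (5, 'Frank-Paul')) as t (UniqueID, FullName)
cross apply string_split(t.FullName, '-') as s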
Step Zero: some declaration and prepare base data
declare @cli as table (UniqueID int primary key, FirstName varchar(20), LastName varchar(20), Birthday varchar(20))
declare @comb as table (id1 int, id2 int, done bit)
declare @grp as table (ix int identity primary key, grp int, id int, unique (grp,ix))
declare @str_id as table (grp int primary key, SyntheticID varchar(1000))
declare @id1 as int, @g int
;with
t as (
select *
from (values
(1 , 'Peter' , 'Smith' , '1980-11-04'),
(2 , 'Peter' , 'Gray' , '1980-11-04'),
(3 , 'Peter' , 'Gray-Smith', '1980-11-04'),
(4 , 'Frank' , 'May' , '1985-06-09'),
(5 , 'Frank-Paul', 'May' , '1985-06-09'),
(6 , 'Gina' , 'Ericson' , '1950-11-04')
) x (UniqueID , FirstName , LastName , Birthday)
)
insert into @cli
select * from t
Step One: Create the list of possible 1st level matches for each contract
;with
p as(select UniqueID, Birthday, FirstName, LastName from @cli),
m as (
select p.UniqueID UniqueID1, p.FirstName FirstName1, p.LastName LastName1, p.Birthday Birthday1, pp.UniqueID UniqueID2, pp.FirstName FirstName2, pp.LastName LastName2, pp.Birthday Birthday2
from p
join p pp on (pp.Birthday=p.Birthday) and (pp.FirstName = p.FirstName or pp.LastName = p.LastName)
where p.UniqueID<=pp.UniqueID
)
insert into @comb
select UniqueID1,UniqueID2,0
from m
Step Two: Create the base groups list
insert into @grp
select ROW_NUMBER() over(order by id1), id1 from @comb where id1=id2
Step Three: Iterate the matches list updating the group list
Only loop on contracts that have possible matches, and update only if needed
set @id1 = 0
while not(@id1 is null) begin
set @id1 = (select top 1 id1 from @comb where id1<>id2 and done=0)
if not(@id1 is null) begin
set @g = (select grp from @grp where id=@id1)
update g set grp= @g
from @grp g
inner join @comb c on g.id = c.id2
where c.id2<>@id1 and c.id1=@id1
and grp<>@g
update @comb set done=1 where id1=@id1
end
end
Step Four: Build up the SyntheticID
Recursively add ALL (distinct) first and last names of the group to the SyntheticID.
I used '_' as separator for birth date, first names and last names, and ',' as separator for the list of names to avoid conflicts.
;with
c as(
select c.*, g.grp
from @cli c
join @grp g on g.id = c.UniqueID
),
d as (
select *, row_number() over (partition by g order by t,s) n1, row_number() over (partition by g order by t desc,s desc) n2
from (
select distinct c.grp g, 1 t, FirstName s from c
union
select distinct c.grp, 2, LastName from c
) l
),
r as (
select d.*, cast(CONVERT(VARCHAR(10), t.Birthday, 112) + '_' + s as varchar(1000)) Names, cast(0 as bigint) i1, cast(0 as bigint) i2
from d
join @cli t on t.UniqueID=d.g
where n1=1
union all
select d.*, cast(r.names + IIF(r.t<>d.t,'_',',') + d.s as varchar(1000)), r.n1, r.n2
from d
join r on r.g = d.g and r.n1=d.n1-1
)
insert into @str_id
select g, Names
from r
where n2=1
Step Five: Output results
select c.UniqueID, case when id2=UniqueID then id1 else id2 end PossibleMatchingContract, s.SyntheticID
from @cli c
left join @comb cb on c.UniqueID in(id1,id2) and id1<>id2
left join @grp g on c.UniqueID = g.id
left join @str_id s on s.grp = g.grp
Here are the results
UniqueID PossibleMatchingContract SyntheticID
1 2 1980-11-04_Peter_Gray,Gray-Smith,Smith
1 3 1980-11-04_Peter_Gray,Gray-Smith,Smith
2 1 1980-11-04_Peter_Gray,Gray-Smith,Smith
2 3 1980-11-04_Peter_Gray,Gray-Smith,Smith
3 1 1980-11-04_Peter_Gray,Gray-Smith,Smith
3 2 1980-11-04_Peter_Gray,Gray-Smith,Smith
4 5 1985-06-09_Frank,Frank-Paul_May
5 4 1985-06-09_Frank,Frank-Paul_May
6 NULL 1950-11-04_Gina_Ericson
I think that in this way the resulting SyntheticID should also be "unique" for each group
This creates a synthetic value and is easy to change to suit your needs.
DECLARE @T TABLE (
UniqueID INT
,FirstName VARCHAR(200)
,LastName VARCHAR(200)
,Birthday DATE
)
INSERT INTO @T(UniqueID,FirstName,LastName,Birthday) SELECT 1,'Peter','Smith','1980-11-04'
INSERT INTO @T(UniqueID,FirstName,LastName,Birthday) SELECT 2,'Peter','Gray','1980-11-04'
INSERT INTO @T(UniqueID,FirstName,LastName,Birthday) SELECT 3,'Peter','Gray-Smith','1980-11-04'
INSERT INTO @T(UniqueID,FirstName,LastName,Birthday) SELECT 4,'Frank','May','1985-06-09'
INSERT INTO @T(UniqueID,FirstName,LastName,Birthday) SELECT 5,'Frank-Paul','May','1985-06-09'
INSERT INTO @T(UniqueID,FirstName,LastName,Birthday) SELECT 6,'Gina','Ericson','1950-11-04'
DECLARE @PossibleMatches TABLE (UniqueID INT,[PossibleMatch] INT,SynKey VARCHAR(2000)
)
INSERT INTO @PossibleMatches
SELECT t1.UniqueID [UniqueID],t2.UniqueID [Possible Matches],'Ln=' + t1.LastName + ' Fn=' + t1.FirstName + ' DoB=' + CONVERT(VARCHAR,t1.Birthday,102) [SynKey]
FROM @T t1
INNER JOIN @T t2 ON t1.Birthday=t2.Birthday
AND t1.FirstName=t2.FirstName
AND t1.LastName=t2.LastName
AND t1.UniqueID<>t2.UniqueID
INSERT INTO @PossibleMatches
SELECT t1.UniqueID [UniqueID],t2.UniqueID [Possible Matches],'Fn=' + t1.FirstName + ' DoB=' + CONVERT(VARCHAR,t1.Birthday,102) [SynKey]
FROM @T t1
INNER JOIN @T t2 ON t1.Birthday=t2.Birthday
AND t1.FirstName=t2.FirstName
AND t1.UniqueID<>t2.UniqueID
INSERT INTO @PossibleMatches
SELECT t1.UniqueID,t2.UniqueID,'Ln=' + t1.LastName + ' DoB=' + CONVERT(VARCHAR,t1.Birthday,102) [SynKey]
FROM @T t1
INNER JOIN @T t2 ON t1.Birthday=t2.Birthday
AND t1.LastName=t2.LastName
AND t1.UniqueID<>t2.UniqueID
INSERT INTO @PossibleMatches
SELECT t1.UniqueID,pm.UniqueID,'Ln=' + t1.LastName + ' Fn=' + t1.FirstName + ' DoB=' + CONVERT(VARCHAR,t1.Birthday,102) [SynKey]
FROM @T t1
LEFT JOIN @PossibleMatches pm on pm.UniqueID=t1.UniqueID
WHERE pm.UniqueID IS NULL
SELECT *
FROM @PossibleMatches
ORDER BY UniqueID,[PossibleMatch]
I think this will work for you
SELECT
C.UniqueID,
CC.UniqueID AS PossiblyMatchingContracts,
FIRST_VALUE(CC.FirstName+CC.LastName+CC.Birthday)
OVER (PARTITION BY C.UniqueID ORDER BY CC.UniqueID) as SyntheticID
FROM
[dbo].AllContracts AS C INNER JOIN
[dbo].AllContracts AS CC ON
C.SecondaryMatchCodeFB = CC.SecondaryMatchCodeFB OR
C.SecondaryMatchCodeLB = CC.SecondaryMatchCodeLB OR
C.SecondaryMatchCodeBB = CC.SecondaryMatchCodeBB OR
C.SecondaryMatchCodeLB = CC.SecondaryMatchCodeBB OR
C.SecondaryMatchCodeBB = CC.SecondaryMatchCodeLB
WHERE
C.UniqueID NOT IN(
SELECT UniqueID FROM [dbo].DefinitiveMatches)
AND C.AssociatedUser IS NULL
You can try this:
SELECT
C.UniqueID,
CC.UniqueID AS PossiblyMatchingContracts,
FIRST_VALUE(CC.FirstName+CC.LastName+CC.Birthday)
OVER (PARTITION BY C.UniqueID ORDER BY CC.UniqueID) as SyntheticID
FROM
[dbo].AllContracts AS C
INNER JOIN
[dbo].AllContracts AS CC
ON
C.SecondaryMatchCodeFB = CC.SecondaryMatchCodeFB
OR
C.SecondaryMatchCodeLB = CC.SecondaryMatchCodeLB
OR
C.SecondaryMatchCodeBB = CC.SecondaryMatchCodeBB
OR
C.SecondaryMatchCodeLB = CC.SecondaryMatchCodeBB
OR
C.SecondaryMatchCodeBB = CC.SecondaryMatchCodeLB
WHERE
C.UniqueID NOT IN
(
SELECT UniqueID FROM [dbo].DefinitiveMatches
)
AND
C.AssociatedUser IS NULL
This will generate one extra row (because we left out C.UniqueID <> CC.UniqueID) but will give you the right solution.
Following is an example with some sample data extracted from your original post. The idea: generate all SyntheticIDs in a CTE, query all records with a "PossibleMatch" and union them with all records which are not yet included:
DECLARE @t TABLE(
UniqueID int
,FirstName nvarchar(20)
,LastName nvarchar(20)
,Birthday datetime
)
INSERT INTO @t VALUES (1, 'Peter', 'Smith', '1980-11-04');
INSERT INTO @t VALUES (2, 'Peter', 'Gray', '1980-11-04');
INSERT INTO @t VALUES (3, 'Peter', 'Gray-Smith', '1980-11-04');
INSERT INTO @t VALUES (4, 'Frank', 'May', '1985-06-09');
INSERT INTO @t VALUES (5, 'Frank-Paul', 'May', '1985-06-09');
INSERT INTO @t VALUES (6, 'Gina', 'Ericson', '1950-11-04');
WITH ctePrep AS(
SELECT UniqueID, FirstName, LastName, BirthDay,
ROW_NUMBER() OVER (PARTITION BY FirstName, BirthDay ORDER BY FirstName, BirthDay) AS k,
FirstName+LastName+CONVERT(nvarchar(10), Birthday, 126) AS SyntheticID
FROM @t
),
cteKeys AS(
SELECT FirstName, BirthDay, SyntheticID
FROM ctePrep
WHERE k = 1
),
cteFiltered AS(
SELECT
C.UniqueID,
CC.UniqueID AS PossiblyMatchingContracts,
keys.SyntheticID
FROM @t AS C
JOIN @t AS CC ON C.FirstName = CC.FirstName
AND C.Birthday = CC.Birthday
JOIN cteKeys AS keys ON keys.FirstName = c.FirstName
AND keys.Birthday = c.Birthday
WHERE C.UniqueID <> CC.UniqueID
)
SELECT UniqueID, PossiblyMatchingContracts, SyntheticID
FROM cteFiltered
UNION ALL
SELECT UniqueID, NULL, FirstName+LastName+CONVERT(nvarchar(10), Birthday, 126) AS SyntheticID
FROM @t
WHERE UniqueID NOT IN (SELECT UniqueID FROM cteFiltered)
Hope this helps. The result looked OK to me:
UniqueID PossiblyMatchingContracts SyntheticID
---------------------------------------------------------------
2 1 PeterSmith1980-11-04
3 1 PeterSmith1980-11-04
1 2 PeterSmith1980-11-04
3 2 PeterSmith1980-11-04
1 3 PeterSmith1980-11-04
2 3 PeterSmith1980-11-04
4 NULL FrankMay1985-06-09
5 NULL Frank-PaulMay1985-06-09
6 NULL GinaEricson1950-11-04
Tested in SSMS, it works perfectly. :)
--create table structure
create table #temp
(
uniqueID int,
firstname varchar(15),
lastname varchar(15),
birthday date
)
--insert data into the table
insert #temp
select 1, 'peter','smith','1980-11-04'
union all
select 2, 'peter','gray','1980-11-04'
union all
select 3, 'peter','gray-smith','1980-11-04'
union all
select 4, 'frank','may','1985-06-09'
union all
select 5, 'frank-paul','may','1985-06-09'
union all
select 6, 'gina','ericson','1950-11-04'
select * from #temp
--solution is as below
select ab.uniqueID
, PossiblyMatchingContracts
, c.firstname+c.lastname+cast(c.birthday as varchar) as synID
from
(
select a.uniqueID
, case
when a.uniqueID < min(b.uniqueID)over(partition by a.uniqueid)
then a.uniqueID
else min(b.uniqueID)over(partition by a.uniqueid)
end as SmallestID
, b.uniqueID as PossiblyMatchingContracts
from #temp a
left join #temp b
on (a.firstname = b.firstname OR a.lastname = b.lastname) AND a.birthday = b.birthday AND a.uniqueid <> b.uniqueID
) as ab
left join #temp c
on ab.SmallestID = c.uniqueID
Result capture is attached below:
Say we have the following table (a VIEW in your case):
UniqueID PossiblyMatchingContracts SyntheticID
1 2 G1
1 3 G2
2 1 G3
2 3 G4
3 1 G4
3 4 G6
4 5 G7
5 4 G8
6 NULL G9
In your case you can set the initial SyntheticID as a string like PeterSmith1980-11-04, one per UniqueID/line. Here is a recursive CTE query: it divides all lines into unconnected groups and selects MAX(SyntheticID) within the current group as the new SyntheticID for all lines in that group.
WITH CTE AS
(
SELECT CAST(','+CAST(UniqueID AS Varchar(100)) +','+ CAST(PossiblyMatchingContracts as Varchar(100))+',' as Varchar(MAX)) as GroupCont,
SyntheticID
FROM PossiblyMatchingContracts
UNION ALL
SELECT CAST(GroupCont+CAST(UniqueID AS Varchar(100)) +','+ CAST(PossiblyMatchingContracts as Varchar(100))+',' AS Varchar(MAX)) as GroupCont,
pm.SyntheticID
FROM CTE
JOIN PossiblyMatchingContracts as pm
ON
(
CTE.GroupCont LIKE '%,'+CAST(pm.UniqueID AS Varchar(100))+',%'
OR
CTE.GroupCont LIKE '%,'+CAST(pm.PossiblyMatchingContracts AS Varchar(100))+',%'
)
AND NOT
(
CTE.GroupCont LIKE '%,'+CAST(pm.UniqueID AS Varchar(100))+',%'
AND
CTE.GroupCont LIKE '%,'+CAST(pm.PossiblyMatchingContracts AS Varchar(100))+',%'
)
)
SELECT pm.UniqueID,
pm.PossiblyMatchingContracts,
ISNULL(
(SELECT MAX(SyntheticID) FROM CTE WHERE
(
CTE.GroupCont LIKE '%,'+CAST(pm.UniqueID AS Varchar(100))+',%'
OR
CTE.GroupCont LIKE '%,'+CAST(pm.PossiblyMatchingContracts AS Varchar(100))+',%'
))
,pm.SyntheticID) as SyntheticID
FROM PossiblyMatchingContracts pm
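-- Note: recursive CTEs stop at SQL Server's default limit of 100 recursion levels; if the
-- match chains can be longer than that, append OPTION (MAXRECURSION 0) to this final SELECT
-- to raise or remove the limit.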