Update child table whose parent table has duplicates - SQL

I have duplicate data in the Contact table.
I am identifying the duplicates with the RANK() function.
I also have to update the contact id in the child table Activity.
Where a contact has rank 2 or higher, I want to repoint its Activity rows to the contact id that has rank 1.
I am using this query to find duplicate contacts:
(SELECT
    ExternalContactID,
    RANK() OVER (PARTITION BY ExternalAccountId, Name, Email, MailingCity,
                 MailingCountry, MailingState, MailingStreet, Phone
                 ORDER BY ExternalContactID) AS rank
FROM
    contact)
Table: Contact
ExternalContactID | Rank
101               | 1
102               | 2

Child table: Activity
ActivityID | ContactID
1          | 101
2          | 102
Before deleting the contact(s) with rank > 1, I need to update the child table "Activity" with the rank 1 contact id.
Result:
The contact with id = 102 is deleted, and the Activity record with id = 2 now has contact id = 101.

You've used RANK to identify the groups of duplicates and, within each group, the row you wish to keep. That gives you the mechanism for a self-join that relates every row to its "keeper" (the member of the duplicate group you wish to keep).
WITH cte AS (
    {Query with Rank column}
)
SELECT ...
FROM cte a
INNER JOIN cte b
    ON {all the partitioning columns being equal}
    AND a.Rank = 1
    AND b.Rank <> 1
With the pseudocode above, you've got every row in the table joined to the row that will be the "keeper" after you delete the duplicates. JOIN to this structure in your UPDATE to set the FK to the PK of the keeper related to it.
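For example, a concrete sketch of that UPDATE (assuming SQL Server's UPDATE ... FROM syntax, and that Activity.ContactID references Contact.ExternalContactID; the rank column is aliased rnk here only to avoid the keyword):

WITH cte AS (
    SELECT
        ExternalContactID,
        ExternalAccountId, Name, Email, MailingCity,
        MailingCountry, MailingState, MailingStreet, Phone,
        RANK() OVER (PARTITION BY ExternalAccountId, Name, Email, MailingCity,
                     MailingCountry, MailingState, MailingStreet, Phone
                     ORDER BY ExternalContactID) AS rnk
    FROM contact
)
UPDATE act
SET ContactID = keeper.ExternalContactID
FROM activity act
JOIN cte dup          -- the duplicate contact the activity currently points to
    ON  dup.ExternalContactID = act.ContactID
    AND dup.rnk > 1
JOIN cte keeper       -- the rank 1 row of the same duplicate group
    ON  keeper.ExternalAccountId = dup.ExternalAccountId
    AND keeper.Name            = dup.Name
    AND keeper.Email           = dup.Email
    AND keeper.MailingCity     = dup.MailingCity
    AND keeper.MailingCountry  = dup.MailingCountry
    AND keeper.MailingState    = dup.MailingState
    AND keeper.MailingStreet   = dup.MailingStreet
    AND keeper.Phone           = dup.Phone
    AND keeper.rnk = 1;
-- Note: the equality joins assume the partitioning columns are not NULL;
-- nullable columns would need ISNULL/COALESCE wrappers or explicit NULL checks.

Once the activities point at the keepers, the rank > 1 contacts can be deleted with the same CTE (DELETE FROM cte WHERE rnk > 1).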


Aggregate and count after left join

I am aggregating columns of a table to find the count of unique values. For example, aggregating
the status shows that out of 5 alerts there are 2 in open status and 3 that are closed. The simplified table looks like this:
create table alerts (
id,
status,
owner_id
);
The query below uses grouping sets to aggregate multiple columns at once. This approach works well.
with aggs as (
    select status
    from alerts
    where alerts.owner_id = 'x'
)
select status, count(*)
from aggs
group by grouping sets(
    (),
    (status)
);
The output at its simplest could look like this:
 status | count
--------+-------
        |     1
 new    |     1
However, now I need to aggregate additional columns from another table. This table (shown below) can have zero or more rows associated to the first table (alerts:users 1:N).
create table users (
id,
alert_id,
name
);
I have tried updating the query to use a left join but this approach incorrectly inflates the counts of the alert columns.
with aggs as (
    select alerts.status, users.name
    from alerts
    left join users on alerts.id = users.alert_id
    where alerts.owner_id = 'x'
    -- and additional filtering by columns in the users table
)
select status, name, count(*)
from aggs
group by grouping sets(
    (),
    (status),
    (name)
);
Below is an example of the incorrect results. Since there are 3 rows in the users table, the count for the status column is now 3 but should be 1.
 status | name  | count
--------+-------+-------
        |       |     3
        | user1 |     1
        | user2 |     1
        | user3 |     1
 new    |       |     3
How can I perform this aggregation to include the columns from the table with a many-to-one relationship without inflating the counts? In the future I will likely need to aggregate more columns from other tables with a many-to-one relationship and need a solution that will still work with several left joins. All help is much appreciated.
edit: link to db-fiddle https://www.db-fiddle.com/f/buGD2DuJiqf9LGF9rw5EgT/2
Do you just want to count the number of alerts? If so, use count(distinct):
count(distinct alert_id)
Of course, you need this in aggs, so the select would include:
alerts.id as alert_id
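Putting both pieces together, the reworked query might look like this (a sketch built from the query in the question; only the alert_id column and the distinct count are new):

with aggs as (
    select alerts.id as alert_id, alerts.status, users.name
    from alerts
    left join users on alerts.id = users.alert_id
    where alerts.owner_id = 'x'
    -- and additional filtering by columns in the users table
)
select status, name, count(distinct alert_id)
from aggs
group by grouping sets(
    (),
    (status),
    (name)
);

Counting distinct alert ids means the extra rows produced by the left join no longer inflate the totals: the overall and per-status rows count each alert once, while the per-name rows still reflect how many alerts each user is attached to.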

Delete rows where date was least updated

How can I delete rows where dateupdated was least updated?
My table is
Name   Dateupdated  ID     status
john   1/02/17      JHN1   A
john   1/03/17      JHN2   A
sally  1/02/17      SLLY1  A
sally  1/03/17      SLLY2  A
Mike   1/03/17      MK1    A
Mike   1/04/17      MK2    A
I want to be left with the following after the data removal:
Name   Date     ID     status
john   1/03/17  JHN2   A
sally  1/03/17  SLLY2  A
Mike   1/04/17  MK2    A
If you really want to "delete rows where dateupdated was least updated" then a simple single-row subquery should do the trick.
DELETE MyTable
WHERE Date = (SELECT MIN(Date) From MyTable)
If on the other hand you just want to delete the row with the earliest Date per person (as identified by their ID) you could use:
DELETE a
FROM MyTable a
JOIN (SELECT ID, MIN(Date) MinDate FROM MyTable GROUP BY ID) b
    ON a.ID = b.ID AND a.Date = b.MinDate
The idea here is you create an aggregate query that returns rows containing the columns that would match the rows you want deleted, then join to it. Because it's an inner join, rows that do not match the criteria will be excluded.
If people are uniquely identified by something else (e.g. Name), then you can just substitute that for the ID in my example above.
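For instance, keyed on Name it would look like this (the same pattern, just grouping by Name instead of ID):

DELETE a
FROM MyTable a
JOIN (SELECT Name, MIN(Date) MinDate FROM MyTable GROUP BY Name) b
    ON a.Name = b.Name AND a.Date = b.MinDate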
I am thinking though that you don't want either of these. I think you want to delete everything except for each person's latest row. If that is the case, try this:
DELETE MyTable
WHERE EXISTS (SELECT 0 FROM MyTable b WHERE b.ID = MyTable.ID AND b.Date > MyTable.Date)
The idea here is you check for existence of another data row with the same ID and a later date. If there is a later record, delete this one.
The nice thing about the last example is you can run it over and over and every person will still be left with exactly one row. The other two queries, if run over and over, will nibble away at the table until it is empty.
P.S. As these are significantly different solutions, I suggest you spend some effort learning how to articulate unambiguous requirements. This is an extremely important skill for any developer.
This deletes rows where the name is a duplicate, and deletes all but the latest row for each name. This is different from your stated question.
Using a common table expression (cte) and row_number():
;with cte as (
select *
, rn = row_number() over (
partition by Name
order by Dateupdated desc
)
from t
)
/* ------------------------------------------------
-- Remove duplicates by deleting rows
-- where the row number (rn) is greater than 1
-- leaving the first row for each partition
------------------------------------------------ */
delete
from cte
where cte.rn > 1;

select * from t
rextester: http://rextester.com/HZBQ50469
returns:
+-------+-------------+-------+--------+
| Name | Dateupdated | ID | status |
+-------+-------------+-------+--------+
| john | 2017-01-03 | JHN2 | A |
| sally | 2017-01-03 | SLLY2 | A |
| Mike | 2017-01-04 | MK2 | A |
+-------+-------------+-------+--------+
Without using the cte it can be written as:
delete d
from (
select *
, rn = row_number() over (
partition by Name
order by Dateupdated desc
)
from t
) as d
where d.rn > 1
This should do the trick:
delete a
from MyTable a
where not exists (
    select top 1 1
    from MyTable b
    where b.name = a.name
      and b.DateUpdated < a.DateUpdated
)
i.e. remove any entry for which there is no record with the same name and a date earlier than that entry's.
Your Name column has Mike and Mik2, which are different from each other.
So, unless that is a mistake, the column to group by should be the ID column without its last digit.
If it is not a mistake, I think the following is more accurate.
delete a
from MyTable a
inner join
(select substring(ID, 1, len(ID) - 1) as ID, min(Dateupdated) as MinDate
from MyTable
group by substring(ID, 1, len(ID) - 1)
) b
on substring(a.ID, 1, len(a.ID) - 1) = b.ID and a.Dateupdated = b.MinDate
You can test it at SQLFiddle: http://sqlfiddle.com/#!6/9c440/1

Using GROUP BY, select ID of record in each group that has lowest ID

I am creating a file organization system where you can add content items to multiple folders.
I am storing the data in a table that has a structure similar to the following:
ID  TypeID  ContentID  FolderID
1   101     1001       1
2   101     1001       2
3   102     1002       3
4   103     1002       2
5   103     1002       1
6   104     1001       1
7   105     1005       2
I am trying to select the first record for each unique TypeID and ContentID pair. For the above table, I would want the results to be:
ID
1
3
4
6
7
As you can see, the pairs 101 1001 and 103 1002 were each added to two folders, yet I only want the record with the first folder they were added to.
When I try the following query, however, I only get results that have at least two entries with the same TypeID and ContentID:
select MIN(ID)
from table
group by TypeID, ContentID
results in
ID
1
4
If I change MIN(ID) to MAX(ID) I get the correct number of results, yet I get the record with the last folder they were added to rather than the first:
ID
2
3
5
6
7
Am I using GROUP BY or MIN incorrectly? Is there another way I can accomplish this task of selecting the first record of each TypeID/ContentID pair?
MIN() and MAX() should return the same number of rows; changing the function should not change the number of rows returned by the query.
Is this query part of a larger query? From the sample data provided, I would assume that this code is only a snippet from a larger action you are trying to perform. Do you later try to join TypeID, ContentID or FolderID to the tables those IDs reference?
If so, this error is likely being caused by another part of your query and not this select statement. If you are using joins or multi-level select statements, you can get a different number of results if the referenced tables do not contain a record for every foreign ID.
Another suggestion: check whether any of the values in your records are NULL. Although this should not affect the GROUP BY, I have sometimes encountered strange behavior when dealing with NULL values.
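One quick way to check that claim against the sample data (a sketch, assuming the table is called t as in the next answer):

select TypeID, ContentID, min(ID) as minId, max(ID) as maxId, count(*) as cnt
from t
group by TypeID, ContentID;

Each output row corresponds to one group, so switching between MIN and MAX changes which ID is returned, never how many rows come back.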
Use ROW_NUMBER
WITH CTE AS
(
    SELECT ID, TypeID, ContentID, FolderID,
           ROW_NUMBER() OVER (PARTITION BY TypeID, ContentID ORDER BY ID) AS rn
    FROM t
)
SELECT ID FROM CTE WHERE rn = 1
Use it with ORDER BY:
select *
from table
group by TypeID, ContentID
order by id
SQLFiddle: http://sqlfiddle.com/#!9/024016/12
Try first(id) instead of min(id):
select first(id)
from table
group by TypeID, ContentID
Does it work?

SQL select value if no corresponding value exists in another table

I have a database which tries to achieve point-in-time information by having a master table and a history table which records when fields in the other table will/did change. e.g.
Table: Employee
Id | Name | Department
-----------------------------
0 | Alice | 1
1 | Bob | 1
Table: History
ChangeDate | Field | RowId | NewValue
---------------------------------------------
05/05/2009 | Department | 0 | 2
That records that employee 0 (Alice) will move to department 2 on 05/05/2009.
I want to write a query to determine the employee's department on a particular date. So it needs to:
Find the first history record for that field and employee before the given date
If none exists then default to the value currently in the master employee table.
How can I do this? My intuition is to select the first row of a result set which has all suitable history records reverse ordered by date and with the value in the master table last (so it's only the first result if there are no suitable history records), but I don't have the required SQL-fu to achieve this.
Note: I am conscious that this may not be the best way to implement this system - I am not able to change this in the short term - though if you can suggest a better way to implement this I'd be glad to hear it.
SELECT COALESCE(
    (
        SELECT newValue
        FROM history
        WHERE field = 'Department'
          AND rowID = ID
          AND changeDate =
              (
                  SELECT MAX(changedate)
                  FROM history
                  WHERE field = 'Department'
                    AND rowID = ID
                    AND changeDate <= '01/01/2009'
              )
    ), department)
FROM employee
WHERE id = #id
In both Oracle and MS SQL, you can also use this:
SELECT COALESCE(newValue, department)
FROM (
    SELECT e.*, h.*,
           ROW_NUMBER() OVER (PARTITION BY e.id ORDER BY changeDate DESC) AS rn
    FROM employee e
    LEFT OUTER JOIN
         history h
    ON   field = 'Department'
    AND  rowID = ID
    AND  changeDate <= '01/01/2009'
    WHERE e.id = #id
) q
WHERE rn = 1
Note, though, that ROWID is a reserved word in Oracle, so you'll need to rename this column when porting.
This should work:
select iif(history.newvalue is null, employee.department, history.newvalue)
       as Department
from employee left outer join history on history.RowId = employee.Id
and history.changedate < '2008-05-20' -- (i.e. the given date)
and history.changedate = (select max(changedate) from history h1
                          where h1.RowId = history.RowId
                            and h1.changedate < '2008-05-20')

How do I use a join to query two tables and get all rows from one, and related rows from the other?

Simplified for this example, I have two tables, groups and items.
items (
id,
groupId,
title
)
groups (
id,
groupTitle,
externalURL
)
The regular query I'm using goes something like this:
SELECT
i.`id`,
i.`title`,
g.`id` as 'groupId',
g.`groupTitle`,
g.`externalURL`
FROM
items i INNER JOIN groups g ON (i.`groupId` = g.`id`)
However, I need to modify this now, because groups which specify an externalURL will not have any corresponding records in the items table (since their items are stored externally). Is it possible to do some sort of join so that the output looks kinda like this:
items:
id title groupId
----------------------
1 Item 1 1
2 Item 2 1
groups
id groupTitle externalURL
-------------------------------
1 Group 1 NULL
2 Group 2 something
3 Group 3 NULL
Query output:
id title groupId groupTitle externalURL
---------------------------------------------------
1 Item 1 1 Group 1 NULL
2 Item 2 1 Group 1 NULL
NULL NULL 2 Group 2 something
-- note that group 3 didn't show up because it had no items OR externalURL
Is that possible in one SQL query?
This is exactly what an outer join is for: return all the rows from one table, whether or not there is a matching row in the other table. In those cases, return NULL for all the columns of the other table.
The other condition you can take care of in the WHERE clause.
SELECT
i.`id`,
i.`title`,
g.`id` as 'groupId',
g.`groupTitle`,
g.`externalURL`
FROM
items i RIGHT OUTER JOIN groups g ON (i.`groupId` = g.`id`)
WHERE i.`id` IS NOT NULL OR g.`externalURL` IS NOT NULL;
Only when both i.id and g.externalURL are NULL should the whole row of the joined result set be excluded.