SQL Server, renumber int column sequentially on two tables

I have the following two tables and data:
CREATE TABLE customers
([id] int, [name] varchar(10), [sex] varchar(1))
;
INSERT INTO customers
([id], [name], [sex])
VALUES
(1050, 'John Doe', 'M'),
(1060, 'Jane Doe', 'F'),
(1031, 'Joe Bloggs', 'M')
;
CREATE TABLE orders
([id] int, [fk] int, [product] varchar(13))
;
INSERT INTO orders
([id], [fk], [product])
VALUES
(51, 1050, 'Blue car'),
(57, 1050, 'Yellow car'),
(43, 1060, 'Pink bus'),
(32, 1031, 'Black pen'),
(87, 1031, 'Orange jacket')
;
What I want to do is renumber the id column in both tables sequentially, starting from 1.
The linked rows in the orders table must also be renumbered, and the foreign key in this table must match the new number in the customers table.
So the data needs to end up looking like this:
ID NAME SEX
0001 John Doe M
0002 Jane Doe F
0003 Joe Bloggs M
ID FK PRODUCT
0001 0001 Blue car
0002 0001 Yellow car
0003 0002 Pink bus
0004 0003 Black pen
0005 0003 Orange jacket
How would I go about doing this in SQL Server?

Absolutely no need to resort to a cursor (yikes!) here... you need something like this:
-- Table "Customers" - rename column "id" to "old_id"
EXEC sp_rename 'dbo.Customers.id', 'old_id'
-- add new "id" column
ALTER TABLE Customers ADD id INT
-- fill new "id" column with sequential values, ordered by the "old_id" value
;WITH CTE AS
(
SELECT old_id, new_id = ROW_NUMBER() OVER (ORDER BY old_id)
FROM Customers
)
UPDATE dbo.Customers
SET id = CTE.new_id
FROM CTE
WHERE CTE.old_id = dbo.Customers.old_id
-- Table "Orders" - rename column "id" to "old_id"
EXEC sp_rename 'dbo.orders.id', 'old_id'
-- add new "id" column
ALTER TABLE Orders ADD id INT
-- update the FK references to the new "id" values in table "dbo.Customers"
UPDATE dbo.Orders
SET fk = c.id
FROM dbo.Customers c
WHERE dbo.Orders.fk = c.old_id
-- fill new "id" column with sequential values, ordered by the "old_id" value
;WITH CTE AS
(
SELECT old_id, new_id = ROW_NUMBER() OVER (ORDER BY old_id)
FROM dbo.Orders
)
UPDATE dbo.Orders
SET id = CTE.new_id
FROM CTE
WHERE CTE.old_id = dbo.Orders.old_id
-- drop the old, no longer required columns "old_id" from both tables
ALTER TABLE dbo.Customers DROP COLUMN old_id
ALTER TABLE dbo.Orders DROP COLUMN old_id
That'll work if you don't have any other FK relationships referencing one of those two tables. If you do, you might need to disable or drop those FK relationships before you start this upgrade script.
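For example, assuming a hypothetical referencing table dbo.OrderLines with a constraint FK_OrderLines_Orders pointing at dbo.Orders, you could disable it before the script and re-validate it afterwards:
-- Hypothetical table and constraint names - adjust to your schema
ALTER TABLE dbo.OrderLines NOCHECK CONSTRAINT FK_OrderLines_Orders
-- ... run the renumbering script above (and remap dbo.OrderLines the same way as dbo.Orders.fk) ...
ALTER TABLE dbo.OrderLines WITH CHECK CHECK CONSTRAINT FK_OrderLines_Orders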

If this is a one-time maintenance fix you want to do, I would do this by:
disable all the foreign keys
add an oldid column to your customers table
put the current ID values into the oldid field
replace all the customer IDs with new IDs
update the orders table with the now-updated IDs using a join/update on the oldid field from the customer table
drop the oldid column
add back your foreign key constraints.
Voila.

Remove the foreign key.
Create a customers2 table with an extra column OldId and with column Id as IDENTITY.
Insert all rows from customers into customers2 (mapping customers.Id to customers2.OldId).
Update orders.fk, setting orders.fk = customers2.Id.
Do the same with the orders table as you did with the customers in points 2 and 3.
Drop customers, drop orders, rename customers2, rename orders2.
Recreate the foreign key.
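A rough sketch of the customers half of these steps, assuming the tables from the question (orders would be handled the same way):
-- Staging table with an IDENTITY id and the old id kept for mapping
CREATE TABLE dbo.customers2
([id] int IDENTITY(1,1), [old_id] int, [name] varchar(10), [sex] varchar(1))
INSERT INTO dbo.customers2 (old_id, name, sex)
SELECT id, name, sex
FROM dbo.customers
ORDER BY id   -- IDENTITY assigns 1, 2, 3, ... in this order
-- Point the orders at the new customer ids
UPDATE o
SET o.fk = c2.id
FROM dbo.orders o
JOIN dbo.customers2 c2 ON c2.old_id = o.fk
-- Then repeat the pattern for orders (orders2 with an IDENTITY id), drop the old
-- tables, drop the old_id columns and rename the new tables with sp_rename.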

Related

How do I get distinct combinations of one XRef column related to any value in the other XRef column

I need to select the count of unique value combinations of column B in an XRef table which is grouped by column A.
Consider the following schema and data, which represents a simple family structure. Each child has a father and mother:
TABLE Father
FatherID | Name
1        | Alex
2        | Bob
TABLE Mother
MotherID | Name
1        | Alice
2        | Barbara
TABLE Child
ChildID | FatherID | MotherID    | Name
1       | 1 (Alex) | 1 (Alice)   | Adam
2       | 1 (Alex) | 1 (Alice)   | Billy
3       | 1 (Alex) | 2 (Barbara) | Celine
4       | 2 (Bob)  | 2 (Barbara) | Derek
The distinct combinations of mothers for each father are:
Alex (Alice, Barbara)
Bob (Barbara)
In all there are two distinct combinations of mothers:
Alice, Barbara
Barbara
The query I want to write would return the count of those distinct combinations of mother, regardless of which father they are associated with:
UniqueMotherGroups
2
I was able to do this successfully using the STRING_AGG function, but it feels clunky. It also needs to operate over millions of rows and is quite slow at the moment. Is there a more idiomatic way to do this with set operations instead?
Here is my working example:
-- Drop pre-existing tables
DROP TABLE IF EXISTS dbo.Child;
DROP TABLE IF EXISTS dbo.Father;
DROP TABLE IF EXISTS dbo.Mother;
-- Create family tables.
CREATE TABLE dbo.Father
(
FatherID INT NOT NULL
, Name VARCHAR(50) NOT NULL
);
ALTER TABLE dbo.Father
ADD CONSTRAINT PK_Father
PRIMARY KEY CLUSTERED (FatherID);
ALTER TABLE dbo.Father SET (LOCK_ESCALATION = TABLE);
CREATE TABLE dbo.Mother
(
MotherID INT NOT NULL
, Name VARCHAR(50) NOT NULL
);
ALTER TABLE dbo.Mother
ADD CONSTRAINT PK_Mother
PRIMARY KEY CLUSTERED (MotherID);
ALTER TABLE dbo.Mother SET (LOCK_ESCALATION = TABLE);
CREATE TABLE dbo.Child
(
ChildID INT NOT NULL
, FatherID INT NOT NULL
, MotherID INT NOT NULL
, Name VARCHAR(50) NOT NULL
);
ALTER TABLE dbo.Child
ADD CONSTRAINT PK_Child
PRIMARY KEY CLUSTERED (ChildID);
CREATE NONCLUSTERED INDEX IX_Parents ON dbo.Child (FatherID, MotherID);
ALTER TABLE dbo.Child
ADD CONSTRAINT FK_Child_Father
FOREIGN KEY (FatherID)
REFERENCES dbo.Father (FatherID);
ALTER TABLE dbo.Child
ADD CONSTRAINT FK_Child_Mother
FOREIGN KEY (MotherID)
REFERENCES dbo.Mother (MotherID);
-- Insert two children with the same parents
INSERT INTO dbo.Father
(
FatherID
, Name
)
VALUES
(1, 'Alex')
, (2, 'Bob')
, (3, 'Charlie')
INSERT INTO dbo.Mother
(
MotherID
, Name
)
VALUES
(1, 'Alice')
, (2, 'Barbara');
INSERT INTO dbo.Child
(
ChildID
, FatherID
, MotherID
, Name
)
VALUES
(1, 1, 1, 'Adam')
, (2, 1, 1, 'Billy')
, (3, 1, 2, 'Celine')
, (4, 2, 2, 'Derek')
, (5, 3, 1, 'Eric');
-- CTE Gets distinct combinations of parents
WITH distinctParentCombinations (FatherID, MotherID)
AS (SELECT children.FatherID
, children.MotherID
FROM dbo.Child as children
GROUP BY children.FatherID
, children.MotherID
)
-- CTE uses STRING_AGG to get unique combinations of mothers.
, motherGroups (Mothers)
AS (SELECT STRING_AGG(CONVERT(VARCHAR(MAX), distinctParentCombinations.MotherID), '-') WITHIN GROUP (ORDER BY distinctParentCombinations.MotherID) AS Mothers
FROM distinctParentCombinations
GROUP BY distinctParentCombinations.FatherID
)
-- Remove the COUNT function to see the actual combinations
SELECT COUNT(motherGroups.Mothers) AS UniqueMotherGroups
FROM motherGroups
-- Clean up the example
DROP TABLE IF EXISTS dbo.Child;
DROP TABLE IF EXISTS dbo.Father;
DROP TABLE IF EXISTS dbo.Mother;
You have a great explanation and setup of your "problem case".
Your setup runs great in (for example) tempdb.
You have solved the problem in a nice way, and I don't think you can optimize it much further if you are going to calculate the mother groups every time you run the query.
There is one small mistake though: you must do a COUNT(DISTINCT motherGroups.Mothers) in your final count.
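That is, the final part of the working example above becomes:
SELECT COUNT(DISTINCT motherGroups.Mothers) AS UniqueMotherGroups
FROM motherGroups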
Since you mention millions of rows, I would suggest a slightly different approach.
If you aggregate the mother groups as soon as there is a change in the Child table, your query can run fast every time - even with millions of rows.
The kind of queries you want to run is seldom run only once, so it would be nice if the heavy work is already done.
Usually I prefer not to use triggers, because you get extra logic in a place where it could be hard to find and debug.
But sometimes triggers are nice to have, especially when you are not able to change the source code running on the clients.
So, my solution is to add a new column to the Father table and to create a trigger which (re)generates the mother group each time there is a change in the Child table.
This way, the hard aggregation work for each father is done as soon there is a change, and you don't have to aggregate when you run your query.
Since you already have millions of rows, we also have to update these existing rows.
I have used SQL Server 2019 for this solution.
*** The solution ***
Add 1 or 2 new columns to the Father table.
Whether you should add 1 or 2 depends on your preferences:
"Do I want to see the aggregated mother groups for debugging purposes, or do I just trust the hashed values?"
Column 1: Hashed value of the aggregated mother group for each Father row.
The hashed value is VARBINARY and is at least 32 bytes, but we will use VARBINARY(1600):
1600 is less than 1700, which is the max nonclustered index key size, so we will not have any problems indexing the column.
Since the hash value is in blocks of 32 bytes, a value of 1600 will cover a really, really, really long aggregated mother group.
-- Column 1: Hashed value of the aggregated mother group for each Father row.
alter table Father add MotherHash varbinary(1600)
create index IX_MotherHash on Father(MotherHash)
Column 2: This column is optional, and depends on your preferences.
The column could be nice to have for debugging purposes, if any questions are raised about the result.
Which VARCHAR length you should use depends on your real data.
MAX? Then you have no problems storing the mother groups, but you might have problems indexing it, since 1700 is the max key size for a nonclustered index. But maybe you don't need to index it?
1700? Then you are able to index the column, but depending on your real data, will this cover the biggest mother group?
Why indexing? If you want to list the aggregated mother groups, it could be faster to read the index than the whole table.
As said, this depends on you (and your data). If we have no need to see the aggregated mother groups, then we don't need this column at all.
For this demo/solution we will add the column for debugging purposes, without any indexing.
-- Column 2: This column is more optional, and depends on your preferences.
alter table Father add MotherGroup varchar(MAX)
go
Create a trigger on the Child table.
It will handle all inserts, updates and deletes in the Child table.
create or alter trigger trIUD_Child on Child
after insert, update, delete
as
begin
set nocount on
-- Get all FatherIDs from the Inserted and Deleted table.
-- An ordinary Temp table is created with a clustered index to get SEEK performance later.
-- The table might also have more than 100 rows, where table variables are not recommended.
declare @numRowsInInsertedDeleted int
create table #rowsInInsertedDeleted(rowId int identity(1, 1), FatherID int)
create unique clustered index ix on #rowsInInsertedDeleted(rowId)
insert #rowsInInsertedDeleted(FatherID)
select distinct f.FatherID
from
(
select i.FatherID from inserted i
union all
select i.FatherID from deleted i
) f
select @numRowsInInsertedDeleted = max(rowId) from #rowsInInsertedDeleted
-- We have to loop each of the FatherIDs, since we might have several rows in the Inserted and Deleted tables.
declare @rowId int = 0
while (@rowId < @numRowsInInsertedDeleted)
begin
-- Get the father for the next row.
select @rowId += 1
declare @fatherId int
select @fatherId = r.FatherID
from #rowsInInsertedDeleted r
where r.rowId = @rowId
-- Aggregate the mothers for this father.
declare @motherGroup varchar(max) = ''
select @motherGroup += ',' + cast(c.MotherID as varchar)
from Child c
where c.FatherID = @fatherId
group by c.MotherID
order by c.MotherID
-- Update the father record.
-- Any empty strings are handled automatically, skip the leading ','.
update Father
set MotherGroup = substring(@motherGroup, 2, 2147483647),
MotherHash = HASHBYTES('SHA2_256', @motherGroup)
where FatherID = @fatherId
end
end
go
Updating existing rows
Since you already have millions of rows, we must aggregate the mother groups for these existing rows.
If you don't have the disk space for logging the update of the whole table, maybe you should take your database out of AG and switch to Simple recovery model for this task?
In that case you should also modify the update with a WHERE clause to update only parts of the table, and run the update for each part until the whole table is updated.
Example: update Child set FatherID = FatherID where FatherID between 1 and 1000000
Note: This update statement could block access to the Child table for other users.
-- Aggregate the mother groups for the existing rows.
-- This could take minutes to complete, depending on the number of rows.
-- NOTE: This update statement could block access to the Child table for other users.
update Child set FatherID = FatherID
That's it!
You should now be able to quickly get the mother groups on existing rows, and also after future changes in the Child table.
-- Voila - now you can get the unique mother groups any time at a fast speed.
select count(distinct MotherHash) from Father
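If you also added the optional MotherGroup column, you can inspect the groups themselves for debugging:
-- Optional: list the aggregated groups (uses the debug column added above)
select MotherGroup, count(*) as NumberOfFathers
from Father
group by MotherGroup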
Thank you for posting such a comprehensive setup for the test data. However, I'm not running any CREATE/DROP statements against my DB, so I converted those tables into table variables. Using your data, I came up with the following query. Just change the table names back to your dbo. names and you should be able to test it in your environment. I basically concatenate every father/mother combo into a text string using FOR XML PATH, then count up all the distinct combos. If you find an error in my logic, let me know. I'm just tossing this into the ring of possible solutions.
WITH distinctCombos AS (
SELECT DISTINCT
c.FatherID, c.MotherID
FROM #Child as c
) , motherComboCount AS (
SELECT
f.FatherID
, f.[Name]
, STUFF((
SELECT
',' + CAST(dc.MotherID as nvarchar)
FROM distinctCombos as dc
WHERE dc.FatherID = f.FatherID
ORDER BY dc.MotherID ASC
FOR XML PATH('')
),1,1,'') as motherList
FROM #Father as f
)
SELECT
COUNT(DISTINCT motherList) as UniqueMotherGroups
FROM motherComboCount as mcc
To save a bit of compute power, remove the STUFF function as it's not necessary for the comparison... it just makes the list nicer to look at if displaying... and I'm in the habit of using it.
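For example, the motherList expression without STUFF keeps the leading comma, which is the same for every list and so does not affect the DISTINCT comparison:
, (SELECT ',' + CAST(dc.MotherID as nvarchar)
   FROM distinctCombos as dc
   WHERE dc.FatherID = f.FatherID
   ORDER BY dc.MotherID ASC
   FOR XML PATH('')) as motherList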
It looks like the main differences between our methods is the use of FOR XML PATH vs STRING_AGG (I'm still on older SQL.) And I use DISTINCT twice instead of GROUP BY. If you have a larger dataset to test against, let me know how the 2 methods compare. I'm trying to think of a completely set-based method but I can't see it at the moment.
Update: Method 2.
Here's an idea I had using recursive CTEs to build the distinct mother combinations. In your example data, there are only 2 mothers per father. So there would be a total of 4 set-based queries performed (first CTE, 2 queries in the recursive CTE and the final SELECT).
WITH uniqueCombo as (
SELECT DISTINCT
c.FatherID
, c.MotherID
, ROW_NUMBER() OVER(PARTITION BY c.FatherID ORDER BY c.MotherID) as row_num
FROM #Child as c
), combos as (
SELECT
uc.FatherID
, uc.MotherID
, CAST(uc.MotherID as nvarchar(max)) as [path]
, row_num
, 0 as hierarchy_num
FROM uniqueCombo as uc
WHERE uc.row_num = 1
UNION ALL
SELECT
uc.FatherID
, uc.MotherID
, co.[path] + ',' + CAST(uc.MotherID as nvarchar(max))
, uc.row_num
, co.hierarchy_num + 1 as hierarchy_num
FROM uniqueCombo as uc
INNER JOIN combos as co
ON co.FatherID = uc.FatherID
--AND co.MotherID <> uc.MotherID
AND co.row_num + 1 = uc.row_num
), rankedCombos as (
SELECT
c.[path]
, ROW_NUMBER() OVER(PARTITION BY c.FatherID ORDER BY c.hierarchy_num DESC) as row_num
FROM combos as c
)
SELECT COUNT(DISTINCT rc.[path]) as UniqueMotherGroups
FROM rankedCombos as rc
WHERE rc.row_num = 1
Update 2:
I had another idea to use a PIVOT to transpose the records so that the FatherID would be in the left-most column with the MotherIDs as the column headers. To make that work with a dynamic list of MotherIDs, you have to use a dynamic PIVOT/dynamic SQL. (FatherID isn't really needed in the PIVOT so it's not included in the PIVOT query. I just had to describe what the goal is.) After the pivot, you can SELECT DISTINCT to get the unique mother combinations. Then the last SELECT is to get the COUNT. This one I ran in SQL Fiddle:
MS SQL Server 2017 Schema Setup:
-- Create family tables.
CREATE TABLE dbo.Father
(
FatherID INT NOT NULL
, Name VARCHAR(50) NOT NULL
);
ALTER TABLE dbo.Father
ADD CONSTRAINT PK_Father
PRIMARY KEY CLUSTERED (FatherID);
ALTER TABLE dbo.Father SET (LOCK_ESCALATION = TABLE);
CREATE TABLE dbo.Mother
(
MotherID INT NOT NULL
, Name VARCHAR(50) NOT NULL
);
ALTER TABLE dbo.Mother
ADD CONSTRAINT PK_Mother
PRIMARY KEY CLUSTERED (MotherID);
ALTER TABLE dbo.Mother SET (LOCK_ESCALATION = TABLE);
CREATE TABLE dbo.Child
(
ChildID INT NOT NULL
, FatherID INT NOT NULL
, MotherID INT NOT NULL
, Name VARCHAR(50) NOT NULL
);
ALTER TABLE dbo.Child
ADD CONSTRAINT PK_Child
PRIMARY KEY CLUSTERED (ChildID);
CREATE NONCLUSTERED INDEX IX_Parents ON dbo.Child (FatherID, MotherID);
ALTER TABLE dbo.Child
ADD CONSTRAINT FK_Child_Father
FOREIGN KEY (FatherID)
REFERENCES dbo.Father (FatherID);
ALTER TABLE dbo.Child
ADD CONSTRAINT FK_Child_Mother
FOREIGN KEY (MotherID)
REFERENCES dbo.Mother (MotherID);
-- Insert two children with the same parents
INSERT INTO dbo.Father
(
FatherID
, Name
)
VALUES
(1, 'Alex')
, (2, 'Bob')
, (3, 'Charlie')
INSERT INTO dbo.Mother
(
MotherID
, Name
)
VALUES
(1, 'Alice')
, (2, 'Barbara');
INSERT INTO dbo.Child
(
ChildID
, FatherID
, MotherID
, Name
)
VALUES
(1, 1, 1, 'Adam')
, (2, 1, 1, 'Billy')
, (3, 1, 2, 'Celine')
, (4, 2, 2, 'Derek')
, (5, 3, 1, 'Eric');
Query 1:
DECLARE @cols AS nvarchar(MAX)
DECLARE @query AS nvarchar(MAX)
SET @cols = STUFF((
SELECT DISTINCT ',' + QUOTENAME(m.MotherID)
FROM Mother as m
FOR XML PATH(''))
,1,1,'')
SET @query = 'SELECT COUNT(mCount) as UniqueMotherGroups FROM (
SELECT DISTINCT ' + @cols + ', 1 as mCount FROM (
SELECT ' + @cols + '
FROM (
SELECT
c.FatherID
, c.MotherID
, 1 as mID
FROM child as c
) x
PIVOT
(
MAX(mID)
FOR MotherID in (' + @cols + ')
) p
) as m
) as mg'
--SELECT @query
Exec(@query)
Results:
| UniqueMotherGroups |
|--------------------|
| 3 |
UPDATE 3: Here's one other idea... create a results table with a unique constraint and with IGNORE_DUP_KEY=ON. You could use this in a function or stored procedure, or set up a trigger to put the mother combinations into a unique-combo-holding table. With IGNORE_DUP_KEY=ON, you can insert every combo and only the unique combos will remain. Then just do a count of all the rows.
--Create a table to hold the results:
CREATE TABLE results (
ChildID int not null
, UniqueCombos nvarchar(50) not null
PRIMARY KEY WITH (IGNORE_DUP_KEY = ON)
);
--Insert all combos into the results table. The unique constraint will cause only unique entries to remain.
INSERT INTO results (ChildID, UniqueCombos)
SELECT DISTINCT
c.ChildID
, (
SELECT ',' + CAST(MotherID as nvarchar(500))
FROM Child as c2
WHERE c2.ChildID = c.ChildID
ORDER BY c2.MotherID
FOR XML PATH('')
) as mother_combos
FROM Child as c
;
--Count up all the rows in the results table. Since these are all unique combinations, it should be fast to sum.
SELECT COUNT(*)
FROM results;
If you are willing to assume a maximum number of mothers per father (here 7), you may try:
select count(*) as UniqueMotherGroups from (
select distinct [1], [2], [3], [4], [5], [6], [7] from (
select FatherID, row_number() over(partition by FatherID order by MotherID) as rn, MotherID
from (
select distinct FatherID, MotherID
from dbo.Child
) d
) src
pivot (
max(MotherID) for rn in ([1], [2], [3], [4], [5], [6], [7])
) p
) g
;
UniqueMotherGroups
------------------
3
Here is one idea. Instead of using the precise STRING_AGG, you can calculate a hash / checksum of the group. You don't need to know the exact composition of the group, you just need to distinguish between different groups. Calculating the hash may be faster than concatenating strings.
SQL Server has a function CHECKSUM_AGG
You can write your own hashing function with CLR.
Sample data
CREATE TABLE #Child
(
ChildID INT NOT NULL IDENTITY PRIMARY KEY
,FatherID INT NOT NULL
,MotherID INT NOT NULL
,Name VARCHAR(50) NOT NULL
);
INSERT INTO #Child
(
FatherID
,MotherID
,Name
)
VALUES
(1, 1, 'Adam')
,(1, 1, 'Billy')
,(1, 2, 'Celine')
,(2, 2, 'Derek')
,(3, 1, 'Eric')
,(4, 1, 'A')
,(4, 1, 'B')
,(4, 2, 'C')
,(4, 2, 'D')
,(4, 2, 'E')
,(5, 2, 'F')
,(6, 2, 'G')
;
Query
WITH
distinctParentCombinations
AS
(
SELECT
FatherID
,MotherID
FROM #Child
GROUP BY
FatherID
,MotherID
)
,motherGroups
AS
(
SELECT
FatherID
,CHECKSUM_AGG(MotherID) AS MotherGroup
FROM distinctParentCombinations
GROUP BY
FatherID
)
SELECT COUNT(DISTINCT MotherGroup) AS UniqueMotherGroups
FROM motherGroups
;
Result
+--------------------+
| UniqueMotherGroups |
+--------------------+
| 3 |
+--------------------+
You need to compare performance of all methods on your actual data.
Obviously, with CHECKSUM_AGG it is possible that some of the groups will be missed. There is a chance that two different groups will generate the same checksum.
You know better if this is acceptable.
A general way to speed up calculations is to have some of the results pre-calculated. In your case, for the first part you can create an indexed view as follows:
CREATE OR ALTER VIEW vw_distinctParentCombinations WITH SCHEMABINDING AS
SELECT children.FatherID
, children.MotherID
,COUNT_BIG(*) AS [wifes_count]
FROM dbo.Child as children
GROUP BY children.FatherID
, children.MotherID
GO
CREATE UNIQUE CLUSTERED INDEX IX_vw_distinctParentCombinations ON vw_distinctParentCombinations
(
FatherID,MotherID
);
Then in your initial query, you can avoid the first CTE:
-- CTE Gets distinct combinations of parents
WITH motherGroups (Mothers)
AS
(SELECT STRING_AGG(CONVERT(VARCHAR(MAX), distinctParentCombinations.MotherID), '-') WITHIN GROUP (ORDER BY distinctParentCombinations.MotherID) AS Mothers
FROM vw_distinctParentCombinations distinctParentCombinations WITH(NOEXPAND)
GROUP BY distinctParentCombinations.FatherID
)
-- Remove the COUNT function to see the actual combinations
SELECT COUNT(motherGroups.Mothers) AS UniqueMotherGroups
FROM motherGroups;
This avoids the initial read of the large table, and depending on the number of distinct (father, mother) pairs, the view can be significantly smaller than the underlying table.
Unfortunately, there are a lot of limitations on creating indexed views, and you are not able to create one for the second CTE.
If we look at this issue from a different angle, we can simply get the count of mother groups with this query:
SELECT Count(distinct ConcatMothers) UniqueMothersCount from(
SELECT FatherID, concat(FatherID,'-',SUM(MotherID)) ConcatMothers
FROM dbo.Child
GROUP BY FatherID) t;
Or you can even use DENSE_RANK() like this:
SELECT Max(RankMothers) UniqueMothersCount from(
SELECT FatherID, DENSE_RANK() over (order by concat(FatherID,'-',SUM(MotherID))) RankMothers
FROM dbo.Child
GROUP BY FatherID) t;
Performance is hard to measure here because the dataset is small, but since we have only one column in the GROUP BY and MotherID is in the SELECT, maybe we can change the index as below:
CREATE NONCLUSTERED INDEX IX_Parents ON dbo.Child (FatherID) Include(MotherID);
but you need to check it on your dataset.

SQL Server, self referencing foreign compound key

I have a table with columns task_id (pk), client_id, parent_task_id, title. In other words, tasks are owned by clients, and some tasks have child tasks.
For example, client 7 may have a task "wash the car," with child tasks "vacuum carpet" and "wipe dashboard."
I want a constraint so that a task and its children are always owned by the same client.
To do this, through a bit of experimentation, I created a self-referencing foreign key on (client_id, parent_task_id) referencing (client_id, task_id). At first I received an error ("There are no primary or candidate keys in the referenced table that match the referencing column list in the foreign key."), so I added a unique key on the columns task_id, client_id. Now it seems to work.
I am wondering if this is the best solution (or at least reasonable one) to enforce this constraint. Any thoughts would be appreciated. Thanks much!
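For reference, a minimal sketch of the setup described above (the table and constraint names are my own assumptions):
CREATE TABLE dbo.task
(
task_id int NOT NULL CONSTRAINT PK_task PRIMARY KEY,
client_id int NOT NULL,
parent_task_id int NULL,
title varchar(100) NOT NULL,
CONSTRAINT UQ_task_client_task UNIQUE (client_id, task_id),
CONSTRAINT FK_task_parent FOREIGN KEY (client_id, parent_task_id)
REFERENCES dbo.task (client_id, task_id)
)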
A 'parent' record would not need a [parent_task_id]
TASK ID | CLIENT ID | PARENT TASK ID | TITLE
1 | 7 | NULL | wash the car
(To find all of your parent records, SELECT * FROM TABLE WHERE [parent_task_id] is null)
A 'child' record would need a [parent_task_id], but not a [client_id] (because, as you stipulate, a child has the same client as its parent).
TASK ID | CLIENT ID | PARENT TASK ID | TITLE
2 | NULL | 1 | vacuum carpet
3 | NULL | 1 | wipe dashboard
In this way, your self-referencing foreign key is all the constraint you need. No constraint / rule concerning [client_id] on child records is necessary, because all [client_id] values on child records will be ignored, in favor of the [client_id] on the parent record.
For example, if you want to know what the [client_id] is for a child record:
SELECT
c.task_id,
p.client_id,
c.title
FROM
table p --parent
INNER JOIN table c --child
ON p.task_id = c.parent_task_id
UPDATE
(How to query for the client ID of a grand-child)
--Create and populate your table (using a table var in this sample)
DECLARE #table table (task_id int, client_id int, parent_task_id int, title varchar(50))
INSERT INTO #table VALUES (1,7,NULL,'wash the car')
INSERT INTO #table VALUES (2,NULL,1,'vacuum carpet')
INSERT INTO #table VALUES (3,NULL,1,'wipe dashboard')
INSERT INTO #table VALUES (4,NULL,2,'Step 1: plug-in the vacuum')
INSERT INTO #table VALUES (5,NULL,2,'Step 2: turn-on the vacuum')
INSERT INTO #table VALUES (6,NULL,2,'Step 3: use the vacuum')
INSERT INTO #table VALUES (7,NULL,2,'Step 4: turn-off the vacuum')
INSERT INTO #table VALUES (8,NULL,2,'Step 5: empty the vacuum')
INSERT INTO #table VALUES (9,NULL,2,'Step 6: put-away the vacuum')
INSERT INTO #table VALUES (10,NULL,3,'Step 1: spray cleaner on the rag')
INSERT INTO #table VALUES (11,NULL,3,'Step 2: use the rag')
INSERT INTO #table VALUES (12,NULL,3,'Step 3: put-away the cleaner')
INSERT INTO #table VALUES (13,NULL,3,'Step 4: toss the rag in the laundry bin')
--Determine which grandchild you want the client_id for
DECLARE #task_id int
SET #task_id = 8 -- grandchild's ID to use to find client_id
--Create your CTE (this is the recursive part)
;WITH myList (task_id, client_id, parent_task_id, title)
AS
(
SELECT a.task_id, a.client_id, a.parent_task_id, a.title
FROM #table a
WHERE a.task_id = #task_id
UNION ALL
SELECT a.task_id, a.client_id, a.parent_task_id, a.title
FROM #table a
INNER JOIN myList m
ON a.task_id = m.parent_task_id
)
--Query your CTE
SELECT task_id, client_id, title FROM myList WHERE client_id is not null
In this example, I used a grandchild's task_id (8 -- 'empty the vacuum') to find its highest-level parent, which holds the client_id.
You can remove the WHERE clause from the last step if you want to see each parent, parent's parent, and so on up to the first-parent's record.

Trigger After Update SQL

I have a Customer table. To simplify, let's say I have two columns:
Id
Name
I have a second table (Log) that I want to update ONLY when the Id column of my customer changes. Yes, you heard me right: the primary key (Id) will change!
I took a stab at it, but the NewId that gets pulled is the first record in the Customer table, not the updated record:
ALTER TRIGGER [dbo].[tr_ID_Modified]
ON [dbo].[customer]
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON;
IF UPDATE (Id)
BEGIN
UPDATE [log]
SET NewId = Id
FROM customer
END
END
Many would make the argument that if you are changing PK values, you need to rethink the database/table design. However, if you need a quick & dirty fix, add a column to the customer table that is unique (and not null). Use this column to join between the [inserted] and [deleted] tables in your update trigger. Here's a sample script:
CREATE TABLE dbo.Customer (
Id INT CONSTRAINT PK_Customer PRIMARY KEY,
Name VARCHAR(128),
UQColumn INT IDENTITY NOT NULL CONSTRAINT UQ_Customer_UQColumn UNIQUE
)
CREATE TABLE dbo.[Log] (
CustomerId INT NOT NULL,
LogMsg VARCHAR(MAX)
)
INSERT INTO dbo.Customer
(Id, Name)
VALUES
(1, 'Larry'),
(2, 'Curley'),
(3, 'Moe')
INSERT INTO dbo.[Log]
(CustomerId, LogMsg)
VALUES
(1, 'Larry is cool'),
(1, 'Larry rocks'),
(2, 'Curley cracks me up'),
(3, 'Moe is mean')
CREATE TRIGGER [dbo].[tr_Customer_Upd]
ON [dbo].[customer]
FOR UPDATE
AS
BEGIN
UPDATE l
SET CustomerId = i.Id
FROM inserted i
JOIN deleted d
ON i.UQColumn = d.UQColumn
JOIN [Log] l
ON l.CustomerId = d.Id
END
SELECT *
FROM dbo.[Log]
UPDATE dbo.Customer
SET Id = 4
WHERE Id = 1
SELECT *
FROM dbo.[Log]

Split One table into Two in SQL Server 2008

I need to break one table (the structure was built by someone else, but I need the data; it contains thousands of records) into two new tables I created.
Table Name: Customers_Info (Old Table)
FullName Telephone Address
Adam Johnson 01555777 Michigan
John Smith 01222333 New York
John Smith 01222333 New Jersey
Lara Thomas 01888999 New Mexico
The above is the old table. Now I have created two tables to hold the data: one table for customers with a default address, and the other table to hold additional addresses. In the shown example I need 3 persons to be listed in the Customers table, and the address of "John Smith" (the second one, New Jersey) to be listed in the Addresses table.
The common field to look at here is "Telephone" and it's unique for every customer.
Here's how the result should be displayed.
Table Name: Customers (New Table)
CustomerID FullName Telephone Default_Address
1 Adam Johnson 01555777 Michigan
2 John Smith 01222333 New York
3 Lara Thomas 01888999 New Mexico
Table Name: Addresses (New Table)
AddressID CustomerID Address
1 2 New Jersey
Of course it was easy to copy all the data into the new Customers table, but what I'm stuck on now is how to remove the duplicates from Customers and insert them into the Addresses table with the CustomerID and Address only.
Thanks!
Give the code below a try and let me know your comments/results.
CREATE TABLE [Customers_Info]
(
FullName VARCHAR(50)
,Telephone VARCHAR(50)
,Address VARCHAR(50)
)
GO
CREATE TABLE Customers
(
CustomerID INT IDENTITY(1,1)
,FullName VARCHAR(50)
,Telephone VARCHAR(50)
,Default_Address VARCHAR(50)
)
GO
ALTER TABLE dbo.Customers ADD CONSTRAINT PK_Customers
PRIMARY KEY CLUSTERED (CustomerID);
GO
CREATE TABLE Addresses
(
AddressID INT IDENTITY(1,1)
,CustomerID INT
,[Address] VARCHAR(50)
)
GO
ALTER TABLE dbo.Addresses ADD CONSTRAINT PK_Addresses
PRIMARY KEY CLUSTERED (AddressID);
GO
ALTER TABLE Addresses ADD CONSTRAINT FK_CustomerID_Addresses_Customers FOREIGN KEY (CustomerID)
REFERENCES dbo.Customers(CustomerID);
GO
INSERT INTO [Customers_Info] VALUES ('Adam Johnson', '01555777', 'Michigan')
INSERT INTO [Customers_Info] VALUES ('John Smith' , '01222333', 'New York')
INSERT INTO [Customers_Info] VALUES ('John Smith' , '01222333', 'New Jersey')
INSERT INTO [Customers_Info] VALUES ('Lara Thomas' , '01888999', 'New Mexico')
INSERT INTO [Customers_Info] VALUES ('Lara Thomas' , '01888999', 'New Mexico1')
INSERT INTO [Customers_Info] VALUES ('Lara Thomas' , '01888999', 'New Mexico2')
INSERT INTO [Customers_Info] VALUES ('Adam Johnson', '01555777', 'Michigan1')
INSERT INTO [Customers_Info] VALUES ('Adam Johnson', '01555777A', 'Michigan')
INSERT INTO [Customers_Info] VALUES ('Adam Johnson', '01555777A', 'Michigan2')
GO
SELECT * FROM [Customers_Info]
--DELETE FROM Customers
--TRUNCATE TABLE Addresses
------------------------------------------------------------------------------------------------------------------
;WITH a as
(
SELECT FullName,Telephone,[Address],
rn = row_number() over (partition by FullName, Telephone order by FullName)
FROM [Customers_Info]
)
INSERT INTO Customers (FullName, Telephone, Default_Address) SELECT
FullName,Telephone,[Address] from a where rn = 1
------------------------------------------------------------------------------------------------------------------
;WITH b as
(
SELECT FullName,Telephone,[Address],
rn = row_number() over (partition by FullName, Telephone order by FullName)
FROM [Customers_Info]
)
INSERT INTO Addresses (CustomerID, [Address]) SELECT CI.CustomerID,b.[Address] FROM Customers CI
INNER JOIN b ON b.FullName=CI.FullName AND b.Telephone=CI.Telephone
WHERE b.rn>1
SELECT * FROM Customers
SELECT * FROM Addresses
DROP TABLE [Customers_Info]
GO
DROP TABLE Addresses
GO
DROP TABLE Customers
GO
It would be more normalized if you broke it up into one more table, for three total tables. Have a Customers table that has only customer data, an Address table (which you could possibly rename to State) that has only the Address, then a CustomerAddress table that has the keys of both of those tables as foreign keys.
I will start you off to begin:
INSERT INTO Customers (FullName, Telephone)
SELECT DISTINCT FullName, Telephone
FROM Customers_Info
You would do the same for Address (sketched after the lookup query below). For the 3rd table, you would perform the lookups like this:
INSERT INTO CustomerAddress (CustomerID, AddressID)
SELECT C.CustomerID, A.AddressID
FROM Customers_Info CI
INNER JOIN Customers C
ON CI.Telephone = C.Telephone
INNER JOIN Address A
ON CI.Address = A.Address
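The "do the same for Address" step mentioned above might look like this (assuming the Address table has an identity AddressID and an Address column):
INSERT INTO Address ([Address])
SELECT DISTINCT [Address]
FROM Customers_Info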

Use autogenerated column to populate another column

How can I use an auto-generated column to populate another column during an INSERT statement?
Long story short: we are reusing a database table and a related ASP page to display completely different data than was originally intended.
I have a table similar in structure to the following. Its structure is out of my control.
ID int NON-NULL, IDENTITY(1,1)
OrderNo varchar(50) NON-NULL, UNIQUE
More ...
The table has been repurposed and we are not using the OrderNo column. However, it's NON-NULL and UNIQUE. As dummy data, I want to populate it with the row's ID column.
I have the following SQL so far, but can't work out how to use the row's generated ID.
INSERT INTO MyTable (OrderNo, More)
OUTPUT INSERTED.ID
VALUES (CAST(ID AS varchar(50)))
This just gives:
Msg 207, Level 16, State 1, Line 3
Invalid column name 'ID'.
Here's a solution using the OUTPUT clause. Unfortunately, you won't be able to do it in a single statement.
CREATE TABLE Orders (
ID int not null identity(1,1),
OrderNo varchar(50) not null unique
)
CREATE TABLE #NewIDs ( ID int )
INSERT Orders (OrderNo)
OUTPUT INSERTED.ID INTO #NewIDs
SELECT 12345
UPDATE o
SET o.OrderNo = i.ID
FROM Orders o
JOIN #NewIDs i
ON i.ID = o.ID
SELECT * FROM Orders
One option would be:
create trigger YourTable_Trigger
on YourTable
INSTEAD OF INSERT
as begin
INSERT INTO YourTable (OrderNo, AnotherField)
SELECT 0, AnotherField FROM Inserted
UPDATE YourTable SET OrderNo = SCOPE_IDENTITY() WHERE ID = SCOPE_IDENTITY()
end;
And here is the Fiddle.
Good luck.