Optimizing CLUSTERED INDEX for use with JOIN

table optin_channel_1 (for each 'channel' there's a dedicated table)
CREATE TABLE [dbo].[optin_channel_1](
[key_id] [bigint] NOT NULL,
[valid_to] [datetime] NOT NULL,
[valid_from] [datetime] NOT NULL,
[key_type_id] [int] NOT NULL,
[optin_flag] [tinyint] NOT NULL,
[source_proc_id] [int] NOT NULL,
[date_inserted] [datetime] NOT NULL
) ON [PRIMARY]
CREATE CLUSTERED INDEX [ix_id] ON [dbo].[optin_channel_1]
(
[key_type_id] ASC,
[key_id] ASC,
[valid_to] ASC,
[valid_from] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
table profile_conns
CREATE TABLE [dbo].[profile_conns](
[profile_key_id] [bigint] NOT NULL,
[valid_to] [datetime] NOT NULL,
[valid_from] [datetime] NOT NULL,
[conn_key_id] [bigint] NOT NULL,
[conn_key_type_id] [int] NOT NULL,
[conn_type_id] [int] NOT NULL,
[source_proc_id] [int] NOT NULL,
[date_inserted] [datetime] NOT NULL
) ON [PRIMARY]
CREATE CLUSTERED INDEX [ix_id] ON [dbo].[profile_conns]
(
[profile_key_id] ASC,
[conn_key_type_id] ASC,
[conn_key_id] ASC,
[valid_to] ASC,
[valid_from] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
table lu_channel_conns
CREATE TABLE [dbo].[lu_channel_conns](
[channel_id] [int] NOT NULL,
[conn_type_id] [int] NOT NULL,
CONSTRAINT [PK_lu_channel_conns] PRIMARY KEY CLUSTERED
(
[channel_id] ASC,
[conn_type_id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
table lu_conn_type
CREATE TABLE [dbo].[lu_conn_type](
[conn_type_id] [int] NOT NULL,
[default_key_type_id] [int] NOT NULL,
[master_key_type_id] [int] NOT NULL,
[date_inserted] [datetime] NOT NULL,
CONSTRAINT [PK_lu_conns] PRIMARY KEY CLUSTERED
(
[conn_type_id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
view v_source_proc_id_by_group_id
SELECT DISTINCT x.source_proc_id, x.source_proc_group_id
FROM lu_source_proc x INNER JOIN lu_source_proc_group y ON x.source_proc_group_id = y.group_id
There's a dynamic SQL statement going to be executed:
SET @sql_str='SELECT @ret=MAX(o.optin_flag)
FROM optin_channel_'+CAST(@channel_id AS NVARCHAR(100))+' o
INNER HASH JOIN dbo.v_source_proc_id_by_group_id y ON o.source_proc_id=y.source_proc_id AND y.source_proc_group_id=@source_proc_group_id
INNER HASH JOIN profile_conns z ON z.profile_key_id=cast(@profile_key_id AS NVARCHAR(100)) AND z.conn_key_type_id=o.key_type_id AND z.conn_key_id=o.[key_id] AND z.valid_to=''01.01.3000''
INNER HASH JOIN lu_channel_conns x ON x.channel_id=@channel_id AND z.conn_type_id=x.conn_type_id
INNER HASH JOIN lu_conn_type ct ON ct.conn_type_id=x.conn_type_id AND ct.default_key_type_id=o.key_type_id'
SET @param='@channel_id INT, @profile_key_id INT, @source_proc_group_id INT, @ret NVARCHAR(400) OUTPUT'
EXEC sp_executesql @sql_str,@param,@channel_id,@profile_key_id,@source_proc_group_id,@ret OUTPUT
I.e. this gives:
SELECT @ret=MAX(o.optin_flag)
FROM optin_channel_1 o
INNER HASH JOIN dbo.v_source_proc_id_by_group_id y
ON o.source_proc_id=y.source_proc_id
AND y.source_proc_group_id=5
INNER HASH JOIN profile_conns z
ON z.profile_key_id=1
AND z.conn_key_type_id=o.key_type_id
AND z.conn_key_id=o.[key_id]
AND z.valid_to='01.01.3000'
INNER HASH JOIN lu_channel_conns x
ON x.channel_id=1
AND z.conn_type_id=x.conn_type_id
INNER HASH JOIN lu_conn_type ct
ON ct.conn_type_id=x.conn_type_id
AND ct.default_key_type_id=o.key_type_id
These tables are used for an opt-in database. optin_flag can be 0 or 1. With the last statement I want to get a 1 as optin_flag from optin_channel_1 for the given channel_id=1 for the user with profile_key_id=1, when the opt-in was inserted into the database by a process belonging to source_proc_group_id=5. I hope this is enough to comprehend what's going on.
Is this the best way to use the clustered indexes? Or would it be better to remove profile_key_id from the index on profile_conns and put z.profile_key_id=1 in a WHERE clause?
Maybe there's a much better way to optimize this select (changes to the database schema are not possible; only changes to indexes and modifications to the statement are).

Without knowing the size of the tables and the sort of data stored in them, it is difficult to gauge.
Assuming optin_channel_1 has a lot of data and profile_conns has a lot of data, I would try the following:
Clustered index on optin_channel_1 (key_id) or key_type_id, depending on which field has the most distinct values (since you don't have a covering index).
Clustered index on profile_conns (conn_key_id) or conn_key_type_id, depending on what you have chosen in optin_channel_1.
etc...
Basically, if your profile_conns table does not have much data, I would put the clustered index on the most selective "filter" field (I suspect profile_key_id). If the table has a lot of data, I would aim for a hash/merge join and match the clustered index with the clustered index of the optin_channel_1 table, as sketched below.
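For illustration, that matching could be expressed like this (a sketch only, assuming both tables are large; the right key choice depends on the data distribution discussed above):
CREATE CLUSTERED INDEX [ix_id] ON [dbo].[optin_channel_1]
(
[key_type_id] ASC,
[key_id] ASC
)WITH (DROP_EXISTING = ON) ON [PRIMARY]
CREATE CLUSTERED INDEX [ix_id] ON [dbo].[profile_conns]
(
[conn_key_type_id] ASC,
[conn_key_id] ASC
)WITH (DROP_EXISTING = ON) ON [PRIMARY]
With both tables clustered on the matching join fields, the optimizer can choose a merge join without an extra sort step.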
I would also rewrite the query like this:
SELECT @ret = MAX(o.optin_flag)
FROM optin_channel_1 o
JOIN dbo.v_source_proc_id_by_group_id y
ON o.source_proc_id = y.source_proc_id
JOIN profile_conns z
ON z.conn_key_type_id = o.key_type_id
AND z.conn_key_id = o.[key_id]
JOIN lu_channel_conns x
ON z.conn_type_id = x.conn_type_id
JOIN lu_conn_type ct
ON ct.conn_type_id = x.conn_type_id
AND ct.default_key_type_id=o.key_type_id
WHERE y.source_proc_group_id = 5
AND z.profile_key_id = 1
AND x.channel_id = 1
AND z.valid_to = '01.01.3000'
The query is changed this way because:
Putting the filter conditions in the WHERE clause shows you which fields are relevant when aiming for a hash/merge join.
Putting in join hints is rarely a good idea. It is very hard to beat the query optimizer at determining the best query plan. A bad plan usually indicates you have an issue with your indexes/statistics.
So as a summary:
small table joined to big table ==> go for nested loops & focus your clustered index on the "filter" field in the small table & the join field in the big table.
big table joined to big table ==> go for a hash/merge join and put the clustered index on the matching fields on both sides.
multi-field indexes are usually only a good idea when they are "covering", which means all the fields you query are included in the index (or are added with the INCLUDE() clause); see the sketch below.
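For example, a covering nonclustered index for the statement above might look like this (a sketch with a hypothetical name; pick the key order by selectivity):
CREATE NONCLUSTERED INDEX [ix_optin_covering] ON [dbo].[optin_channel_1]
(
[key_type_id] ASC,
[key_id] ASC,
[source_proc_id] ASC
)
INCLUDE ([optin_flag]) ON [PRIMARY]
Every column the query touches on optin_channel_1 (key_type_id, key_id, source_proc_id, optin_flag) is then served by the index alone, with no lookups into the base table.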

Related

Weird result set counts upon datetime criteria

I have the following table (the server is SQL Server 2016)
CREATE TABLE [dbo].[MyTable](
[RefID] [uniqueidentifier] NOT NULL,
[LG_ID] [bigint] NOT NULL,
[APP_SERVER_ID] [int] NOT NULL,
[WEB_SERVER_ID] [int] NOT NULL,
[WEB_BROWSER_ID] [int] NULL,
[CLIENT_TYPE_ID] [int] NOT NULL,
[OS_NAME_ID] [int] NULL,
[CLIENT_VERSION] [varchar](10) NULL,
[OS_VERSION] [varchar](10) NULL,
[BROWSER_VERSION] [varchar](10) NULL,
[DETAILS] [nvarchar](255) NULL,
[LG_SOURCE_TABLE] [char](2) NOT NULL,
[LG_TIME] [datetime] NULL,
CONSTRAINT [PK_MyTable] PRIMARY KEY CLUSTERED
(
[RefID] ASC,
[LG_ID] ASC,
[LG_SOURCE_TABLE] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
CREATE NONCLUSTERED INDEX [IX_MyTable_LG_TIME] ON [dbo].[MyTable]
(
[LG_TIME] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
And I am querying data with the following methods (I am using only method 1; the others are tests that I am doing):
METHOD 1:
SELECT
[RefID],
[LG_ID],
[APP_SERVER_ID],
[WEB_SERVER_ID],
[WEB_BROWSER_ID],
[CLIENT_TYPE_ID],
[OS_NAME_ID],
[CLIENT_VERSION],
[OS_VERSION],
[BROWSER_VERSION],
[DETAILS],
[LG_SOURCE_TABLE],
[LG_TIME]
FROM
[MyTable] WITH (NOLOCK)
WHERE
[LG_TIME] >= '20190923'
AND [LG_TIME] < '20190924'
--returns 19.880.317 rows
METHOD 2:
SELECT
[RefID],
[LG_ID],
[APP_SERVER_ID],
[WEB_SERVER_ID],
[WEB_BROWSER_ID],
[CLIENT_TYPE_ID],
[OS_NAME_ID],
[CLIENT_VERSION],
[OS_VERSION],
[BROWSER_VERSION],
[DETAILS],
[LG_SOURCE_TABLE],
[LG_TIME]
FROM
[MyTable] WITH (NOLOCK)
WHERE
CAST([LG_TIME] AS DATE) = '20190923'
--returns 19.879.488 rows
METHOD 3:
SELECT
*
FROM
[MyTable] WITH (NOLOCK)
WHERE
[LG_TIME] >= '20190923'
AND [LG_TIME] < '20190924'
--returns 19.880.186 rows
METHOD 4:
SELECT
*
FROM
[MyTable] WITH (NOLOCK)
WHERE
CAST([LG_TIME] AS DATE) = '20190923'
--returns 19.879.488 rows
And as you can see I am having different result sets each time.
I don't understand why this is happening. Am I doing something wrong?
Which is the correct/safe way to get the data I need based on datetime range?
I am doing a bcp out/in from the production server (which has 1.2 billion rows but will be truncated soon) to an archive (which has 10.5 billion rows), and sometimes I am losing a record because two records have the same [LG_TIME] (maybe I will create a different question about that).
EDIT 1: Some extra info:
The query (METHOD 1) is part of a BCP OUT routine that runs daily at about 00:05, and the date range is always the same: two days before the current date.
I have strict guidelines to always use WITH (NOLOCK) in queries on production servers.
There are millions of rows in the table that have LG_TIME = NULL, but these rows are obsolete and will be deleted. I don't know if these rows cause any issues with my query.
The table is part of the application's logging mechanism; only INSERTs are happening, and DELETEs run very early in the morning (about 02:00-03:00) once the daily archiving has succeeded. No one else is reading the table at that time.
EDIT 2 (2019-09-26 10:30):
So I added LG_TIME IS NOT NULL to the WHERE criteria of all my queries, and these are my results:
METHOD 1: 19.879.488 rows
METHOD 2: 19.879.488 rows
METHOD 3: 19.880.036 rows
METHOD 4: 19.879.488 rows
Still weird!
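(For reference, the combined pattern from METHOD 1 plus EDIT 2 is sketched below. Note that the IS NOT NULL filter is logically redundant next to the range predicate, since a NULL never satisfies >=; that the count still changed when it was added hints the drift comes from the scan itself. WITH (NOLOCK), i.e. READ UNCOMMITTED, can by itself produce run-to-run count drift while concurrent INSERTs split pages, so re-running the comparison without the hint may be informative.)
SELECT COUNT(*)
FROM [MyTable]
WHERE [LG_TIME] IS NOT NULL
AND [LG_TIME] >= '20190923'
AND [LG_TIME] < '20190924'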

SQL Server aggregation with several joins

I'm trying to join data from 3 tables in SQL Server and display in the result:
the alias of an entity
whether the entity is virtual
the last date (if known)
the value (if known)
I tried this:
select
sr.alias, c.virtual, max(d.date) date
from
App_references sr
join
Sensor c on (c.id_capteur = sr.id_capteur)
left join
Sensor_data d on (c.id_capteur = d.id_capteur)
group by
d.id_capteur, sr.alias, c.virtual
order by
sr.alias
Here is the database schema:
CREATE TABLE [dbo].[App_reference]
(
[id_ref] [int] IDENTITY(1,1) NOT NULL,
[alias] [varchar](60) NOT NULL,
[id_capteur] [int] NOT NULL,
CONSTRAINT [PK_App_reference] PRIMARY KEY CLUSTERED
(
[id_ref] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
CREATE TABLE [dbo].[Sensor]
(
[id_capteur] [int] IDENTITY(1,1) NOT NULL,
[name] [varchar](50) NOT NULL,
[virtual] [tinyint] NULL,
[unite] [varchar](5) NULL,
[id_type] [int] NOT NULL,
CONSTRAINT [PK_Sensor] PRIMARY KEY CLUSTERED
(
[id_capteur] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
CREATE TABLE [dbo].[Sensor_data]
(
[id_entry] [int] IDENTITY(1,1) NOT NULL,
[id_capteur] [int] NOT NULL,
[value] [xml] NOT NULL,
[date] [datetime] NOT NULL,
CONSTRAINT [PK_Sensor_data] PRIMARY KEY CLUSTERED
(
[id_entry] ASC,
[id_capteur] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
Suppose all the columns named like "id_%" are linked by foreign keys.
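For completeness, here is a sketch of those implied foreign keys (constraint names are hypothetical):
ALTER TABLE [dbo].[App_reference] WITH CHECK ADD CONSTRAINT [FK_App_reference_Sensor]
FOREIGN KEY ([id_capteur]) REFERENCES [dbo].[Sensor] ([id_capteur])
GO
ALTER TABLE [dbo].[Sensor_data] WITH CHECK ADD CONSTRAINT [FK_Sensor_data_Sensor]
FOREIGN KEY ([id_capteur]) REFERENCES [dbo].[Sensor] ([id_capteur])
GO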
The query on top works well; I get values:
alias               virtual  date
Place 1 (Physique)  0        2017-04-27 14:58:42.423
Place 2             1        NULL
Place 3             1        NULL
But I tried to select the value too by doing this:
select
sr.alias, c.virtual, max(d.date) date, d.value
from
Citopia_test.dbo.Smartparking_reference sr
join
Citopia_test.dbo.Sensor c on (c.id_capteur = sr.id_capteur)
left join
Citopia_test.dbo.Sensor_data d on (c.id_capteur = d.id_capteur)
group by
d.id_capteur, sr.alias, c.virtual
order by
sr.alias
And I got this error:
Column 'Sensor_data.value' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
So I tried several things, like adding the column to the GROUP BY, but nothing changed.
You probably want the value from the record with max date. Use ROW_NUMBER to get those records.
select alias, virtual, date, value
from
(
select
sr.alias, c.virtual, d.date, d.value,
row_number() over (partition by sr.alias order by d.date desc) as rn
from Citopia_test.dbo.Smartparking_reference sr
join Citopia_test.dbo.Sensor c on (c.id_capteur = sr.id_capteur)
left join Citopia_test.dbo.Sensor_data d on (c.id_capteur = d.id_capteur)
) numbered
where rn = 1
order by alias;
This gets you one row per sr.alias. If you want one row per sr.alias + c.virtual then change the partition by clause accordingly.
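For example, a hypothetical variant of the numbering line:
row_number() over (partition by sr.alias, c.virtual order by d.date desc) as rn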

Offset/Fetch Query Slow

I have a SQL Server query that is performing poorly when retrieving data via pagination using OFFSET/FETCH. The earlier pages return results very fast, but later ones are extremely slow, creating a bottleneck in our system. The query joins two temp tables (#A and #T). Here is a pared-down version of the query:
Select
I.CustomerID,
I.InvoiceID,
I.ItemID,
TI1.TrackID as TrackID1,
TI1.ItemName as ItemName1,
TI1.TrackCatID as TrackCatID1,
TI1.CategoryName as CategoryName1,
TI2.TrackID as TrackID2,
TI2.ItemName as ItemName2,
TI2.TrackCatID as TrackCatID2,
TI2.CategoryName as CategoryName2,
A.AccID,
A.Name as AccountName,
A.utimestamp as UpdateTimeStamp
FROM
#A A
Inner Join [dbo].[Item] I WITH(FORCESEEK)
On
A.CustomerID = I.CustomerID And
A.AccountID = I.AccountID
Left Join #T TI1 On
I.CustomerID = TI1.CustomerID And
I.TrackID1 = TI1.TrackID
Left Join #T TI2 On
I.CustomerID = TI2.CustomerID And
I.TrackID2 = TI2.TrackID
Order by
A.utimestamp
Offset 0 Rows Fetch Next 1000 Rows Only
Where the temp tables are defined as:
Create table #T (CustomerID uniqueidentifier, TrackCatID uniqueidentifier, TrackID uniqueidentifier, ItemName varchar(100), CategoryName varchar(100),PRIMARY KEY (CustomerID,TrackID))
Create table #A (CustomerID uniqueidentifier, AccountID uniqueidentifier, Name varchar(100), utimestamp timestamp, PRIMARY KEY (CustomerID, AccountID))
Regarding the DB table [dbo].[Item] it is defined as:
CREATE TABLE [dbo].[Item](
[Sequence] [int] IDENTITY(1,1) NOT NULL,
[CustomerID] [uniqueidentifier] NOT NULL,
[ItemID] [uniqueidentifier] NOT NULL,
[AccountID] [uniqueidentifier] NULL,
[TrackID1] [uniqueidentifier] NULL,
[TrackID2] [uniqueidentifier] NULL,
CONSTRAINT [PK_Item] PRIMARY KEY NONCLUSTERED
(
[CustomerID] ASC,
[ItemID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY],
CONSTRAINT [CX_Item] UNIQUE CLUSTERED
(
[CustomerID] ASC,
[Sequence] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
And it has a number of indexes that each have several columns:
Item Indexes
1: CustomerID, Sequence
2: CustomerID, AccountID, Sequence
3: CustomerID, TrackID1
4: CustomerID, TrackID2
5: CustomerID, ItemID
Is there something I'm missing that's causing the later paginated queries to be slow? Note: The 0 in "Offset 0 Rows" increases by 1000 for every page.
Also, I added an index to the temp table #A and didn't see an improvement in the results:
CREATE INDEX IDX_Timestamp ON #A(utimestamp)
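One alternative worth testing for pages after the first is keyset pagination, which seeks directly past the last row of the previous page instead of scanning and discarding all the OFFSET rows. A sketch, assuming utimestamp is unique (which holds for the rowversion/timestamp type within a database); @LastSeenTimestamp is a hypothetical parameter carrying the bookmark:
Select Top (1000)
I.CustomerID,
I.ItemID,
A.Name as AccountName,
A.utimestamp as UpdateTimeStamp
From #A A
Inner Join [dbo].[Item] I
On A.CustomerID = I.CustomerID And
A.AccountID = I.AccountID
Where A.utimestamp > @LastSeenTimestamp -- bookmark: max utimestamp of the previous page
Order by A.utimestamp
With this shape, later pages cost the same as the first, because the IDX_Timestamp index on #A can seek straight to the bookmark.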

Why isn't the clustered index being used in this query?

I have the following table:
CREATE TABLE [Cache].[Marker](
[ID] [int] NOT NULL,
[SubID] [varchar](15) NOT NULL,
[ReadTime] [datetime] NOT NULL,
[EquipmentID] [varchar](25) NULL,
[Sequence] [int] NULL
) ON [PRIMARY]
With the following clustered index:
CREATE UNIQUE CLUSTERED INDEX [IX_Marker_EquipmentID_ReadTime_SubID] ON [Cache].[Marker]
(
[EquipmentID] ASC,
[ReadTime] ASC,
[SubID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
And this query:
Declare @EquipmentId nvarchar(50)
Set @EquipmentId = 'KLM52B-MARKER'
SELECT TOP 1
cr.C44DistId,
cr.C473RightLotId
From Cache.Marker m
INNER JOIN Cache.vwCoaterRecipe AS cr ON cr.MarkerId = m.ID
Where m.EquipmentID = @EquipmentId And m.ReadTime >= '3/1/2013'
ORDER BY m.Id desc
Here is the query plan being generated:
My question is this: why isn't the clustered index on the Cache.Marker table being used with a seek, instead of a scan on another index? Furthermore, the SSMS query analyzer is suggesting I add an index on Marker.ReadTime with the ID and EquipmentID columns included.
There are roughly 1M rows in the Cache.Marker table.
How many unique equipment IDs do you have? The optimizer has probably decided the date is a better first lookup (perhaps mistakenly). You can force it to use your index, though, with a WITH (INDEX()) hint; FORCESEEK can help as well. I highly recommend this, because index behavior then stays predictable as databases grow to large sizes.
SELECT TOP 1
cr.C44DistId,
cr.C473RightLotId
From Cache.Marker m
WITH ( INDEX( IX_Marker_EquipmentID_ReadTime_SubID ), FORCESEEK )
INNER JOIN Cache.vwCoaterRecipe AS cr
ON cr.MarkerId = m.ID
Where m.EquipmentID = @EquipmentId And m.ReadTime >= '3/1/2013'
ORDER BY m.Id desc

How should I migrate this data into these SQL Server tables?

I wish to migrate some data from a single table into three new tables.
Here's my destination schema:
Notice that I need to insert into the Location table first, grab the SCOPE_IDENTITY(), then insert the rows into the Boundary and Country tables.
The SCOPE_IDENTITY() is killing me :( meaning I can only see a way to do this via CURSORS. Is there a better alternative?
UPDATE
Here's the scripts for the DB Schema....
Location
CREATE TABLE [dbo].[Locations](
[LocationId] [int] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](100) NOT NULL,
[OriginalLocationId] [int] NOT NULL,
CONSTRAINT [PK_Locations] PRIMARY KEY CLUSTERED
(
[LocationId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
)
Country
CREATE TABLE [dbo].[Locations_Country](
[IsoCode] [nchar](2) NOT NULL,
[LocationId] [int] NOT NULL,
CONSTRAINT [PK_Locations_Country] PRIMARY KEY CLUSTERED
(
[LocationId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Locations_Country] WITH CHECK ADD CONSTRAINT [FK_Country_inherits_Location] FOREIGN KEY([LocationId])
REFERENCES [dbo].[Locations] ([LocationId])
GO
ALTER TABLE [dbo].[Locations_Country] CHECK CONSTRAINT [FK_Country_inherits_Location]
GO
Boundary
CREATE TABLE [dbo].[Boundaries](
[LocationId] [int] NOT NULL,
[CentrePoint] [varbinary](max) NOT NULL,
[OriginalBoundary] [varbinary](max) NULL,
[LargeReducedBoundary] [varbinary](max) NULL,
[MediumReducedBoundary] [varbinary](max) NULL,
[SmallReducedBoundary] [varbinary](max) NULL,
CONSTRAINT [PK_Boundaries] PRIMARY KEY CLUSTERED
(
[LocationId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[Boundaries] WITH CHECK ADD CONSTRAINT [FK_LocationBoundary] FOREIGN KEY([LocationId])
REFERENCES [dbo].[Locations] ([LocationId])
GO
ALTER TABLE [dbo].[Boundaries] CHECK CONSTRAINT [FK_LocationBoundary]
GO
I don't see a need for SCOPE_IDENTITY or cursors if you approach the data in order of the parent/child relationship:
-- explicit column lists are needed because Locations has an IDENTITY key
INSERT INTO Locations (Name, OriginalLocationId)
SELECT t.name,
t.originallocationid
FROM ORIGINAL_TABLE t
GROUP BY t.name, t.originallocationid
INSERT INTO Locations_Country (IsoCode, LocationId)
SELECT DISTINCT
t.isocode,
l.locationid
FROM ORIGINAL_TABLE t
JOIN Locations l ON l.name = t.name
AND l.originallocationid = t.originallocationid
INSERT INTO Boundaries (LocationId, CentrePoint, OriginalBoundary, LargeReducedBoundary, MediumReducedBoundary, SmallReducedBoundary)
SELECT DISTINCT
l.locationid,
t.centrepoint,
t.originalboundary,
t.largereducedboundary,
t.mediumreducedboundary,
t.smallreducedboundary
FROM ORIGINAL_TABLE t
JOIN Locations l ON l.name = t.name
AND l.originallocationid = t.originallocationid
After loading your Locations table, you could create a query that joins Locations with your source single table. The join criteria would be the natural key (is that the Name column?), and it would return the new LocationId along with the Boundary data. The results would be inserted into the new Boundaries table.
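If the natural key cannot be trusted to be unique, another option is MERGE with an OUTPUT clause, which (unlike a plain INSERT ... OUTPUT) can capture source columns next to the generated identity, giving an explicit old-to-new key map. A sketch only; the #map table and the ON 1 = 0 trick are illustrative:
CREATE TABLE #map (LocationId int, OriginalLocationId int)
MERGE INTO dbo.Locations AS tgt
USING (SELECT name, originallocationid
FROM ORIGINAL_TABLE
GROUP BY name, originallocationid) AS src
ON 1 = 0 -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
INSERT (Name, OriginalLocationId) VALUES (src.name, src.originallocationid)
OUTPUT inserted.LocationId, src.originallocationid INTO #map (LocationId, OriginalLocationId);
The Boundary and Country inserts can then join ORIGINAL_TABLE to #map instead of re-deriving the natural key.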