I am trying to insert 1,500,000 records into a table and am facing table lock issues during the insertion, so I came up with the batch insert below.
DECLARE @BatchSize INT = 50000

WHILE 1 = 1
BEGIN
    INSERT INTO [dbo].[Destination]
           (proj_details_sid,
            period_sid,
            sales,
            units)
    SELECT TOP(@BatchSize) s.proj_details_sid,
           s.period_sid,
           s.sales,
           s.units
    FROM   [dbo].[SOURCE] s
    WHERE  NOT EXISTS (SELECT 1
                       FROM   dbo.Destination d
                       WHERE  d.proj_details_sid = s.proj_details_sid
                              AND d.period_sid = s.period_sid)

    IF @@ROWCOUNT < @BatchSize
        BREAK
END
I have a clustered index on the Destination table (proj_details_sid, period_sid). The NOT EXISTS part is just to prevent records that have already been inserted from being inserted again.
Am I doing it right? Will this avoid table locks, or is there a better way?
Note: the time taken is more or less the same with and without the batch insert.
Lock escalation is not likely to be related to the SELECT part of your statement at all.
It is a natural consequence of inserting a large number of rows.
Lock escalation is triggered when lock escalation is not disabled on the table by using the ALTER TABLE SET LOCK_ESCALATION option, and when either of the following conditions exists:
A single Transact-SQL statement acquires at least 5,000 locks on a single nonpartitioned table or index.
A single Transact-SQL statement acquires at least 5,000 locks on a single partition of a partitioned table and the ALTER TABLE SET LOCK_ESCALATION option is set to AUTO.
The number of locks in an instance of the Database Engine exceeds memory or configuration thresholds.
If locks cannot be escalated because of lock conflicts, the Database Engine periodically triggers lock escalation at every 1,250 new locks acquired.
You can easily see this for yourself by tracing the lock escalation event in Profiler or simply trying the below with different batch sizes. For me, TOP (6228) shows 6,250 locks held, but with TOP (6229) the count suddenly plummets to 1 as lock escalation kicks in. The exact numbers may vary (dependent on database settings and resources currently available). Use trial and error to find the threshold where lock escalation appears for you.
CREATE TABLE [dbo].[Destination]
(
    proj_details_sid INT,
    period_sid       INT,
    sales            INT,
    units            INT
)

BEGIN TRAN --So locks are held for us to count in the next statement

INSERT INTO [dbo].[Destination]
SELECT TOP (6229) 1,
       1,
       1,
       1
FROM   master..spt_values v1,
       master..spt_values v2

SELECT COUNT(*)
FROM   sys.dm_tran_locks
WHERE  request_session_id = @@SPID;

COMMIT

DROP TABLE [dbo].[Destination]
You are inserting 50,000 rows so almost certainly lock escalation will be attempted.
The article How to resolve blocking problems that are caused by lock escalation in SQL Server is quite old but a lot of the suggestions are still valid.
Break up large batch operations into several smaller operations (i.e. use a smaller batch size; a sanity-check sketch follows this list)
Lock escalation cannot occur if a different SPID is currently holding an incompatible table lock - The example they give is a different session executing
BEGIN TRAN
SELECT * FROM mytable WITH (UPDLOCK, HOLDLOCK) WHERE 1=0
WAITFOR DELAY '1:00:00'
COMMIT TRAN
Disable lock escalation by enabling trace flag 1211 - However this is a global setting and can cause severe issues. There is a newer option 1224 that is less problematic but this is still global.
Another option would be to ALTER TABLE blah SET (LOCK_ESCALATION = DISABLE) but this is still not very targeted as it affects all queries against the table not just your single scenario here.
So I would opt for option 1 or possibly option 2 and discount the others.
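To sanity-check a candidate batch size, you can run one batch inside a transaction and count the locks held. This is just a sketch against the tables above; the 4,500 starting value is an assumption to tune with the trial-and-error approach already described:
DECLARE @BatchSize INT = 4500; -- assumed starting point, below the ~5,000-lock threshold

BEGIN TRAN;

INSERT INTO [dbo].[Destination]
       (proj_details_sid, period_sid, sales, units)
SELECT TOP (@BatchSize) s.proj_details_sid, s.period_sid, s.sales, s.units
FROM   [dbo].[SOURCE] s
WHERE  NOT EXISTS (SELECT 1
                   FROM   dbo.Destination d
                   WHERE  d.proj_details_sid = s.proj_details_sid
                          AND d.period_sid = s.period_sid);

-- A non-escalated batch holds thousands of row/page locks here; if the count
-- collapses to a handful, escalation fired and @BatchSize should be lowered.
SELECT COUNT(*) AS locks_held
FROM   sys.dm_tran_locks
WHERE  request_session_id = @@SPID;

ROLLBACK; -- keep the test repeatable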
Instead of checking whether the data exists in Destination each time, it seems better to store all the data in a temp table first and then batch insert it into Destination.
Reference: Using ROWLOCK in an INSERT statement (SQL Server)
DECLARE @batch int = 100
DECLARE @curRecord int = 1
DECLARE @maxRecord int

-- remove (NOLOCK) if you don't want dirty reads
SELECT ROW_NUMBER() OVER (ORDER BY s.proj_details_sid, s.period_sid) AS rownum,
       s.proj_details_sid,
       s.period_sid,
       s.sales,
       s.units
INTO   #Temp
FROM   [dbo].[SOURCE] s WITH (NOLOCK)
WHERE  NOT EXISTS (SELECT 1
                   FROM   dbo.Destination d WITH (NOLOCK)
                   WHERE  d.proj_details_sid = s.proj_details_sid
                          AND d.period_sid = s.period_sid)

-- change this @maxRecord if you want to limit the records to insert
SELECT @maxRecord = COUNT(1) FROM #Temp

WHILE @maxRecord >= @curRecord
BEGIN
    INSERT INTO [dbo].[Destination]
           (proj_details_sid,
            period_sid,
            sales,
            units)
    SELECT proj_details_sid, period_sid, sales, units
    FROM   #Temp
    WHERE  rownum >= @curRecord AND rownum < @curRecord + @batch

    SET @curRecord = @curRecord + @batch
END

DROP TABLE #Temp
I added (NOLOCK) to your destination table -> dbo.Destination WITH (NOLOCK).
Now you won't lock your table.
WHILE 1 = 1
BEGIN
    INSERT INTO [dbo].[Destination]
           (proj_details_sid,
            period_sid,
            sales,
            units)
    SELECT TOP(@BatchSize) s.proj_details_sid,
           s.period_sid,
           s.sales,
           s.units
    FROM   [dbo].[SOURCE] s
    WHERE  NOT EXISTS (SELECT 1
                       FROM   dbo.Destination d WITH (NOLOCK)
                       WHERE  d.proj_details_sid = s.proj_details_sid
                              AND d.period_sid = s.period_sid)

    IF @@ROWCOUNT < @BatchSize
        BREAK
END
To do this you can use WITH (NOLOCK) in your SELECT statement.
But note that NOLOCK is not recommended on OLTP databases.
I am trying to convert my DELETE statements to TRUNCATE, following How to delete large data of table in SQL without log?
Here is what I am trying:
-- Move recent records from Main table to a Temp table
-- TRUNCATE the Main table
-- Return back data from Temp table to Main table
During this period I want to block any INSERT/UPDATE/DELETE statements from running against my Main table (until the TRUNCATE statement has run), because otherwise we might lose some data during the TRUNCATE.
The TRUNCATE statement acquires a SCH-M lock, meaning it takes a schema modification lock:
Second type of the lock is schema modification lock – SCH-M. This lock type is acquired by sessions that are altering the metadata and live for duration of transaction. This lock can be described as super-exclusive lock and it's incompatible with any other lock types including intent locks.
Locking in Microsoft SQL Server (Part 13 – Schema locks)
During this time, any update, select, and delete statements will wait on the truncate operation. As a result, CRUD operations stop automatically until the TRUNCATE statement completes.
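You can see the Sch-M lock for yourself with a quick test; this is just a sketch, assuming a disposable copy of the Main table in a test database:
BEGIN TRAN;

TRUNCATE TABLE dbo.Main;

-- This session now holds a schema modification (Sch-M) lock on dbo.Main;
-- any SELECT/INSERT/UPDATE/DELETE from another session blocks until the
-- transaction ends.
SELECT resource_type, request_mode, request_status
FROM   sys.dm_tran_locks
WHERE  request_session_id = @@SPID;

ROLLBACK; -- TRUNCATE is transactional, so the rows come back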
Below is an example script that reduces logging in the FULL recovery model using SWITCH and TRUNCATE. The SWITCH is a fast metadata-only operation. The space deallocation performed by TRUNCATE is done by an asynchronous background thread for larger tables (64MB+), so it is also fast and reduces logging greatly compared to DELETE.
A transaction is used to ensure all-or-none behavior and a schema modification lock is held for the duration of the transaction to quiesce data modifications during the process.
Below is the transaction log space used before and after the process by the example with 1M rows initially and 50K retained:
+--------+---------------+--------------------+
| | Log Size (MB) | Log Space Used (%) |
+--------+---------------+--------------------+
| Before | 1671.992 | 27.50415 |
| After | 1671.992 | 30.65533 |
+--------+---------------+--------------------+
Test setup:
--example main table
CREATE TABLE dbo.Main(
MainID int NOT NULL CONSTRAINT PK_Main PRIMARY KEY
, MainData char(1000) NOT NULL
);
--staging table with same schema and indexes as main table
CREATE TABLE dbo.MainStaging(
MainID int NOT NULL CONSTRAINT PK_MainStaging PRIMARY KEY
, MainData char(1000) NOT NULL
);
--load 1M rows into main table for testing
WITH
t10 AS (SELECT n FROM (VALUES(0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) t(n))
,t1k AS (SELECT 0 AS n FROM t10 AS a CROSS JOIN t10 AS b CROSS JOIN t10 AS c)
,t1g AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) AS num FROM t1k AS a CROSS JOIN t1k AS b CROSS JOIN t1k AS c)
INSERT INTO dbo.Main WITH(TABLOCKX) (MainID, MainData)
SELECT num, CAST(num AS char(1000))
FROM t1g
WHERE num <= 1000000;
GO
Example script:
SET XACT_ABORT ON; --ensures transaction is rolled back immediately even if script is cancelled
BEGIN TRY
BEGIN TRAN;
--truncate in same transaction so entire script can be safely rerun
TRUNCATE TABLE dbo.MainStaging;
--ALTER TABLE will block other activity until committed due to schema modification lock
--main table will be empty after switch
ALTER TABLE dbo.Main SWITCH TO dbo.MainStaging;
--keep 5% of rows
INSERT INTO dbo.Main WITH(TABLOCKX) (MainID, MainData)
SELECT MainID, MainData
FROM dbo.MainStaging
WHERE MainID > 950000;
COMMIT;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0 ROLLBACK;
THROW;
END CATCH;
GO
Try to use a transaction:
BEGIN TRANSACTION
SELECT TOP 1 *
FROM table_name
WITH (TABLOCK, HOLDLOCK)
-- do your stuff
COMMIT
Given a "main" table which has a single primary key, from which a huge number of rows need to be deleted (perhaps about 200M). In addition, there are about 30 "related" tables that are related to the main table, and related rows must also be deleted from each. It is expected that about an equivalent huge number of rows (or more) would need to be deleted from each of the related tables.
Of course it's possible to change the condition to partition the amount of data to be deleted, and run it several times, but in any case, I need an efficient solution to do this.
John Rees suggests a way to do massive deletes from a single table in Delete Large Number of Rows Is Very Slow - SQL Server, but the problem with that is that it performs several separate transactional deletes on a single table, which could potentially leave the db in an inconsistent state.
John Gibb suggests a way to delete from several related tables in How do I delete from multiple tables using INNER JOIN in SQL server, but it does not consider the possibility that the amount of data to be deleted from each of these tables is large.
How can these two solutions be combined into an efficient way to delete a large number of rows from several related tables? (I'm new to SQL)
Perhaps it's important to note that, in the scope of this problem, each "related" table is only related to the "main" table.
I think this is what you're after...
This will delete 4000 rows from the tables with the foreign key references (assuming 1:1) before deleting the same 4000 rows from the main table.
It will loop until done, or it hits the stop time (if enabled).
DECLARE @BATCHSIZE INT, @ITERATION INT, @TOTALROWS INT, @MAXRUNTIME VARCHAR(8), @BSTOPATMAXTIME BIT, @MSG VARCHAR(500)

SET DEADLOCK_PRIORITY LOW;

SET @BATCHSIZE = 4000
SET @MAXRUNTIME = '08:00:00' -- 8AM
SET @BSTOPATMAXTIME = 1 -- ENFORCE 8AM STOP TIME
SET @ITERATION = 0 -- LEAVE THIS
SET @TOTALROWS = 0 -- LEAVE THIS

IF OBJECT_ID('TEMPDB..#TMPLIST') IS NOT NULL DROP TABLE #TMPLIST
CREATE TABLE #TMPLIST (ID BIGINT)

WHILE @BATCHSIZE > 0
BEGIN
    -- IF @BSTOPATMAXTIME = 1, THEN WE'LL STOP THE WHOLE JOB AT A SET TIME...
    IF CONVERT(VARCHAR(8), GETDATE(), 108) >= @MAXRUNTIME AND @BSTOPATMAXTIME = 1
    BEGIN
        RETURN
    END

    TRUNCATE TABLE #TMPLIST

    INSERT INTO #TMPLIST (ID)
    SELECT TOP(@BATCHSIZE) ID
    FROM MAINTABLE
    WHERE X=Y -- DELETE CRITERIA HERE...

    SET @BATCHSIZE = @@ROWCOUNT

    DELETE T1
    FROM SOMETABLE1 T1
    WHERE EXISTS (SELECT 1 FROM #TMPLIST T WHERE T1.MAINID = T.ID)

    DELETE T2
    FROM SOMETABLE2 T2
    WHERE EXISTS (SELECT 1 FROM #TMPLIST T WHERE T2.MAINID = T.ID)

    DELETE T3
    FROM SOMETABLE3 T3
    WHERE EXISTS (SELECT 1 FROM #TMPLIST T WHERE T3.MAINID = T.ID)

    DELETE M
    FROM MAINTABLE M
    WHERE EXISTS (SELECT 1 FROM #TMPLIST T WHERE T.ID = M.ID)

    SET @ITERATION = @ITERATION + 1
    SET @TOTALROWS = @TOTALROWS + @BATCHSIZE
    SET @MSG = 'Iteration: ' + CAST(@ITERATION AS VARCHAR) + ' Total deletes:' + CAST(@TOTALROWS AS VARCHAR)
    RAISERROR (@MSG, 0, 1) WITH NOWAIT
END
If I run a SQL statement like
UPDATE table
SET col = value
WHERE X=Y
And no rows match, therefore no rows are changed, are any locks created by the update?
The DBMS is Sybase + SQL Server
You can play with this script and see for yourself that sometimes locks are acquired and held even when no rows are updated:
CREATE TABLE dbo.Test
(
i INT NOT NULL
PRIMARY KEY ,
j INT NULL
) ;
go
INSERT dbo.Test
( i, j )
VALUES ( 1, 2 ) ;
GO
SELECT @@SPID;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE ;
BEGIN TRANSACTION ;
UPDATE dbo.Test
SET j = 3
WHERE i = 3 ;
SELECT *
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;
COMMIT ;
If field x is indexed, then there will probably be a shared lock on that index while your UPDATE is checking it for matching records.
There should not be any row locks, but all locking behavior is contingent on your server-level isolation settings.
If an UPDATE statement affects no records, an intent exclusive (IX) lock is still taken on the table for the duration of the transaction: the rows to be updated must first be located before the update can be applied, and even though no rows qualify, the intent lock on the table is held in exclusive mode until the transaction ends.
I have a primary key that I don't want to auto increment (for various reasons) and so I'm looking for a way to simply increment that field when I INSERT. By simply, I mean without stored procedures and without triggers, so just a series of SQL commands (preferably one command).
Here is what I have tried thus far:
BEGIN TRAN
INSERT INTO Table1(id, data_field)
VALUES ( (SELECT (MAX(id) + 1) FROM Table1), '[blob of data]');
COMMIT TRAN;
* Data abstracted to use generic names and identifiers
However, when executed, the command errors, saying:
"Subqueries are not allowed in this context. Only scalar expressions are allowed."
So, how can I do this/what am I doing wrong?
EDIT: Since it was pointed out as a consideration, the table to be inserted into is guaranteed to have at least 1 row already.
You understand that you will have collisions, right?
You need to do something like this, and it might cause deadlocks, so be very sure of what you are trying to accomplish here:
DECLARE @id int

BEGIN TRAN

SELECT @id = MAX(id) + 1 FROM Table1 WITH (UPDLOCK, HOLDLOCK)

INSERT INTO Table1(id, data_field)
VALUES (@id, '[blob of data]')

COMMIT TRAN
To explain the collision thing, I have provided some code. First, create this table and insert one row:
CREATE TABLE Table1(id int primary key not null, data_field char(100))
GO
Insert Table1 values(1,'[blob of data]')
Go
Now open up two query windows and run this at the same time
declare @i int
set @i = 1

while @i < 10000
begin
    BEGIN TRAN
    INSERT INTO Table1(id, data_field)
    SELECT MAX(id) + 1, '[blob of data]' FROM Table1
    COMMIT TRAN;
    set @i = @i + 1
end
You will see a bunch of these
Server: Msg 2627, Level 14, State 1, Line 7
Violation of PRIMARY KEY constraint 'PK__Table1__3213E83F2962141D'. Cannot insert duplicate key in object 'dbo.Table1'.
The statement has been terminated.
Try this instead:
INSERT INTO Table1 (id, data_field)
SELECT id, '[blob of data]' FROM (SELECT MAX(id) + 1 as id FROM Table1) tbl
I wouldn't recommend doing it that way for any number of reasons though (performance, transaction safety, etc)
It could be because there are no records, so the subquery returns NULL... try:
INSERT INTO tblTest(RecordID, Text)
VALUES ((SELECT ISNULL(MAX(RecordID), 0) + 1 FROM tblTest), 'asdf')
I don't know if somebody is still looking for an answer but here is a solution that seems to work:
-- Preparation: execute only once
CREATE TABLE Test (Value int)
CREATE TABLE Lock (LockID uniqueidentifier)
INSERT INTO Lock SELECT NEWID()
-- Real insert
BEGIN TRAN LockTran
-- Lock an object to block simultaneous calls.
UPDATE Lock WITH(TABLOCK)
SET LockID = LockID
INSERT INTO Test
SELECT ISNULL(MAX(T.Value), 0) + 1
FROM Test T
COMMIT TRAN LockTran
We have a similar situation where we needed to increment and could not have gaps in the numbers. (If you use an identity value and a transaction is rolled back, that number will not be inserted and you will have gaps because the identity value does not roll back.)
We created a separate table for last number used and seeded it with 0.
Our insert takes a few steps.
declare @number int

begin tran

--increment the number
update dbo.NumberTable
set number = number + 1

--find out what the incremented number is
select @number = number
from dbo.NumberTable

--use the number
insert into dbo.MyTable (id /*, other columns */)
values (@number /*, other values */)

--commit or rollback
commit
This causes simultaneous transactions to process in a single line as each concurrent transaction will wait because the NumberTable is locked. As soon as the waiting transaction gets the lock, it increments the current value and locks it from others. That current value is the last number used and if a transaction is rolled back, the NumberTable update is also rolled back so there are no gaps.
Hope that helps.
Another way to cause single file execution is to use a SQL application lock. We have used that approach for longer running processes like synchronizing data between systems so only one synchronizing process can run at a time.
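For reference, here is a minimal sketch of that approach with sp_getapplock; the resource name and timeout are illustrative, not taken from the original process:
BEGIN TRAN;

DECLARE @result int;

-- Serialize on an arbitrary resource name; other callers wait up to 60 seconds
EXEC @result = sp_getapplock
     @Resource = 'SyncProcess',   -- illustrative name
     @LockMode = 'Exclusive',
     @LockOwner = 'Transaction',
     @LockTimeout = 60000;

IF @result >= 0
BEGIN
    -- ...do the single-file work here...
    COMMIT; -- a transaction-owned applock is released on commit/rollback
END
ELSE
    ROLLBACK; -- could not acquire the lock in time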
If you're doing it in a trigger, you could make sure it's an "INSTEAD OF" trigger and do it in a couple of statements:
DECLARE @next INT
SET @next = (SELECT MAX(id) + 1 FROM Table1)

INSERT INTO Table1
SELECT @next, datablob FROM inserted
The only thing you'd have to be careful about is concurrency - if two rows are inserted at the same time, they could attempt to use the same value for @next, causing a conflict.
Does this accomplish what you want?
It seems very odd to do this sort of thing w/o an IDENTITY (auto-increment) column, making me question the architecture itself. I mean, seriously, this is the perfect situation for an IDENTITY column. It might help us answer your question if you'd explain the reasoning behind this decision. =)
Having said that, some options are:
using an INSTEAD OF trigger for this purpose. So, you'd do your INSERT (the INSERT statement would not need to pass in an ID). The trigger code would handle inserting the appropriate ID. You'd need to use the WITH (UPDLOCK, HOLDLOCK) syntax used by another answerer to hold the lock for the duration of the trigger (which is implicitly wrapped in a transaction) & to elevate the lock type from "shared" to "update" lock (IIRC).
you can use the idea above, but have a table whose purpose is to store the last, max value inserted into the table. So, once the table is set up, you would no longer have to do a SELECT MAX(ID) every time. You'd simply increment the value in the table. This is safe provided that you use appropriate locking (as discussed). Again, that avoids repeated table scans every time you INSERT.
use GUIDs instead of IDs. It's much easier to merge tables across databases, since the GUIDs will always be unique (whereas records across databases will have conflicting integer IDs). To avoid page splitting, sequential GUIDs can be used (a sketch follows this list). This is only beneficial if you might need to do database merging.
Use a stored proc in lieu of the trigger approach (since triggers are to be avoided, for some reason). You'd still have the locking issue (and the performance problems that can arise). But sprocs are preferred over dynamic SQL (in the context of applications), and are often much more performant.
Sorry about rambling. Hope that helps.
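If the sequential-GUID route is of interest, here is a minimal sketch (the table and constraint names are made up; note that NEWSEQUENTIALID() is only allowed as a column default):
CREATE TABLE dbo.Table1Guid
(
    id uniqueidentifier NOT NULL
        CONSTRAINT DF_Table1Guid_id DEFAULT NEWSEQUENTIALID()
        CONSTRAINT PK_Table1Guid PRIMARY KEY,
    data_field varchar(100) NULL
);

-- No id is supplied; each row gets a roughly ascending GUID, which avoids
-- the page splits that random NEWID() values would cause in the clustered index.
INSERT INTO dbo.Table1Guid (data_field) VALUES ('[blob of data]');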
How about creating a separate table to maintain the counter? It has better performance than MAX(id), as it will be O(1). MAX(id) is at best O(lg n), depending on the implementation.
And then when you need to insert, simply lock the counter table for reading the counter and increment the counter. Then you can release the lock and insert to your table with the incremented counter value.
Have a separate table where you keep your latest ID and for every transaction get a new one.
It may be a bit slower but it should work.
DECLARE @NEWID INT

BEGIN TRAN
UPDATE [TABLE] SET ID = ID + 1
SELECT @NEWID = ID FROM [TABLE]
COMMIT TRAN

PRINT @NEWID -- Do what you want with your new ID
Code without any transaction scope (I use it in my engineering course as an exercise):
-- Preparation: execute only once
CREATE TABLE increment (val int);
INSERT INTO increment VALUES (1);

-- Real insert
DECLARE @newIncrement INT;

UPDATE increment
SET @newIncrement = val,
    val = val + 1;

INSERT INTO Table1 (id, data_field)
SELECT @newIncrement, 'some data';
begin tran

declare @nextId int
set @nextId = (select MAX(id) + 1 from Table1)

insert into Table1(id, data_field) values (@nextId, '[blob of data]')

commit;
But perhaps a better approach would be using a scalar function such as getNextId('table1'); a sketch of that idea follows.
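For what it's worth, here is a sketch of what such a helper could look like. A T-SQL scalar function cannot run dynamic SQL, so in practice it would be one function per table rather than a generic getNextId('table1'), and it does nothing by itself to solve the concurrency problem discussed above:
-- Hypothetical helper, hard-wired to Table1 (a scalar UDF cannot take a
-- table name as a parameter and query it dynamically).
CREATE FUNCTION dbo.getNextTable1Id ()
RETURNS int
AS
BEGIN
    RETURN (SELECT ISNULL(MAX(id), 0) + 1 FROM dbo.Table1);
END
GO

-- Usage; still needs UPDLOCK/HOLDLOCK-style protection (see above) to be
-- safe under concurrent inserts.
INSERT INTO dbo.Table1 (id, data_field)
VALUES (dbo.getNextTable1Id(), '[blob of data]');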
Any critiques of this? Works for me.
DECLARE @m_NewRequestID INT
      , @m_IsError BIT = 1
      , @m_CatchEndless INT = 0

WHILE @m_IsError = 1
BEGIN TRY
    SELECT @m_NewRequestID = (SELECT ISNULL(MAX(RequestID), 0) + 1 FROM Requests)

    INSERT INTO Requests ( RequestID
                         , RequestName
                         , Customer
                         , Comment
                         , CreatedFromApplication)
    SELECT RequestID = @m_NewRequestID
         , RequestName = dbo.ufGetNextAvailableRequestName(PatternName)
         , Customer = @Customer
         , Comment = [Description]
         , CreatedFromApplication = @CreatedFromApplication
    FROM RequestPatterns
    WHERE PatternID = @PatternID

    SET @m_IsError = 0
END TRY
BEGIN CATCH
    SET @m_IsError = 1
    SET @m_CatchEndless = @m_CatchEndless + 1

    IF @m_CatchEndless > 1000
        THROW 51000, '[upCreateRequestFromPattern]: Unable to get new RequestID', 1
END CATCH
This should work:
INSERT INTO Table1 (id, data_field)
SELECT (SELECT (MAX(id) + 1) FROM Table1), '[blob of data]';
Or this (substitute LIMIT for other platforms):
INSERT INTO Table1 (id, data_field)
SELECT TOP 1
       MAX(id) + 1, '[blob of data]'
FROM
       Table1;
I have a couple of large tables (188m and 144m rows) I need to populate from views, but each view contains a few hundred million rows (pulling together pseudo-dimensionally modelled data into a flat form). The key on each table is a composite of over 50 bytes of columns. If the data were in tables, I could always think about using sp_rename to make another new table, but that isn't really an option.
If I do a single INSERT operation, the process uses a huge amount of transaction log space, typically filling it up and prompting a bunch of hassle with the DBAs. (And yes, this is probably a job the DBAs should handle/design/architect.)
I can use SSIS and stream the data into the destination table with batch commits (but this does require the data to be transmitted over the network, since we are not allowed to run SSIS packages on the server).
Is there any approach other than dividing the process up into multiple INSERT operations, using some kind of key to distribute the rows into different batches, and doing a loop?
Does the view have ANY kind of unique identifier / candidate key? If so, you could select those rows into a working table using:
SELECT key_columns INTO dbo.temp FROM dbo.HugeView;
(If it makes sense, maybe put this table into a different database, perhaps with SIMPLE recovery model, to prevent the log activity from interfering with your primary database. This should generate much less log anyway, and you can free up the space in the other database before you resume, in case the problem is that you have inadequate disk space all around.)
Then you can do something like this, inserting 10,000 rows at a time, and backing up the log in between:
SET NOCOUNT ON;

DECLARE
    @batchsize INT,
    @ctr INT,
    @rc INT;

SELECT
    @batchsize = 10000,
    @ctr = 0;

WHILE 1 = 1
BEGIN
    WITH x AS
    (
        SELECT key_column, rn = ROW_NUMBER() OVER (ORDER BY key_column)
        FROM dbo.temp
    )
    INSERT dbo.PrimaryTable(a, b, c, etc.)
    SELECT v.a, v.b, v.c, etc.
    FROM x
    INNER JOIN dbo.HugeView AS v
    ON v.key_column = x.key_column
    WHERE x.rn > @batchsize * @ctr
    AND x.rn <= @batchsize * (@ctr + 1);

    IF @@ROWCOUNT = 0
        BREAK;

    BACKUP LOG PrimaryDB TO DISK = 'C:\db.bak' WITH INIT;

    SET @ctr = @ctr + 1;
END
That's all off the top of my head, so don't cut/paste/run, but I think the general idea is there. For more details (and why I backup log / checkpoint inside the loop), see this post on sqlperformance.com:
Break large delete operations into chunks
Note that if you are taking regular database and log backups, you will probably want to take a full backup afterwards to start your log chain over again.
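For example, a one-line sketch (the path is a placeholder):
BACKUP DATABASE PrimaryDB TO DISK = 'C:\PrimaryDB_full.bak' WITH INIT;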
You could partition your data and insert it in a cursor loop. That would be nearly the same as SSIS batch inserting, but it runs on your server.
create cursor ....
select YEAR(DateCol), MONTH(DateCol) from whatever
while ....
insert into yourtable(...)
select * from whatever
where YEAR(DateCol) = year and MONTH(DateCol) = month
end
I know this is an old thread, but I made a generic version of Arthur's cursor solution:
--Split a batch up into chunks using a cursor.
--This method can be used for most any large table with some modifications
--It could also be refined further with a @Day variable (for example)
DECLARE @Year INT
DECLARE @Month INT

DECLARE BatchingCursor CURSOR FOR
SELECT DISTINCT YEAR(<SomeDateField>), MONTH(<SomeDateField>)
FROM <SomeTable>;

OPEN BatchingCursor;
FETCH NEXT FROM BatchingCursor INTO @Year, @Month;
WHILE @@FETCH_STATUS = 0
BEGIN

    --All logic goes in here
    --Any select statements from <SomeTable> need to be suffixed with:
    --WHERE YEAR(<SomeDateField>) = @Year AND MONTH(<SomeDateField>) = @Month

    FETCH NEXT FROM BatchingCursor INTO @Year, @Month;
END;
CLOSE BatchingCursor;
DEALLOCATE BatchingCursor;
GO
This solved the problem on loads of our large tables.
There is no pixie dust, you know that.
Without knowing specifics about the actual schema being transferred, a generic solution would be exactly as you describe it: divide processing into multiple inserts and keep track of the key(s). This is sort of pseudo-code T-SQL:
create table currentKeys (table sysname not null primary key, key sql_variant not null);
go

declare @keysInserted table (key sql_variant);
declare @key sql_variant;

begin transaction
while (1=1)
begin
    select @key = key from currentKeys where table = '<target>';

    insert into <target> (...)
    output inserted.key into @keysInserted (key)
    select top (<batchsize>) ... from <source>
    where key > @key
    order by key;

    if (0 = @@rowcount)
        break;

    update currentKeys
    set key = (select max(key) from @keysInserted)
    where table = '<target>';

    commit;
    delete from @keysInserted;
    set @key = null;
    begin transaction;
end
commit
It would get more complicated if you want to allow for parallel batches and partition the keys.
You could use the BCP command to load the data and use the Batch Size parameter
http://msdn.microsoft.com/en-us/library/ms162802.aspx
Two-step process:
BCP OUT data from Views into Text files
BCP IN data from Text files into Tables with batch size parameter
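A sketch of the two bcp invocations; the server, database, object, and file names are placeholders (-T uses a trusted connection, -n keeps native format, -b sets the commit batch size):
rem Step 1: export the view's rows to a native-format data file
bcp "SELECT * FROM MyDb.dbo.HugeView" queryout huge.dat -n -S MyServer -T

rem Step 2: load the table, committing every 10000 rows
bcp MyDb.dbo.PrimaryTable in huge.dat -n -b 10000 -S MyServer -T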
This looks like a job for good ol' BCP.