I have to merge millions of rows into a table. The target table has an AFTER UPDATE trigger. The whole process is consuming a lot more memory than I'd like to allocate, and tempdb is eating up disk space.
I'd like to have the MERGE command run in batches of 100,000 records at a time. With SET ROWCOUNT being deprecated and cursors being inefficient, I'm not sure what the best approach for this would be.
A set-oriented approach is the most efficient way to run this, so what you are doing seems fine to me. If it is consuming tempdb, it would help to know whether any row-by-row operations are slowing things down. Generally, a MERGE is a single statement and is therefore efficient.
The other thing to consider: if you have indexes on the destination table, you could drop them, run the MERGE, and then recreate the indexes.
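As a rough sketch of that drop/recreate pattern (the index and column names here are hypothetical placeholders; substitute your own schema):

```sql
-- Hypothetical index name and column; adjust to your destination table.
DROP INDEX IX_dest_somecol ON dest;

-- ... run the MERGE here ...

CREATE INDEX IX_dest_somecol ON dest (somecol);
```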
Coming to your question, you can split the work into batches by using the modulo operator and running the MERGE in a loop, e.g.:
declare @i int
set @i = 10 -- number of batches; each handles roughly one tenth of the source rows
while @i > 0
begin
    merge
    into dest d
    using (select *
           from source
           where id % 10 = @i - 1 -- here id is the primary key of the source table
          ) s
    on d.id = s.id
    when matched then
        update set ...
    when not matched then
        insert ...
    -- ...rest of the insert/update logic here
    ;
    set @i = @i - 1
end
Try a WHILE loop:
DECLARE @I INT = 1
WHILE (@I > 0)
BEGIN
    ;MERGE INTO Dst
    USING (
        SELECT TOP (1000) *
        FROM Src
        WHERE NotUpdated = 1 -- some flag identifying rows not yet processed
    ) AS S
    ON ...
    ...
    SET @I = @@ROWCOUNT
END
I have 2 tables. Here is my merge statement:
MERGE INTO Transactions_Raw AS Target
USING Four_Fields AS Source
ON
Target.PI = Source.PI AND
Target.TIME_STAMP = Source.TIME_STAMP AND
Target.STS = Source.STS
WHEN MATCHED THEN
UPDATE SET
Target.FROM_APP_A = Source.FROM_APP_A,
Target.TO_APP_A = Source.TO_APP_A,
Target.FROM_APP_B = Source.FROM_APP_B,
Target.TO_APP_B = Source.TO_APP_B;
Both tables have around 77 million rows.
When I run this command, it fails due to tempdb growing and running out of disk space. I cannot get more space.
I tried running it as an update statement and it still failed.
I even tried using SSIS, sort and merge-join transformations from these 2 source tables into a 3rd empty table, left join on transactions_raw... It failed and crashed the server requiring a reboot.
I am thinking I need to run it in batches of 100,000 rows or something? How would I do that? Any other suggestions?
Disclaimer: this applies only if you're willing to use a framework for the job.
spring-batch is a good choice for this task.
Even though it involves some coding, it helps transfer the data in a seamless and consistent way.
You can choose the chunk size (e.g. 100,000 rows) to transfer at a time.
Any failure while inserting is then limited to one chunk of 100,000 rows, so you have less data to deal with, and only that delta needs to be re-inserted after the problem is fixed; the correct data is inserted successfully. spring-batch provides this whole set of controls.
I usually prefer BCP for such operations on huge tables.
If this query (make sure it matches the structure of the Target table)
SELECT
Target.ID,
Target.TIME_STAMP,
Target.STS,
ISNULL(Source.FROM_APP_A, Target.FROM_APP_A) AS FROM_APP_A,
ISNULL(Source.TO_APP_A , Target.TO_APP_A) AS TO_APP_A,
ISNULL(Source.FROM_APP_B, Target.FROM_APP_B) AS FROM_APP_B,
ISNULL(Source.TO_APP_B , Target.TO_APP_B) AS TO_APP_B
FROM Database.dbo.Transactions_Raw AS Target
LEFT JOIN Database.dbo.Four_Fields AS Source
ON Target.ID = Source.ID AND
Target.TIME_STAMP = Source.TIME_STAMP AND
Target.STS = Source.STS
won't overflow your TempDB, you can use it to export data into the file using something like
bcp "query_here" queryout D:\Data.dat -S server -Uuser -k -n
Later, import the data into a new table using something like
bcp Database.dbo.NewTable in D:\Data.dat -S server -Uuser -n -k -b 100000 -h TABLOCK
-- Example of doing the work in a loop, expanding on what I said in the comments;
-- you just have to plug your MERGE into it.
SET NOCOUNT ON;
DECLARE @PerBatchCount INT
DECLARE @MAXID BIGINT
DECLARE @WorkingOnID BIGINT
SET @PerBatchCount = 1000
-- Find the range of IDs to process
SELECT @WorkingOnID = MIN(ID), @MAXID = MAX(ID)
FROM MasterTableYouAreWorkingOn
WHILE @WorkingOnID <= @MAXID
BEGIN
    -- Update all the rows that exist in the source table NOW
    -- Your MERGE statement here, with the WHERE below added to restrict the ID range
    WHERE ID BETWEEN @WorkingOnID AND (@WorkingOnID + @PerBatchCount - 1)
    -- If the loop generates too much log, add this delay so the log has time to ship/empty out
    WAITFOR DELAY '00:00:10' -- wait 10 seconds
    SET @WorkingOnID = @WorkingOnID + @PerBatchCount
END
I want to delete 10GB (1%) of data from a 1TB table. I have come across several articles on deleting large amounts of data from a huge table, but didn't find much on deleting a smaller percentage of data from a huge table.
Additional details:
Trying to delete bot data from the visits table. The filter condition is a combination of fields: ip IN (a list of about 20 IPs) and useragent LIKE '%SOMETHING%'.
The useragent column is varchar(1024).
The data can be old or new, so I can't use a date filter.
Here is a batch delete in chunks that I use regularly. Perhaps it would give you some ideas on how to approach your need. I create a stored proc and call the proc from a SQL Agent Job. I generally schedule it to allow a transaction log backup between executions so the log does not grow too large. You could always just run it interactively if you wish.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROC [DBA_Delete_YourTableName] AS
SET NOCOUNT ON;
---------------------------------------------------------
DECLARE @DaysHistoryToKeep INT
SET @DaysHistoryToKeep = 90
IF @DaysHistoryToKeep < 30
    SET @DaysHistoryToKeep = 30
---------------------------------------------------------
DECLARE @continue INT
DECLARE @rowcount INT
DECLARE @loopCount INT
DECLARE @MaxLoops INT
DECLARE @TotalRows BIGINT
DECLARE @PurgeThruDate DATETIME
SET @PurgeThruDate = DATEADD(dd, (-1) * (@DaysHistoryToKeep + 1), GETDATE())
SET @MaxLoops = 100
SET @continue = 1
SET @loopCount = 0

SELECT @TotalRows = (SELECT COUNT(*) FROM YourTableName (NOLOCK) WHERE CREATEDDATETIME < @PurgeThruDate)
PRINT 'Total Rows = ' + CAST(@TotalRows AS VARCHAR(20))
PRINT ''

WHILE @continue = 1
BEGIN
    SET @loopCount = @loopCount + 1
    PRINT 'Loop # ' + CAST(@loopCount AS VARCHAR(10))
    PRINT CONVERT(VARCHAR(20), GETDATE(), 120)

    BEGIN TRANSACTION
    DELETE TOP (4500) YourTableName WHERE CREATEDDATETIME < @PurgeThruDate
    SET @rowcount = @@ROWCOUNT
    COMMIT

    PRINT 'Rows Deleted: ' + CAST(@rowcount AS VARCHAR(10))
    PRINT CONVERT(VARCHAR(20), GETDATE(), 120)
    PRINT ''

    IF @rowcount = 0 OR @loopCount >= @MaxLoops
    BEGIN
        SET @continue = 0
    END
END

SELECT @TotalRows = (SELECT COUNT(*) FROM YourTableName (NOLOCK) WHERE CREATEDDATETIME < @PurgeThruDate)
PRINT 'Total Rows Remaining = ' + CAST(@TotalRows AS VARCHAR(20))
PRINT ''
GO
The filter condition is ... ip in (list of ips about 20 of them) and useragent like '%SOMETHING%'
Regarding table size, it's important to touch as few rows as possible while executing the delete.
I imagine on a table that size you already have an index on the ip column. It might help (or not) to put your list of 20 or so IPs in a table instead of in an IN clause, especially if they're parameters. I'd look at the query plan to see.
I hope useragent LIKE '%SOMETHING%' is usually true; otherwise it's an expensive test, because SQL Server has to examine every row with an eligible ip. If it's not, a redesign that lets the query avoid LIKE would probably be beneficial.
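As a sketch of the table-of-IPs idea (the table and column names here are made up; verify the shape against your own schema and check the query plan):

```sql
-- Load the ~20 bot IPs into a small table once.
CREATE TABLE dbo.BotIps (ip VARCHAR(45) NOT NULL PRIMARY KEY);
INSERT INTO dbo.BotIps (ip) VALUES ('203.0.113.5'), ('198.51.100.7'); -- etc.

-- Join against it instead of a long IN list.
DELETE v
FROM dbo.visits AS v
INNER JOIN dbo.BotIps AS b ON v.ip = b.ip
WHERE v.useragent LIKE '%SOMETHING%';
```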
Deleting a smaller percentage isn't really relevant. Using selective search criteria is (per above), as is the size of the delete transaction in absolute terms. By definition, the size of the deletion in terms of rows and row size determines the size of the transaction. Very large transactions can push against machine resources. Breaking them up into smaller ones can yield better performance in such cases.
The last server I used had 0.25 TB RAM and was comfortable deleting 1 million rows at a time, but not 10 million. Your mileage will vary; you have to try, and observe, to see.
How much you're willing to tax the machine will depend on what else is (or needs to be able to) run at the same time. The way you break up one logical action -- delete all rows where [condition] -- into "chunks" also depends on what you want the database to look like while the delete procedure is in process, when some chunks are deleted and others remain present.
If you do decide to break it into chunks, I recommend not using a fixed number of rows and a TOP(n) syntax, because that's the least logical solution. Unless you use order by, you're leaving to the server to choose arbitrarily which N rows to delete. If you do use order by, you're requiring the server to sort the result before starting the delete, possibly several times over the whole run. Bleh!
Instead, find some logical subset of rows, ideally distinguishable along the clustered index, that falls beneath your machine's threshold of an acceptable number of rows to delete at one time. Loop over that set. In your case, I would be tempted to iterate over the set of ip values in the IN clause. Instead of delete ... where ip in (...), you get (roughly) for each ip: delete ... where ip = @ip.
The advantage of the latter approach is that you always know where the database stands. If you kill the procedure or it gets rolled back partway through its iteration, you can examine the database to see which IPs still remain (or whatever criteria you end up using). You avoid any kind of pathological behavior whereby some query gets a partial result because part of your selection criteria (determined by the server alone) is present and the rest deleted. In thinking about the problem you can say, "I haven't yet deleted ip 192.168.0.1," without wondering which portion has already been removed.
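A minimal sketch of that per-ip loop (assuming the bot IPs live in a lookup table; `dbo.BotIps` and the `dbo.visits` columns are placeholder names to adapt to your schema):

```sql
DECLARE @ip VARCHAR(45);

DECLARE ip_cursor CURSOR FOR
    SELECT ip FROM dbo.BotIps; -- hypothetical table holding the ~20 bot IPs

OPEN ip_cursor;
FETCH NEXT FROM ip_cursor INTO @ip;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- One logical chunk per ip: if interrupted, you know exactly which IPs remain.
    DELETE FROM dbo.visits
    WHERE ip = @ip AND useragent LIKE '%SOMETHING%';

    FETCH NEXT FROM ip_cursor INTO @ip;
END
CLOSE ip_cursor;
DEALLOCATE ip_cursor;
```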
In sum, I recommend:
Give the server every chance to touch only the rows you want to affect, and verify that's what it will do.
Construct your delete routine, if you need one, to delete logical chunks, so you can reason about the state of the database at any time.
I have a database on SQL Server 2008 R2.
On that database, a DELETE query on 400 million records has been running for 4 days, but I need to reboot the machine. How can I force it to commit whatever has been deleted so far? I want the rows deleted so far to stay deleted.
But the problem is that the query is still running and will not complete before the server reboot.
Note: I have not set any isolation level or BEGIN/END TRANSACTION for the query. The query is running in SSMS.
If the machine reboots or I cancel the query, the database will go into recovery mode and will be recovering for the next 2 days; then I'll need to re-run the delete, and it will cost me another 4 days.
I'd really appreciate any suggestion, help, or guidance on this.
I am a novice user of SQL Server.
Thanks in advance
Regards
There is no way to stop SQL Server from bringing the database back into a transactionally consistent state. Every single statement is implicitly a transaction itself (if not part of an outer transaction) and executes either completely or not at all. So if you cancel the query, disconnect, or reboot the server, SQL Server will use the transaction log to write the original values back to the updated data pages.
Next time you delete that many rows, don't do it all at once. Divide the job into smaller chunks (I always use 5,000 as a magic number, meaning I delete 5,000 rows at a time in a loop) to minimize transaction log use and locking.
set rowcount 5000
delete from YourTable -- plus your WHERE clause
while @@rowcount = 5000
    delete from YourTable
set rowcount 0
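Since SET ROWCOUNT is deprecated for data-modification statements, the same pattern can be sketched with DELETE TOP instead (table name and condition are placeholders for your own):

```sql
-- First batch, then keep looping while full batches are still being deleted.
DELETE TOP (5000) FROM dbo.YourTable WHERE <your condition>;
WHILE @@ROWCOUNT = 5000
    DELETE TOP (5000) FROM dbo.YourTable WHERE <your condition>;
```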
If you are deleting that many rows, you may have a better time with TRUNCATE. TRUNCATE deletes all rows from the table very efficiently. However, I'm assuming that you would like to keep some of the records. The stored procedure below backs up the data you would like to keep into a temp table, truncates the original, and then re-inserts the saved records. This can clean a huge table very quickly.
Note that TRUNCATE doesn't play well with foreign key constraints, so you may need to drop those and recreate them after the table is cleaned.
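A sketch of dropping and recreating a foreign key around the truncate (the constraint, table, and column names here are invented; use your own):

```sql
-- Drop the FK that references the table to be truncated.
ALTER TABLE dbo.ChildTable DROP CONSTRAINT FK_ChildTable_ParentTable;

TRUNCATE TABLE dbo.ParentTable;

-- Recreate the FK once the parent table is repopulated.
ALTER TABLE dbo.ChildTable
    ADD CONSTRAINT FK_ChildTable_ParentTable
    FOREIGN KEY (ParentId) REFERENCES dbo.ParentTable (Id);
```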
CREATE PROCEDURE [dbo].[deleteTableFast] (
    @TableName VARCHAR(100),
    @WhereClause VARCHAR(1000))
AS
BEGIN
    -- input:
    --   table name: the table to clean
    --   where clause: the WHERE clause of the records to KEEP
    DECLARE @tempTableName VARCHAR(100);
    SET @tempTableName = @TableName + '_temp_to_truncate';

    -- error checking
    IF EXISTS (SELECT [Table_Name] FROM Information_Schema.COLUMNS WHERE [TABLE_NAME] = @tempTableName)
    BEGIN
        PRINT 'ERROR: temp table already exists ... exiting'
        RETURN
    END
    IF NOT EXISTS (SELECT [Table_Name] FROM Information_Schema.COLUMNS WHERE [TABLE_NAME] = @TableName)
    BEGIN
        PRINT 'ERROR: table does not exist ... exiting'
        RETURN
    END

    -- save wanted records via a temp table to be able to truncate
    EXEC ('select * into ' + @tempTableName + ' from ' + @TableName + ' WHERE ' + @WhereClause);
    EXEC ('truncate table ' + @TableName);
    EXEC ('insert into ' + @TableName + ' select * from ' + @tempTableName);
    EXEC ('drop table ' + @tempTableName);
END
GO
You must understand the D (Durability) in ACID before you can understand why a database goes into recovery mode.
Generally speaking, you should avoid long-running SQL if possible. Long-running SQL means longer lock time on resources, a larger transaction log, and a huge rollback time when it fails.
Consider dividing your task by some id or time range. For example, if you want to insert a large volume of data from TableSrc into TableTarget, you can write a query like:
DECLARE @BatchCount INT = 1000;
DECLARE @Id INT = 0;
DECLARE @Max INT = ...; -- e.g. the maximum PrimaryKey in TableSrc
WHILE @Id < @Max
BEGIN
    INSERT INTO TableTarget
    SELECT * FROM TableSrc
    WHERE PrimaryKey >= @Id AND PrimaryKey < @Id + @BatchCount;
    SET @Id = @Id + @BatchCount;
END
It's uglier, more code, and more error-prone. But it's the only way I know to deal with huge data volumes.
Database : SQL Server 2005
Problem: Copy values from one column to another column in the same table with a billion+ rows.
test_table (int id, bigint bigid)
Tried 1 - an update query:
update test_table set bigid = id
fills up the transaction log and rolls back due to lack of transaction log space.
Tried 2 - a procedure along the following lines:
set nocount on
set rowcount 500000
declare @rowcount int, @rowsupdated int
set @rowcount = 500000
set @rowsupdated = 0
while @rowcount > 0
begin
    update test_table set bigid = id where bigid is null
    set @rowcount = @@rowcount
    set @rowsupdated = @rowsupdated + @rowcount
end
print @rowsupdated
The above procedure starts slowing down as it proceeds.
Tried 3 - creating a cursor for update.
This is generally discouraged in the SQL Server documentation, and this approach updates one row at a time, which is too time-consuming.
Is there an approach that can speed up the copying of values from one column to another? Basically I am looking for some 'magic' keyword or logic that will let the update query rip through the billion rows half a million at a time, sequentially.
Any hints, pointers will be much appreciated.
I'm going to guess that you are closing in on the 2.1 billion limit of an INT datatype on an artificial key column. Yes, that's a pain. It's much easier to fix before the fact than after you've actually hit the limit and production is shut down while you are trying to fix it :)
Anyway, several of the ideas here will work. Let's talk about speed, efficiency, indexes, and log size, though.
Log Growth
The log blew up originally because it was trying to commit all 2b rows at once. The suggestions in other posts for "chunking it up" will work, but that may not totally resolve the log issue.
If the database is in SIMPLE mode, you'll be fine (the log will re-use itself after each batch). If the database is in FULL or BULK_LOGGED recovery mode, you'll have to run log backups frequently during the running of your operation so that SQL can re-use the log space. This might mean increasing the frequency of the backups during this time, or just monitoring the log usage while running.
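In FULL or BULK_LOGGED recovery, that might look like a log backup run between batches (the database name and backup path are placeholders):

```sql
-- Run between batches so committed log space can be reused
-- instead of the log file growing for the whole operation.
BACKUP LOG YourDb TO DISK = N'D:\Backups\YourDb_log.trn';
```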
Indexes and Speed
ALL of the WHERE bigid IS NULL answers will slow down as the table is populated, because there is (presumably) no index on the new BIGID field. You could, of course, just add an index on BIGID, but I'm not convinced that is the right answer.
The key (pun intended) is my assumption that the original ID field is probably the primary key, or the clustered index, or both. In that case, let's take advantage of that fact and do a variation of Jess' idea:
declare @counter int
set @counter = 1
while @counter < 2000000000 -- or whatever
begin
    update test_table set bigid = id
    where id between @counter and (@counter + 499999) -- BETWEEN is inclusive
    set @counter = @counter + 500000
end
This should be extremely fast, because of the existing indexes on ID.
The ISNULL check really wasn't necessary anyway, and neither is the -1 on the interval. If we duplicate some rows between calls, that's not a big deal.
Use TOP in the UPDATE statement:
UPDATE TOP (@row_limit) dbo.test_table
SET bigid = id
WHERE bigid IS NULL
You could try to use something like SET ROWCOUNT and do batch updates:
SET ROWCOUNT 5000;
UPDATE dbo.test_table
SET bigid = id
WHERE bigid IS NULL
GO
and then repeat this as many times as you need to.
This way, you're avoiding the RBAR (row-by-agonizing-row) symptoms of cursors and while loops, and yet, you don't unnecessarily fill up your transaction log.
Of course, in between runs, you'd have to do backups (especially of your log) to keep its size within reasonable limits.
Is this a one time thing? If so, just do it by ranges:
declare @counter int
set @counter = 500000
while @counter < 2000000000 -- or whatever your max id
begin
    update test_table set bigid = id
    where id between (@counter - 500000) and @counter and bigid is null
    set @counter = @counter + 500000
end
I didn't run this to try it, but if you can get it to update 500k at a time I think you're moving in the right direction.
set rowcount 500000
update tt1
set bigid = tt2.id
from test_table tt1
inner join test_table tt2 on tt1.id = tt2.id
where tt1.bigid is null
You can also try changing the recovery model so the log space can be reused (note that SIMPLE recovery still logs the update, but the log truncates at checkpoints instead of growing until a log backup):
ALTER DATABASE db1
SET RECOVERY SIMPLE
GO
update test_table
set bigid = id
GO
ALTER DATABASE db1
SET RECOVERY FULL
GO
The first step, if there are any, would be to drop the indexes before the operation. This is probably what is causing the slowdown over time.
The other option is a little outside-the-box thinking: can you express the update in such a way that you could materialize the column values in a SELECT? If so, you could create what amounts to a NEW table using SELECT INTO, which is a minimally logged operation (assuming, in 2005, that you are set to a recovery model of SIMPLE or BULK_LOGGED). This would be pretty fast; then you can drop the old table, rename the new table to the old table name, and recreate any indexes.
select id, CAST(id as bigint) bigid into test_table_temp from test_table
drop table test_table
exec sp_rename 'test_table_temp', 'test_table'
I second the UPDATE TOP(X) statement.
Also, if you're in a loop, add a WAITFOR DELAY or COMMIT between batches, to allow other processes some time to use the table rather than blocking until all the updates are complete.
I have a couple of large tables (188m and 144m rows) I need to populate from views, but each view contains a few hundred million rows (pulling together pseudo-dimensionally modelled data into a flat form). The key on each table is a composite of over 50 bytes of columns. If the data were in tables, I could always think about using sp_rename to swap in the new table, but that isn't really an option.
If I do a single INSERT operation, the process uses a huge amount of transaction log space, typically filling it up and prompting a bunch of hassle with the DBAs. (And yes, this is probably a job the DBAs should handle/design/architect.)
I can use SSIS and stream the data into the destination table with batch commits (but this does require the data to be transmitted over the network, since we are not allowed to run SSIS packages on the server).
Any things other than to divide the process up into multiple INSERT operations using some kind of key to distribute the rows into different batches and doing a loop?
Does the view have ANY kind of unique identifier / candidate key? If so, you could select those rows into a working table using:
SELECT key_columns INTO dbo.temp FROM dbo.HugeView;
(If it makes sense, maybe put this table into a different database, perhaps with SIMPLE recovery model, to prevent the log activity from interfering with your primary database. This should generate much less log anyway, and you can free up the space in the other database before you resume, in case the problem is that you have inadequate disk space all around.)
Then you can do something like this, inserting 10,000 rows at a time, and backing up the log in between:
SET NOCOUNT ON;

DECLARE
    @batchsize INT,
    @ctr INT,
    @rc INT;

SELECT
    @batchsize = 10000,
    @ctr = 0;

WHILE 1 = 1
BEGIN
    WITH x AS
    (
        SELECT key_column, rn = ROW_NUMBER() OVER (ORDER BY key_column)
        FROM dbo.temp
    )
    INSERT dbo.PrimaryTable(a, b, c, etc.)
    SELECT v.a, v.b, v.c, etc.
    FROM x
    INNER JOIN dbo.HugeView AS v
        ON v.key_column = x.key_column
    WHERE x.rn > @batchsize * @ctr
        AND x.rn <= @batchsize * (@ctr + 1);

    IF @@ROWCOUNT = 0
        BREAK;

    BACKUP LOG PrimaryDB TO DISK = 'C:\db.bak' WITH INIT;

    SET @ctr = @ctr + 1;
END
That's all off the top of my head, so don't cut/paste/run, but I think the general idea is there. For more details (and why I backup log / checkpoint inside the loop), see this post on sqlperformance.com:
Break large delete operations into chunks
Note that if you are taking regular database and log backups, you will probably want to take a full backup afterwards to start your log chain over again.
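For example (the database name and backup path are placeholders):

```sql
-- A fresh full backup starts a new log chain after the
-- WITH INIT log backups taken during the loop.
BACKUP DATABASE PrimaryDB TO DISK = N'D:\Backups\PrimaryDB_full.bak' WITH INIT;
```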
You could partition your data and insert it in a cursor loop. That would be nearly the same as SSIS batch-inserting, but it runs on your server.
declare cursor ....
    select YEAR(DateCol), MONTH(DateCol) from whatever
while ....
    insert into yourtable(...)
    select * from whatever
    where YEAR(DateCol) = @year and MONTH(DateCol) = @month
end
I know this is an old thread, but I made a generic version of Arthur's cursor solution:
--Split a batch up into chunks using a cursor.
--This method can be used for most any large table with some modifications
--It could also be refined further with an #Day variable (for example)
DECLARE @Year INT
DECLARE @Month INT

DECLARE BatchingCursor CURSOR FOR
    SELECT DISTINCT YEAR(<SomeDateField>), MONTH(<SomeDateField>)
    FROM <SomeTable>;

OPEN BatchingCursor;
FETCH NEXT FROM BatchingCursor INTO @Year, @Month;
WHILE @@FETCH_STATUS = 0
BEGIN
    --All logic goes in here
    --Any select statements from <SomeTable> need to be suffixed with:
    --WHERE YEAR(<SomeDateField>) = @Year AND MONTH(<SomeDateField>) = @Month
    FETCH NEXT FROM BatchingCursor INTO @Year, @Month;
END;
CLOSE BatchingCursor;
DEALLOCATE BatchingCursor;
GO
This solved the problem on loads of our large tables.
There is no pixie dust, you know that.
Without knowing specifics about the actual schema being transferred, a generic solution would be exactly as you describe it: divide the processing into multiple inserts and keep track of the key(s). This is sort of pseudo-code T-SQL:
create table currentKeys (table sysname not null primary key, key sql_variant not null);
go

declare @keysInserted table (key sql_variant);
declare @key sql_variant;
begin transaction
while (1=1)
begin
    select @key = key from currentKeys where table = '<target>';
    insert into <target> (...)
    output inserted.key into @keysInserted (key)
    select top (<batchsize>) ... from <source>
    where key > @key
    order by key;
    if (0 = @@rowcount)
        break;
    update currentKeys
    set key = (select max(key) from @keysInserted)
    where table = '<target>';
    commit;
    delete from @keysInserted;
    set @key = null;
    begin transaction;
end
commit
It would get more complicated if you want to allow for parallel batches and partition the keys.
You could use the BCP command to load the data, making use of the batch size parameter:
http://msdn.microsoft.com/en-us/library/ms162802.aspx
Two step process
BCP OUT data from Views into Text files
BCP IN data from Text files into Tables with batch size parameter
This looks like a job for good ol' BCP.