How Do I Repeatedly Run SQL Delete Statements and Shrink Transaction Log? - sql

I have a customer who is running out of space on their drive, and it's entirely taken up by the SQL DB and transaction log. Unfortunately, moving the DB and log is not an option available to us at the current time, so I need to figure out how to delete lines from 2 massive tables. So far a coworker has spent 4 days trying to do this and has not been able to put a dent in it. There are only 13GB available for the transaction log, and deleting large quantities from these tables wipes that 13GB out really quickly.

Obviously the quickest thing to do would be to move what we want to keep to a temporary table, truncate the existing tables, then move the data back. Unfortunately, this is an extremely busy environment within a hospital, and there are tens of thousands of lines being written to these tables every hour. So, unfortunately, we can't temporarily stop writing to these tables in order to truncate them.
So we've been deleting a month of data at a time from each of these tables, then shrinking the transaction log to do it again. I feel like there's got to be a way to just get this to repeat, but I'm not entirely sure what I'm doing... I tried:
delete top (10000)
from Table1_
where CreationDate_ < '2017-06-01'
Go
delete top (10000)
from Table2_
where CreationDate_ < '2017-06-01'
Go
dbcc shrinkfile (SQL_Log,4)
Go 2
Go 2
This appears to remove 10,000 lines from each table, then run the shrinkfile for the log twice (for some reason it doesn't fully shrink down to the 4224kb size when we only run it twice), but it does not seem to repeat. I've also tried wrapping everything in parentheses, starting at the first delete statement and ending after the first "Go 2" line. When I do that it just says:
Incorrect syntax near "Go"
Anybody have any clue how to do this? If we can get this to work, I plan on increasing the delete statements to a number much larger than 10,000, and increasing the repeat on the script to something much larger than 2, but I need to get it to work before I can actually do that...

You can use a WHILE loop and control the number of iterations using a COUNT of records.
DECLARE @Chunk INT = 10000
DECLARE @Date DATE = '2017-06-01'
DECLARE @Cnt INT = (SELECT COUNT(*) FROM Table1_ WHERE CreationDate_ < @Date)

WHILE @Cnt > 0
BEGIN
    DELETE TOP (@Chunk)
    FROM Table1_
    WHERE CreationDate_ < @Date

    SET @Cnt = @Cnt - @Chunk
END

-- Move on to the next group
SET @Date = '2017-07-01'
SET @Cnt = (SELECT COUNT(*) FROM Table1_ WHERE CreationDate_ < @Date)

WHILE @Cnt > 0
BEGIN
    -- Your delete query
    SET @Cnt = @Cnt - @Chunk
END

-- and so on

You don't need to shrink the log file repeatedly. If the database is in simple recovery mode the log file will not continue to grow, so long as you don't delete too many rows in a single transaction. Once all the rows have been purged you can shrink the log file down to something reasonable for the environment.
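If it helps to see both ideas together, here is a minimal sketch along those lines, reusing the table, column, and log file names from the question (the batch size is only a starting point): the deletes are chunked and driven by @@ROWCOUNT, so no COUNT(*) bookkeeping is needed, and SHRINKFILE runs once at the end instead of inside the loop.
DECLARE @Cutoff DATE = '2017-06-01';

-- Purge Table1_ in 10,000-row chunks; each delete is its own small transaction.
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM Table1_ WHERE CreationDate_ < @Cutoff;
    IF @@ROWCOUNT = 0 BREAK;
END

-- Same pattern for Table2_.
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM Table2_ WHERE CreationDate_ < @Cutoff;
    IF @@ROWCOUNT = 0 BREAK;
END

-- One shrink at the end, rather than after every batch.
DBCC SHRINKFILE (SQL_Log, 4);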

Related

Query Session no longer respond

I'm trying to execute the following T-SQL Statement:
SET NOCOUNT ON
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
BEGIN TRANSACTION

DECLARE @nRows INT = 1
DECLARE @DataCancellazione DATE = DATEADD(DAY, -7, GETDATE())

CREATE TABLE #IDToDel (ID BIGINT)

WHILE @nRows > 0
BEGIN
    INSERT INTO #IDToDel
    SELECT TOP 5000 LogID
    FROM MioDB.Test
    WHERE CAST(ReceivedDate AS date) < @DataCancellazione

    SELECT @nRows = @@ROWCOUNT

    DELETE RM WITH (PAGLOCK)
    FROM MioDB.Test RM WITH (PAGLOCK)
    INNER JOIN #IDToDel TBD ON RM.LogID = TBD.ID

    TRUNCATE TABLE #IDToDel
END

ROLLBACK
When I launch the execution, the query window seems to stop responding, without any particular increase in CPU time or disk IO for the process. Can anyone help me? Thanks.
Honestly, I think you're overly complicating the problem. SQL Server can easily handle processing millions of rows in one go, and I suspect that you could likely do this in a few 1M row batches. If you have at least 4,000,000 rows you want to delete, then at 5,000 a batch that will take 800 iterations.
There is also no need for the temporary table: a DELETE can make use of TOP, so you can just delete that many rows each cycle. I define this with a variable, and then pass that (1,000,000 rows). This would mean everything is deleted in 4 iterations, not 800. You may want to reduce the size a little, but I would suggest that 500,000 is easy pickings, for instance.
This gives you the following more succinct batch:
SET NOCOUNT ON;

--The following transaction level seems like a terrible idea when you're performing DDL statements. Don't, just don't.
--SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

DECLARE @BatchSize int = 1000000,
        @DataCancellazione date = DATEADD(DAY, -7, GETDATE());

SELECT 1; --Dataset with 1 row

WHILE @@ROWCOUNT > 0
    DELETE TOP (@BatchSize)
    FROM MioDB.Test --A schema called "MioDB" is a little confusing
    WHERE ReceivedDate < @DataCancellazione; --Casting ReceivedDate would have made no difference to the query,
                                             --and could well have slowed it down.

Stored Procedure for batch delete in Firebird

I need to delete a bunch of records (literally millions) but I don't want to make it in an individual statement, because of performance issues. So I created a view:
CREATE VIEW V1
AS
SELECT FIRST 500000 *
FROM TABLE
WHERE W_ID = 14
After that I do a bunch deletes for example:
DELETE FROM V1 WHERE TS < 2021-01-01
What I want is to import this logic in a While loop and in stored procedure. I tried SELECT COUNT query like this:
SELECT COUNT(*)
FROM TABLE
WHERE W_ID = 14 AND TS < 2021-01-01;
Can I use this number in the same procedure as a condition and how can I manage that?
This is what I have tried and I get an error
ERROR: Dynamic SQL Error; SQL error code = -104; Token unknown; WHILE
Code:
CREATE PROCEDURE DeleteBatch
AS
DECLARE VARIABLE CNT INT;
BEGIN
SELECT COUNT(*) FROM TABLE WHERE W_ID = 14 AND TS < 2021-01-01 INTO :cnt;
WHILE cnt > 0 do
BEGIN
IF (cnt > 0) THEN
DELETE FROM V1 WHERE TS < 2021-01-01;
END
ELSE break;
END
I just can't wrap my head around this.
To clarify, in my previous question I wanted to know how to manage the garbage_collection after many deleted records, and I did what was suggested - SELECT * FROM TABLE; or gfix -sweep and that worked very well. As mentioned in the comments the correct statement is SELECT COUNT(*) FROM TABLE;
After that, another even bigger database was given to me - above 50 million rows. The problem was that the DB was very slow to work with, and I managed to bring down the server it was on with a DELETE statement meant to clean up the database.
That's why I wanted to try deleting in batches. The slow-down problem there was purely hardware - the HDD had failed, and we replaced it. After that there was no problem with executing statements and doing a backup and restore to reclaim disk space.
Provided the data that you need to delete never needs to be rolled back once the stored procedure is kicked off, there is another way to handle massive DELETEs in a stored procedure.
The example stored procedure will delete the rows 500,000 at a time. It will loop until there aren't any more rows to delete. The AUTONOMOUS TRANSACTION will allow you to put each delete statement in its own transaction and it will commit immediately after the statement completes. This is issuing an implicit commit inside a stored procedure, which you normally can't do.
CREATE OR ALTER PROCEDURE DELETE_TABLEXYZ_ROWS
AS
DECLARE VARIABLE RC INTEGER;
BEGIN
  RC = 9999;

  WHILE (RC > 0) DO
  BEGIN
    IN AUTONOMOUS TRANSACTION DO
    BEGIN
      DELETE FROM TABLEXYZ ROWS 500000;
      RC = ROW_COUNT;
    END
  END

  SELECT COUNT(*)
  FROM TABLEXYZ
  INTO :RC;
END
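Once created, the procedure can be run ad hoc or from a scheduled job like any other Firebird procedure, for example:
EXECUTE PROCEDURE DELETE_TABLEXYZ_ROWS;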
because of performance issues
What are those exactly? I do not think you are actually improving performance by just running deletes in a loop within the same transaction, or even in different transactions within the same timespan. You seem to be solving the wrong problem. The issue is not how you create "garbage", but how and when Firebird "collects" it.
For example, SELECT COUNT(*) in the InterBase/Firebird engines means a natural scan over the whole table, and garbage collection is often triggered by it, which can itself take a long time if a lot of garbage was created (and a massive delete surely creates it, no matter whether it is done by one million-row statement or a million one-row statements).
How to delete large data from Firebird SQL database
If you really want to slow down deletion, you have to spread that activity around the clock and make your client application call a deleting SP, for example, once every 15 minutes. You would have to add a column to the table flagging rows marked for deletion, and then do the job like this:
CREATE PROCEDURE DeleteBatch(CNT INT)
AS
DECLARE ROW_ID INTEGER;
BEGIN
  FOR SELECT ID FROM TABLENAME WHERE MARKED_TO_DEL > 0 INTO :ROW_ID
  DO BEGIN
    CNT = CNT - 1;
    DELETE FROM TABLENAME WHERE ID = :ROW_ID;
    IF (CNT <= 0) THEN LEAVE;
  END

  SELECT COUNT(1) FROM TABLENAME INTO :ROW_ID; /* force GC now */
END
...and every 15 minutes you do EXECUTE PROCEDURE DeleteBatch(1000).
Overall this probably would only be slower, because of single-row "precision targeting" - but at least it would spread the delays.
Use DELETE...ROWS.
https://firebirdsql.org/file/documentation/html/en/refdocs/fblangref25/firebird-25-language-reference.html#fblangref25-dml-delete-orderby
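As a rough sketch only (reusing the question's placeholder table name and filter, with the date written as a literal), a batched delete with ROWS and no intermediate view could look like:
-- "TABLE" is the question's placeholder name; substitute the real table name.
DELETE FROM TABLE
WHERE W_ID = 14 AND TS < DATE '2021-01-01'
ROWS 500000;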
But as I already said in the answer to the previous question it is better to spend time investigating source of slowdown instead of workaround it by deleting data.

How to efficiently delete small set of data from a large sql table

I want to delete 10GB (1%) data from 1TB table. I have come across several articles to delete large amounts of data from a huge table but didn't find much on deleting smaller percentage of data from a huge table.
Additional details:
Trying to delete bot data from the visits table. The filter condition is a combination of fields... ip in (list of ips about 20 of them) and useragent like '%SOMETHING%'
useragent size 1024 varchar
The data can be old or new. I can't use date filter
Here is a batch delete in chunks that I use regularly. Perhaps it would give you some ideas on how to approach your need. I create a stored proc and call the proc from a SQL Agent Job. I generally schedule it to allow a transaction log backup between executions so the log does not grow too large. You could always just run it interactively if you wish.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE PROC [DBA_Delete_YourTableName] AS

SET NOCOUNT ON;

---------------------------------------------------------
DECLARE @DaysHistoryToKeep INT
SET @DaysHistoryToKeep = 90
IF @DaysHistoryToKeep < 30
    SET @DaysHistoryToKeep = 30
---------------------------------------------------------

DECLARE @continue INT
DECLARE @rowcount INT
DECLARE @loopCount INT
DECLARE @MaxLoops INT
DECLARE @TotalRows BIGINT
DECLARE @PurgeThruDate DATETIME

SET @PurgeThruDate = DATEADD(dd, (-1) * (@DaysHistoryToKeep + 1), GETDATE())
SET @MaxLoops = 100
SET @continue = 1
SET @loopCount = 0

SELECT @TotalRows = (SELECT COUNT(*) FROM YourTableName (NOLOCK) WHERE CREATEDDATETIME < @PurgeThruDate)
PRINT 'Total Rows = ' + CAST(@TotalRows AS VARCHAR(20))
PRINT ''

WHILE @continue = 1
BEGIN
    SET @loopCount = @loopCount + 1
    PRINT 'Loop # ' + CAST(@loopCount AS VARCHAR(10))
    PRINT CONVERT(VARCHAR(20), GETDATE(), 120)

    BEGIN TRANSACTION
    DELETE TOP (4500) YourTableName WHERE CREATEDDATETIME < @PurgeThruDate
    SET @rowcount = @@ROWCOUNT
    COMMIT

    PRINT 'Rows Deleted: ' + CAST(@rowcount AS VARCHAR(10))
    PRINT CONVERT(VARCHAR(20), GETDATE(), 120)
    PRINT ''

    IF @rowcount = 0 OR @loopCount >= @MaxLoops
    BEGIN
        SET @continue = 0
    END
END

SELECT @TotalRows = (SELECT COUNT(*) FROM YourTableName (NOLOCK) WHERE CREATEDDATETIME < @PurgeThruDate)
PRINT 'Total Rows Remaining = ' + CAST(@TotalRows AS VARCHAR(20))
PRINT ''

GO
The filter condition is ... ip in (list of ips about 20 of them) and useragent like '%SOMETHING%'
Regarding table size, it's important to touch as few rows as possible while executing the delete.
I imagine on a table that size you already have an index on the ip column. It might help (or not) to put your list of 20 or so ips in a table instead of in an IN clause, especially if they're parameters. I'd look at the query plan to see.
I hope useragent like '%SOMETHING%' is usually true; otherwise it's an expensive test because SQL Server has to examine every row for an eligible ip. If not, a redesign to allow the query to avoid like would probably be beneficial.
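As a rough illustration of the "ips in a table" idea (every name here is a placeholder, not taken from the original post), the IN list can be materialized and joined like this:
-- Hypothetical names: dbo.visits(ip, useragent) stands in for the real table.
CREATE TABLE #BotIPs (ip VARCHAR(45) PRIMARY KEY);

INSERT INTO #BotIPs (ip)
VALUES ('203.0.113.10'), ('203.0.113.11');  -- ...the ~20 known bot addresses

DELETE v
FROM dbo.visits AS v
INNER JOIN #BotIPs AS b ON b.ip = v.ip
WHERE v.useragent LIKE '%SOMETHING%';
Comparing the plan for this against the original IN-list version is the quickest way to see whether it actually helps.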
[D]eleting a smaller percentage isn't really relevant. Using selective search criteria is (per above), as is the size of the delete transaction in absolute terms. By definition, the size of the deletion in terms of rows and row size determines the size of the transaction. Very large transactions can push against machine resources. Breaking them up into smaller ones can yield better performance in such cases.
The last server I used had 0.25 TB RAM and was comfortable deleting 1 million rows at a time, but not 10 million. Your mileage will vary; you have to try, and observe, to see.
How much you're willing to tax the machine will depend on what else is (or needs to be able to) run at the same time. The way you break up one logical action -- delete all rows where [condition] -- into "chunks" also depends on what you want the database to look like while the delete procedure is in process, when some chunks are deleted and others remain present.
If you do decide to break it into chunks, I recommend not using a fixed number of rows and a TOP(n) syntax, because that's the least logical solution. Unless you use order by, you're leaving to the server to choose arbitrarily which N rows to delete. If you do use order by, you're requiring the server to sort the result before starting the delete, possibly several times over the whole run. Bleh!
Instead, find some logical subset of rows, ideally distinguishable along the clustered index, that falls beneath your machine's threshold of an acceptable number of rows to delete at one time. Loop over that set. In your case, I would be tempted to iterate over the set of ip values in the IN clause. Instead of delete ... where ip in (...), you get (roughly) for each ip: delete ... where ip = @ip.
The advantage of the latter approach is that you always know where the database stands. If you kill the procedure or it gets rolled back partway through its iteration, you can examine the database to see which ips still remain (or whatever criteria you end up using). You avoid any kind of pathological behavior whereby some query gets a partial result because some part of your selection criteria (determined by the server alone) is still present and the rest deleted. In thinking about the problem, you can say "I haven't deleted ip 192.168.0.1 yet because ..." without wondering what portion of its rows has already been removed.
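A minimal sketch of that per-ip loop, again with placeholder names (the #BotIPs list table from the sketch above and a dbo.visits table):
DECLARE @ip VARCHAR(45);

DECLARE ip_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT ip FROM #BotIPs;

OPEN ip_cursor;
FETCH NEXT FROM ip_cursor INTO @ip;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- One logical chunk per ip: at any point it is obvious which ips are done.
    DELETE FROM dbo.visits
    WHERE ip = @ip
      AND useragent LIKE '%SOMETHING%';

    FETCH NEXT FROM ip_cursor INTO @ip;
END

CLOSE ip_cursor;
DEALLOCATE ip_cursor;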
In sum, I recommend:
Give the server every chance to touch only the rows you want to affect, and verify that's what it will do.
Construct your delete routine, if you need one, to delete logical chunks, so you can reason about the state of the database at any time.

How to force a running t-sql query (half done) to commit?

I have a database on SQL Server 2008 R2.
On that database, a delete query on 400 million records has been running for 4 days, but I need to reboot the machine. How can I force it to commit whatever has been deleted so far? I want the rows the running query has already deleted to stay deleted.
But the problem is that the query is still running and will not complete before the server reboot.
Note: I have not set any isolation level or begin/end transaction for the query. The query is running in SSMS.
If the machine reboots or I cancel the query, then the database will go into recovery mode and will be recovering for the next 2 days; then I would need to re-run the delete, and it will cost me another 4 days.
I really appreciate any suggestion, help, or guidance on this. I am a novice user of SQL Server.
Thanks in Advance
Regards
There is no way to stop SQL Server from trying to bring the database into a transactionally consistent state. Every single statement is implicitly a transaction itself (if not part of an outer transaction) and executes either completely or not at all. So if you cancel the query, disconnect, or reboot the server, SQL Server will use the transaction log to write the original values back to the updated data pages.
Next time when you delete so many rows, don't do it all at once. Divide the job into smaller chunks (I always use 5,000 as a magic number, meaning I delete 5,000 rows at a time in a loop) to minimize transaction log use and locking.
set rowcount 5000
delete YourTable
while @@rowcount = 5000
    delete YourTable
set rowcount 0
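On SQL Server 2008 and later, the same chunking pattern is usually written with DELETE TOP (n), since using SET ROWCOUNT to limit DELETE/INSERT/UPDATE statements is deprecated. A minimal sketch, with YourTable and its filter as placeholders:
DECLARE @BatchSize INT = 5000;

WHILE 1 = 1
BEGIN
    -- Use the same WHERE filter you would use for the full delete.
    DELETE TOP (@BatchSize) FROM YourTable;

    IF @@ROWCOUNT < @BatchSize BREAK;
END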
If you are deleting that many rows you may have a better time with truncate. Truncate deletes all rows from the table very efficiently. However, I'm assuming that you would like to keep some of the records in the table. The stored procedure below backs up the data you would like to keep into a temp table then truncates then re-inserts the records that were saved. This can clean a huge table very quickly.
Note that truncate doesn't play well with foreign key constraints, so you may need to drop those and then recreate them after the table is cleaned.
CREATE PROCEDURE [dbo].[deleteTableFast] (
    @TableName VARCHAR(100),
    @WhereClause VARCHAR(1000))
AS
BEGIN
    -- input:
    -- table name: is the table to use
    -- where clause: is the where clause of the records to KEEP

    declare @tempTableName varchar(100);
    set @tempTableName = @TableName + '_temp_to_truncate';

    -- error checking
    if exists (SELECT [Table_Name] FROM Information_Schema.COLUMNS WHERE [TABLE_NAME] = (@tempTableName)) begin
        print 'ERROR: already temp table ... exiting'
        return
    end
    if not exists (SELECT [Table_Name] FROM Information_Schema.COLUMNS WHERE [TABLE_NAME] = (@TableName)) begin
        print 'ERROR: table does not exist ... exiting'
        return
    end

    -- save wanted records via a temp table to be able to truncate
    exec ('select * into ' + @tempTableName + ' from ' + @TableName + ' WHERE ' + @WhereClause);
    exec ('truncate table ' + @TableName);
    exec ('insert into ' + @TableName + ' select * from ' + @tempTableName);
    exec ('drop table ' + @tempTableName);
end
GO
You need to understand the D (Durability) in ACID before you can understand why the database goes into recovery mode.
Generally speaking, you should avoid long-running SQL if possible. Long-running SQL means more lock time on resources, a larger transaction log, and a huge rollback time when it fails.
Consider dividing your task by some ID or time range. For example, if you want to insert a large volume of data from TableSrc to TableTarget, you can write a query like:
DECLARE @BatchCount INT = 1000;
DECLARE @Id INT = 0;
DECLARE @Max INT = ...;

WHILE @Id < @Max
BEGIN
    INSERT INTO TableTarget
    SELECT *
    FROM TableSrc
    WHERE PrimaryKey >= @Id AND PrimaryKey < @Id + @BatchCount;

    SET @Id = @Id + @BatchCount;
END
It's uglier, it's more code, and it's more error prone. But it's the only way I know to deal with huge data volumes.

Copy one column to another for over a billion rows in SQL Server database

Database : SQL Server 2005
Problem : Copy values from one column to another column in the same table with a billion+
rows.
test_table (int id, bigint bigid)
Things tried 1: update query
update test_table set bigid = id
fills up the transaction log and rolls back due to lack of transaction log space.
Tried 2 - a procedure along the following lines
set nocount on
set rowcount 500000
while @rowcount > 0
begin
    update test_table set bigid = id where bigid is null
    set @rowcount = @@rowcount
    set @rowsupdated = @rowsupdated + @rowcount
end
print @rowsupdated
The above procedure starts slowing down as it proceeds.
Tried 3 - Creating a cursor for update.
This is generally discouraged in SQL Server documentation, and this approach updates one row at a time, which is too time consuming.
Is there an approach that can speed up the copying of values from one column to another. Basically I am looking for some 'magic' keyword or logic that will allow the update query to rip through the billion rows half a million at a time sequentially.
Any hints, pointers will be much appreciated.
I'm going to guess that you are closing in on the 2.1 billion limit of an INT datatype on an artificial key for a column. Yes, that's a pain. Much easier to fix before the fact than after you've actually hit that limit and production is shut down while you are trying to fix it :)
Anyway, several of the ideas here will work. Let's talk about speed, efficiency, indexes, and log size, though.
Log Growth
The log blew up originally because it was trying to commit all 2b rows at once. The suggestions in other posts for "chunking it up" will work, but that may not totally resolve the log issue.
If the database is in SIMPLE mode, you'll be fine (the log will re-use itself after each batch). If the database is in FULL or BULK_LOGGED recovery mode, you'll have to run log backups frequently during the running of your operation so that SQL can re-use the log space. This might mean increasing the frequency of the backups during this time, or just monitoring the log usage while running.
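For example (the database name and backup path are placeholders), a log backup taken between batches under FULL or BULK_LOGGED recovery would look something like:
-- Hypothetical names; run between batches so the log space can be reused.
BACKUP LOG YourDatabase
TO DISK = N'D:\Backups\YourDatabase_log.trn';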
Indexes and Speed
ALL of the where bigid is null answers will slow down as the table is populated, because there is (presumably) no index on the new BIGID field. You could, (of course) just add an index on BIGID, but I'm not convinced that is the right answer.
The key (pun intended) is my assumption that the original ID field is probably the primary key, or the clustered index, or both. In that case, let's take advantage of that fact and do a variation of Jess' idea:
declare @counter bigint
set @counter = 1
while @counter < 2000000000 --or whatever
begin
    update test_table set bigid = id
    where id between @counter and (@counter + 499999) --BETWEEN is inclusive
    set @counter = @counter + 500000
end
This should be extremely fast, because of the existing indexes on ID.
The IS NULL check really wasn't necessary anyway, and neither is my (-1) on the interval. If we duplicate some rows between calls, that's not a big deal.
Use TOP in the UPDATE statement:
UPDATE TOP (@row_limit) dbo.test_table
SET bigid = id
WHERE bigid IS NULL
You could try to use something like SET ROWCOUNT and do batch updates:
SET ROWCOUNT 5000;
UPDATE dbo.test_table
SET bigid = id
WHERE bigid IS NULL
GO
and then repeat this as many times as you need to.
This way, you're avoiding the RBAR (row-by-agonizing-row) symptoms of cursors and while loops, and yet, you don't unnecessarily fill up your transaction log.
Of course, in between runs, you'd have to do backups (especially of your log) to keep its size within reasonable limits.
Is this a one time thing? If so, just do it by ranges:
declare @counter bigint
set @counter = 500000
while @counter < 2000000000 --or whatever your max id
begin
    update test_table set bigid = id where id between (@counter - 500000) and @counter and bigid is null
    set @counter = @counter + 500000
end
I didn't run this to try it, but if you can get it to update 500k at a time I think you're moving in the right direction.
set rowcount 500000
update tt1
set bigid = (SELECT tt2.id FROM test_table tt2 WHERE tt1.id = tt2.id)
from test_table tt1
where tt1.bigid IS NULL
You can also try changing the recovery model so you don't log the transactions:
ALTER DATABASE db1
SET RECOVERY SIMPLE
GO
update test_table
set bigid = id
GO
ALTER DATABASE db1
SET RECOVERY FULL
GO
The first step, if there are any indexes, would be to drop them before the operation. This is probably what is causing the speed to degrade over time.
The other option, a little outside-the-box thinking... can you express the update in such a way that you could materialize the column values in a select? If you can do this, then you could create what amounts to a NEW table using SELECT INTO, which is a minimally logged operation (assuming in 2005 that you are set to a recovery model of SIMPLE or BULK_LOGGED). This would be pretty fast, and then you can drop the old table, rename this table to the old table name, and recreate any indexes.
select id, CAST(id as bigint) bigid into test_table_temp from test_table
drop table test_table
exec sp_rename 'test_table_temp', 'test_table'
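The prose above mentions recreating indexes after the rename. As a hedged example (the constraint name is made up, and it assumes id was the clustered primary key, which the original post doesn't state), that step might look like:
-- Assumed key column and constraint name; adjust to match the original table's definition.
ALTER TABLE test_table
ADD CONSTRAINT PK_test_table PRIMARY KEY CLUSTERED (id);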
I second the
UPDATE TOP(X) statement
I'd also suggest that, if you're in a loop, you add in some WAITFOR delay or a COMMIT between batches, to allow other processes some time to use the table if needed, rather than blocking until all the updates are completed.
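For instance, a short pause between batches (the five-second duration is arbitrary) can be added like this:
-- Inside the batch loop, after each UPDATE TOP (...) statement:
WAITFOR DELAY '00:00:05';  -- give other sessions a chance to get at the table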